ComfyUI > Nodes > ComfyUI-HunyuanVideoWrapper > HunyuanVideo Torch Compile Settings

ComfyUI Node: HunyuanVideo Torch Compile Settings

Class Name

HyVideoTorchCompileSettings

Category
HunyuanVideoWrapper
Author
kijai (Account age: 2506 days)
Extension
ComfyUI-HunyuanVideoWrapper
Last Updated
2025-05-12
Github Stars
2.4K

How to Install ComfyUI-HunyuanVideoWrapper

Install this extension via the ComfyUI Manager by searching for ComfyUI-HunyuanVideoWrapper
  1. Click the Manager button in the main menu
  2. Select the Custom Nodes Manager button
  3. Enter ComfyUI-HunyuanVideoWrapper in the search bar
After installation, click the Restart button to restart ComfyUI. Then, manually refresh your browser to clear the cache and access the updated list of nodes.

Visit ComfyUI Online for a ready-to-use ComfyUI environment

  • Free trial available
  • 16GB VRAM to 80GB VRAM GPU machines
  • 400+ preloaded models/nodes
  • Freedom to upload custom models/nodes
  • 200+ ready-to-run workflows
  • 100% private workspace with up to 200GB storage
  • Dedicated Support

Run ComfyUI Online

HunyuanVideo Torch Compile Settings Description

Configure torch.compile settings to optimize the performance of video processing models, improving efficiency and speed.

HunyuanVideo Torch Compile Settings:

The HyVideoTorchCompileSettings node optimizes the performance of video processing models by configuring torch.compile settings. It is particularly useful when connected to a model loader, which then attempts to compile selected layers of the model using the specified settings. The primary goal of this node is to enhance the efficiency and speed of model execution by leveraging advanced compilation techniques. It requires Triton, and PyTorch 2.5.0 is recommended for optimal performance. By fine-tuning the compilation settings, you can achieve significant improvements in processing time, making this an essential tool for AI artists working with complex video models.

HunyuanVideo Torch Compile Settings Input Parameters:

backend

The backend parameter specifies the compilation backend to be used for the model. It determines how the model layers are compiled and optimized. Common options include "inductor" and "cudagraphs", each offering different performance characteristics. Choosing the right backend can significantly impact the speed and efficiency of the model execution.
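As a minimal sketch of how a backend choice might be validated before being handed to torch.compile (the set of names and the fallback behavior here are assumptions for illustration, not the wrapper's actual logic):

```python
# Backends mentioned above; real availability depends on your PyTorch build.
KNOWN_BACKENDS = {"inductor", "cudagraphs"}

def pick_backend(requested: str, default: str = "inductor") -> str:
    """Return the requested backend if recognized, else fall back to a default."""
    if requested in KNOWN_BACKENDS:
        return requested
    print(f'Unsupported backend "{requested}", falling back to "{default}"')
    return default
```

The validated string would then be passed as `torch.compile(model, backend=...)`.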

fullgraph

The fullgraph parameter indicates whether the entire computation graph should be compiled. Enabling this option can lead to more comprehensive optimizations but may increase compilation time. It is useful for models where full graph optimization can yield better performance.

mode

The mode parameter defines the compilation mode, which can affect the level of optimization applied. Different modes may prioritize speed, memory usage, or a balance of both. Selecting the appropriate mode can help tailor the compilation process to your specific needs.
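For reference, torch.compile's documented modes and their rough trade-offs can be summarized as follows (descriptions paraphrased from the PyTorch documentation; this lookup table is illustrative, not part of the node):

```python
# Paraphrased summary of torch.compile's documented modes.
COMPILE_MODES = {
    "default": "balanced compile time and runtime performance",
    "reduce-overhead": "uses CUDA graphs to cut per-call Python overhead",
    "max-autotune": "longest compile time, searches for the fastest kernels",
    "max-autotune-no-cudagraphs": "max-autotune without CUDA graphs",
}

def describe_mode(mode: str) -> str:
    """Look up a short description of a torch.compile mode."""
    return COMPILE_MODES.get(mode, "unknown mode")
```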

dynamic

The dynamic parameter controls whether dynamic shapes are supported during compilation. Enabling dynamic shapes allows for more flexible model execution but may reduce the level of optimization achievable. This is useful for models that need to handle varying input sizes.

dynamo_cache_size_limit

The dynamo_cache_size_limit parameter sets a limit on the cache size used during the compilation process. This can help manage memory usage and prevent excessive resource consumption during model execution.

compile_single_blocks

The compile_single_blocks parameter specifies whether single blocks of the model should be compiled. This can be useful for targeting specific parts of the model for optimization, potentially improving execution speed for those sections.

compile_double_blocks

The compile_double_blocks parameter indicates whether double blocks of the model should be compiled. Similar to single blocks, this allows for targeted optimization of specific model components, which can enhance overall performance.

compile_txt_in

The compile_txt_in parameter determines if text input layers should be compiled. This is particularly relevant for models that process text data, as it can lead to faster text processing and improved model efficiency.

compile_vector_in

The compile_vector_in parameter specifies whether vector input layers should be compiled. Compiling these layers can optimize the handling of vector data, resulting in quicker processing times.

compile_final_layer

The compile_final_layer parameter indicates whether the final layer of the model should be compiled. This can be beneficial for models where the final layer is a performance bottleneck, as it can lead to faster output generation.
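Taken together, the compile_* flags select which submodules get wrapped. The selection logic might look roughly like the sketch below, with a stand-in compile function so it runs without torch; the attribute names (single_blocks, double_blocks, txt_in, vector_in, final_layer) mirror the parameter names above and are assumptions about the model layout:

```python
from types import SimpleNamespace

def apply_compile_flags(model, settings, compile_fn):
    """Wrap only the submodules whose compile_* flag is enabled.

    compile_fn stands in for something like
    functools.partial(torch.compile, **compile_args).
    """
    if settings["compile_single_blocks"]:
        model.single_blocks = [compile_fn(b) for b in model.single_blocks]
    if settings["compile_double_blocks"]:
        model.double_blocks = [compile_fn(b) for b in model.double_blocks]
    if settings["compile_txt_in"]:
        model.txt_in = compile_fn(model.txt_in)
    if settings["compile_vector_in"]:
        model.vector_in = compile_fn(model.vector_in)
    if settings["compile_final_layer"]:
        model.final_layer = compile_fn(model.final_layer)
    return model

# Toy model and a tagging compile_fn to show which parts get wrapped.
model = SimpleNamespace(
    single_blocks=["s0", "s1"], double_blocks=["d0"],
    txt_in="txt", vector_in="vec", final_layer="head",
)
flags = {"compile_single_blocks": True, "compile_double_blocks": False,
         "compile_txt_in": True, "compile_vector_in": False,
         "compile_final_layer": True}
model = apply_compile_flags(model, flags, lambda m: f"compiled({m})")
```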

HunyuanVideo Torch Compile Settings Output Parameters:

compile_args

The compile_args output parameter is a dictionary containing all the compilation settings specified by the input parameters. This output provides a comprehensive overview of the configuration used for compiling the model, allowing you to verify and adjust settings as needed for optimal performance.
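The exact keys and defaults are up to the wrapper; a plausible sketch of how the node's inputs could be collected into this dictionary (key names mirror the input parameters above, and the default values shown are assumptions for illustration):

```python
def make_compile_args(backend="inductor", fullgraph=False, mode="default",
                      dynamic=False, dynamo_cache_size_limit=64,
                      compile_single_blocks=True, compile_double_blocks=True,
                      compile_txt_in=False, compile_vector_in=False,
                      compile_final_layer=False):
    """Collect the node's inputs into a single settings dictionary."""
    return {
        "backend": backend,
        "fullgraph": fullgraph,
        "mode": mode,
        "dynamic": dynamic,
        "dynamo_cache_size_limit": dynamo_cache_size_limit,
        "compile_single_blocks": compile_single_blocks,
        "compile_double_blocks": compile_double_blocks,
        "compile_txt_in": compile_txt_in,
        "compile_vector_in": compile_vector_in,
        "compile_final_layer": compile_final_layer,
    }

args = make_compile_args(mode="max-autotune", fullgraph=True)
```

Downstream, a model loader would read these keys to decide how to call torch.compile.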

HunyuanVideo Torch Compile Settings Usage Tips:

  • Ensure that you have Triton installed and are using PyTorch version 2.5.0 to take full advantage of the node's capabilities.
  • Experiment with different backend options to find the one that offers the best performance for your specific model and hardware setup.
  • Use the fullgraph option for models that can benefit from comprehensive graph-level optimizations, but be mindful of the increased compilation time.

HunyuanVideo Torch Compile Settings Common Errors and Solutions:

"Unsupported backend selected"

  • Explanation: The chosen backend is not supported by the current setup.
  • Solution: Verify that the backend is correctly specified and supported by your PyTorch installation. Consider using "inductor" or "cudagraphs" as alternatives.

"Compilation failed due to dynamic shape constraints"

  • Explanation: The model's dynamic shapes are not compatible with the current compilation settings.
  • Solution: Disable the dynamic parameter if dynamic shapes are not necessary, or adjust the model to ensure compatibility with dynamic shape compilation.

"Exceeded dynamo cache size limit"

  • Explanation: The cache size limit set by dynamo_cache_size_limit has been exceeded during compilation.
  • Solution: Increase the dynamo_cache_size_limit to accommodate the model's requirements, or optimize the model to reduce cache usage.

HunyuanVideo Torch Compile Settings Related Nodes

Go back to the extension to check out more related nodes.
ComfyUI-HunyuanVideoWrapper
RunComfy
Copyright 2025 RunComfy. All Rights Reserved.

RunComfy is the premier ComfyUI platform, offering ComfyUI online environment and services, along with ComfyUI workflows featuring stunning visuals. RunComfy also provides AI Playground, enabling artists to harness the latest AI tools to create incredible art.