ComfyUI Node: ApplyTeaCachePatchAdvanced

Class Name: ApplyTeaCachePatchAdvanced
Category: patches/speed
Author: lldacing (account age: 2416 days)
Extension: ComfyUI_Patches_ll
Last Updated: 2025-04-08
GitHub Stars: 0.1K

How to Install ComfyUI_Patches_ll

Install this extension via the ComfyUI Manager by searching for ComfyUI_Patches_ll:
  1. Click the Manager button in the main menu.
  2. Select the Custom Nodes Manager button.
  3. Enter ComfyUI_Patches_ll in the search bar.
After installation, click the Restart button to restart ComfyUI. Then, manually refresh your browser to clear the cache and access the updated list of nodes.

ApplyTeaCachePatchAdvanced Description

Accelerates supported AI models by applying the TeaCache patch, speeding up processing and exposing fine-grained control over the caching mechanism.

ApplyTeaCachePatchAdvanced:

The ApplyTeaCachePatchAdvanced node enhances the performance of specific AI models by applying the TeaCache patch, which accelerates model processing. It is meant to be used together with the nodes whose names end in ForwardOverrider, and it supports the Flux, HunYuanVideo, LTXVideo, WanVideo, and MochiVideo models. By caching intermediate results, the patch reduces computational overhead and speeds up execution, and the advanced variant exposes additional controls over how the cache is managed and applied across different stages of a run. This makes the node most useful in scenarios where processing speed is critical.
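In practice the node sits between the model (already wrapped by the matching ForwardOverrider node) and the sampler. The fragment below is only a sketch of how that might look in a ComfyUI API-format workflow, written as a Python dict; the node ids and the upstream node are placeholders, and the exact socket names should be checked against the node in your install.

```python
# Hedged sketch, not taken from the extension: a fragment of a ComfyUI API-format
# workflow expressed as a Python dict. Node ids "10" and "12" and the upstream
# node are placeholders; the input names mirror the parameters documented below.
workflow_fragment = {
    "12": {
        "class_type": "ApplyTeaCachePatchAdvanced",
        "inputs": {
            "model": ["10", 0],        # MODEL output of the upstream loader/overrider node
            "rel_l1_thresh": 0.2,      # caching sensitivity, see rel_l1_thresh below
            "cache_device": "offload_device",
        },
    }
}
# The patched MODEL output of node "12" would then feed the sampler node.
```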

ApplyTeaCachePatchAdvanced Input Parameters:

model

The model parameter represents the AI model to which the TeaCache patch will be applied. This parameter is crucial as it determines the specific model instance that will undergo performance enhancement through caching. The model must be compatible with the TeaCache patch, specifically being one of the supported types like Flux, HunYuanVideo, LTXVideo, WanVideo, or MochiVideo.

rel_l1_thresh

The rel_l1_thresh parameter is a threshold value that influences the caching mechanism's sensitivity. It determines the relative L1 norm threshold for deciding when to cache certain computations. Adjusting this value can impact the balance between computational speed and accuracy, with lower values potentially increasing caching frequency and higher values reducing it.
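As a rough illustration of that decision, here is a minimal sketch of a relative-L1 comparison; the actual patch applies this idea to the model's internal activations, and the tensor shapes, epsilon term, and function name below are purely illustrative.

```python
import torch

# Minimal sketch of a relative-L1 caching decision; illustrative only.
def should_reuse_cache(current: torch.Tensor, previous: torch.Tensor, rel_l1_thresh: float) -> bool:
    rel_change = (current - previous).abs().mean() / (previous.abs().mean() + 1e-8)
    return rel_change.item() < rel_l1_thresh

prev = torch.randn(4, 64)
curr = prev + 0.01 * torch.randn(4, 64)      # a step that barely changed
print(should_reuse_cache(curr, prev, rel_l1_thresh=0.2))  # likely True -> reuse the cache
```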

cache_device

The cache_device parameter specifies the device on which the cache will be stored. By default, it is set to "offload_device", which means the cache is offloaded to a secondary device to free up primary resources. This parameter allows users to manage device resources effectively, especially in environments with multiple processing units.
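The effect of offloading can be pictured with a small, purely illustrative snippet; the tensor and device choices below are assumptions, not the patch's actual internals.

```python
import torch

compute_device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
cache_device = torch.device("cpu")  # stand-in for "offload_device"

residual = torch.randn(1, 16, 64, 64, device=compute_device)  # stand-in for a cached value
cached = residual.to(cache_device)       # park the cache off the GPU to free VRAM
restored = cached.to(compute_device)     # bring it back when a step reuses the cache
```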

wan_coefficients

The wan_coefficients parameter is an optional setting that affects the caching behavior for WanVideo models. When set to "disabled", it may lead to instability in the initial steps of processing. This parameter provides additional control over the caching process, particularly for users working with WanVideo models.

ApplyTeaCachePatchAdvanced Output Parameters:

model

The output model is the same AI model provided as input, but with the TeaCache patch applied. This patched model is optimized for faster execution, benefiting from the caching enhancements introduced by the node. The output model retains its original functionality while gaining improved performance characteristics.

ApplyTeaCachePatchAdvanced Usage Tips:

  • Ensure that the model you are applying the patch to is one of the supported types (Flux, HunYuanVideo, LTXVideo, WanVideo, or MochiVideo) to fully benefit from the caching enhancements.
  • Adjust the rel_l1_thresh parameter to find the optimal balance between speed and accuracy for your specific use case. Lower values may increase speed but could affect precision.
  • Consider the device configuration when setting the cache_device parameter to optimize resource usage, especially in multi-device environments.

ApplyTeaCachePatchAdvanced Common Errors and Solutions:

TeaCache patch is not applied because the model is not supported.

  • Explanation: This error occurs when the model provided is not one of the supported types for the TeaCache patch.
  • Solution: Verify that the model is either Flux, HunYuanVideo, LTXVideo, WanVideo, or MochiVideo before applying the patch.

Unstable results in initial steps when wan_coefficients is disabled.

  • Explanation: Disabling wan_coefficients for WanVideo models can lead to instability in the initial processing steps.
  • Solution: Consider enabling wan_coefficients or adjusting the start_at parameter to mitigate instability in the initial steps.

ApplyTeaCachePatchAdvanced Related Nodes

See the ComfyUI_Patches_ll extension page for more related nodes.