
ComfyUI Extension: Comfy-WaveSpeed

Repo Name: Comfy-WaveSpeed
Author: chengzeyi (Account age: 3417 days)
Nodes: 7
Last Updated: 2026-03-26
GitHub Stars: 1.23K

How to Install Comfy-WaveSpeed

Install this extension via the ComfyUI Manager by searching for Comfy-WaveSpeed:
  • 1. Click the Manager button in the main menu.
  • 2. Select the Custom Nodes Manager button.
  • 3. Enter Comfy-WaveSpeed in the search bar.
  • 4. After installation, click the Restart button to restart ComfyUI, then manually refresh your browser to clear the cache and load the updated list of nodes.

Visit ComfyUI Online for ready-to-use ComfyUI environment

  • Free trial available
  • 16GB VRAM to 80GB VRAM GPU machines
  • 400+ preloaded models/nodes
  • Freedom to upload custom models/nodes
  • 200+ ready-to-run workflows
  • 100% private workspace with up to 200GB storage
  • Dedicated Support

Run ComfyUI Online

Comfy-WaveSpeed Description

Comfy-WaveSpeed enhances ComfyUI by optimizing workflow speed and efficiency. It streamlines processes, reduces latency, and improves user experience, making ComfyUI more responsive and effective.

Comfy-WaveSpeed Introduction

Comfy-WaveSpeed is an innovative extension designed to optimize the performance of ComfyUI, a popular interface for AI-driven creative workflows. This extension focuses on enhancing the speed and efficiency of model inference, making it particularly beneficial for AI artists who work with complex models and large datasets. By implementing advanced caching techniques and leveraging enhanced compilation methods, Comfy-WaveSpeed significantly reduces computation time, allowing artists to focus more on their creative process rather than waiting for models to process data.

How Comfy-WaveSpeed Works

At its core, Comfy-WaveSpeed employs a technique known as "First Block Cache" (FBCache), inspired by caching algorithms like TeaCache. This method uses the output of the first transformer block in a model as a cache indicator. If the difference between the current and previous outputs is minimal, the extension reuses the previous output, skipping the computation of subsequent blocks. This approach can double the speed of model inference while maintaining high accuracy. Additionally, Comfy-WaveSpeed enhances the torch.compile function, optimizing the model's execution graph to further accelerate processing.
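As a rough illustration of the idea only (not the extension's actual implementation), the cache decision can be sketched in plain Python. Tensors are simplified to flat lists of floats, and `residual_diff_threshold` stands in for the node parameter of the same name; the helper name itself is hypothetical:

```python
def should_reuse_cache(first_block_out, prev_out, residual_diff_threshold=0.12):
    """Decide whether the remaining transformer blocks can be skipped.

    Illustrative sketch of the First Block Cache idea: compare the
    first block's output with the one cached from the previous step;
    if the mean relative difference is below the threshold, the
    previously computed final output can be reused.
    """
    if prev_out is None:
        # Nothing cached yet (first step): must compute everything.
        return False
    diff = sum(abs(a - b) for a, b in zip(first_block_out, prev_out))
    norm = sum(abs(b) for b in prev_out)
    return norm > 0 and diff / norm < residual_diff_threshold
```

Lowering the threshold favors accuracy (the cache is reused less often); raising it favors speed at the cost of potential quality loss.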

Comfy-WaveSpeed Features

  • Dynamic Caching (First Block Cache): This feature allows the reuse of computation results from the first transformer block, significantly speeding up the process. By adjusting the residual_diff_threshold, users can balance between speed and accuracy.
  • Enhanced torch.compile: This feature optimizes the model's execution graph, reducing unnecessary computations and improving overall performance. It supports various modes like max-autotune to tailor the compilation process to specific needs.
  • Multi-GPU Inference (Upcoming): This feature will enable the distribution of computations across multiple GPUs, further enhancing processing speed and efficiency.
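To make the speed/accuracy trade-off of `residual_diff_threshold` concrete, here is a small hypothetical simulation. Scalar values stand in for first-block activations across denoising steps; none of these names come from the extension's API:

```python
def count_cache_hits(outputs, residual_diff_threshold):
    """Count how many steps would reuse the cached result, given a
    sequence of scalar stand-ins for first-block outputs."""
    hits = 0
    prev = None
    for out in outputs:
        if prev is not None and abs(out - prev) / max(abs(prev), 1e-8) < residual_diff_threshold:
            hits += 1  # output barely changed: reuse cached result
        else:
            prev = out  # cache invalidated: recompute and store
    return hits
```

With a sequence like `[1.0, 1.02, 1.03, 2.0, 2.01]`, a threshold of 0.05 skips three of five steps, while a very tight threshold of 0.001 skips none, illustrating why tuning this value matters.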

Comfy-WaveSpeed Models

Comfy-WaveSpeed supports a variety of models, each benefiting from the extension's optimization techniques:

  • FLUX: Ideal for image generation tasks, this model can achieve significant speedups with FBCache.
  • LTXV and HunyuanVideo: These models are optimized for video processing, benefiting from both FBCache and enhanced compilation.
  • SD3.5 and SDXL: These models are used for high-resolution image generation, where speed and accuracy are crucial.

What's New with Comfy-WaveSpeed

The latest updates to Comfy-WaveSpeed include the introduction of the First Block Cache and enhancements to the torch.compile function. These updates are designed to provide AI artists with faster and more efficient tools, reducing the time spent on model inference and allowing more focus on creative tasks.

Troubleshooting Comfy-WaveSpeed

If you encounter issues while using Comfy-WaveSpeed, here are some common solutions:

  • Compilation Issues: If the Compile Model+ node causes problems, try removing it and using only the Apply First Block Cache node. This can still provide significant speed improvements.
  • Frequent Recompilation: Enable the dynamic option in the Compile Model+ node or launch ComfyUI with the TORCH_LOGS=recompiles_verbose environment variable to diagnose recompilation issues.
  • Compatibility Issues: Note that the SDXL First Block Cache is incompatible with the FreeU Advanced node pack.
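For the recompilation diagnosis above, one way to set the logging variable when launching ComfyUI from a Python wrapper is sketched below (the `main.py` entry-point path is an assumption; running `TORCH_LOGS=recompiles_verbose python main.py` directly from a shell works equally well):

```python
import os
import subprocess

# Copy the current environment and add the torch logging flag that
# makes recompilation causes visible in the console output.
env = dict(os.environ, TORCH_LOGS="recompiles_verbose")

# Launch ComfyUI with the diagnostic flag enabled. The entry-point
# path ("main.py") is an assumption; adjust it to your install.
# subprocess.run(["python", "main.py"], env=env)
```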

Learn More about Comfy-WaveSpeed

For further assistance and resources, consider exploring the following:

  • Comfy Registry for additional nodes and extensions.
  • Discord Server for community support and discussions.
  • ParaAttention Documentation for detailed technical insights and advanced usage scenarios.

By leveraging these resources, AI artists can maximize the potential of Comfy-WaveSpeed in their creative workflows.

Copyright 2025 RunComfy. All Rights Reserved.

RunComfy is the premier ComfyUI platform, offering ComfyUI online environment and services, along with ComfyUI workflows featuring stunning visuals. RunComfy also provides AI Models, enabling artists to harness the latest AI tools to create incredible art.
