ComfyUI Extension: ComfyUI-TorchCompileSpeed

Repo Name

ComfyUI-TorchCompileSpeed

Author
eddyhhlure1Eddy (Account age: 397 days)
Nodes
2
Last Updated
2025-10-11
Github Stars
0.02K

How to Install ComfyUI-TorchCompileSpeed

Install this extension via the ComfyUI Manager by searching for ComfyUI-TorchCompileSpeed
  • 1. Click the Manager button in the main menu
  • 2. Select Custom Nodes Manager button
  • 3. Enter ComfyUI-TorchCompileSpeed in the search bar, then install it from the search results
After installation, click the Restart button to restart ComfyUI. Then, manually refresh your browser to clear the cache and access the updated list of nodes.

ComfyUI-TorchCompileSpeed Description

ComfyUI-TorchCompileSpeed is a standalone optimization node for ComfyUI, utilizing torch.compile with presets designed to enhance processing speed.

ComfyUI-TorchCompileSpeed Introduction

ComfyUI-TorchCompileSpeed is an extension designed to enhance the performance of AI models by optimizing how they are compiled and executed. It is particularly useful for AI artists who work with complex models and need efficient processing for faster results. By integrating with the WanVideo Cython Model Loader, ComfyUI-TorchCompileSpeed boosts the performance of PyTorch models without altering the loader's original source code. It does this by improving the compilation cache hit rate and optimizing the compilation process, making it a practical tool for streamlining workflows and reducing waiting times.

How ComfyUI-TorchCompileSpeed Works

At its core, ComfyUI-TorchCompileSpeed leverages advanced compilation techniques to enhance the execution speed of AI models. It uses a method called torch.compile, which is a part of the PyTorch library, to optimize the model's execution. Think of it as a way to pre-process your model so that it runs more efficiently, much like how a chef might prepare ingredients in advance to cook a meal faster.

The extension uses the "inductor" backend which, combined with dynamic compilation, adapts better to different model shapes and configurations. This means that once a model is compiled, the result can be reused more effectively, saving time on subsequent runs. Additionally, by disabling CUDA Graphs, the extension avoids the overhead of capturing execution graphs, further speeding up the process.
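In plain torch.compile terms, the behavior described above corresponds roughly to the following sketch. The preset name and helper function are hypothetical illustrations, but `backend="inductor"`, `mode="max-autotune-no-cudagraphs"`, and `dynamic=True` are standard torch.compile options:

```python
def compile_kwargs(preset: str = "speed") -> dict:
    """Map a hypothetical speed preset to torch.compile keyword arguments.

    "max-autotune-no-cudagraphs" enables maximum autotuning while skipping
    CUDA Graph capture, and dynamic=True lets the compiled code adapt to
    varying input shapes instead of recompiling for each new shape.
    """
    presets = {
        "speed": {
            "backend": "inductor",
            "mode": "max-autotune-no-cudagraphs",
            "dynamic": True,
        },
        "default": {
            "backend": "inductor",
            "mode": "default",
            "dynamic": False,
        },
    }
    return presets[preset]

# With a loaded PyTorch model, the extension would apply something like:
#   model = torch.compile(model, **compile_kwargs("speed"))
```

This is a sketch of the idea, not the extension's actual code; the real node exposes these choices as widget settings.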

ComfyUI-TorchCompileSpeed Features

  • Speed Mode: This is the recommended setting for most users. It uses inductor mode with maximum autotuning and disables CUDA Graphs to minimize overhead. This mode is ideal for achieving the fastest possible execution times.
  • Smarter Reuse: The extension includes an option to reuse compiled results for the same model and configuration, reducing the need for recompilation and saving time.
  • Experimental PTX Assist: This feature helps in pre-warming the PTX/kernel cache, which can significantly reduce the time taken for the first run of a model. It includes options for fast-math operations and setting a cache directory for cross-session reuse.
  • Integration with WanVideo Cython Model Loader: The extension is designed to work seamlessly with this loader, allowing for easy integration and improved performance without modifying the original loader code.
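The "Smarter Reuse" feature can be sketched as a cache keyed by model identity plus compile configuration, so the same model is never recompiled for the same settings. The helper names here are hypothetical, illustrating the idea rather than the extension's internals:

```python
# Cache of compiled results, keyed by (model identity, configuration).
_compiled_cache = {}

def get_compiled(model, config_key, compile_fn):
    """Return a cached compiled model for the same (model, config) pair,
    invoking the expensive compile step only on the first request."""
    key = (id(model), config_key)
    if key not in _compiled_cache:
        _compiled_cache[key] = compile_fn(model)
    return _compiled_cache[key]
```

On a second run with an unchanged model and configuration, the cached result is returned immediately, which is where the time savings come from.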

ComfyUI-TorchCompileSpeed Models

The extension does not introduce new models but rather optimizes the compilation and execution of existing PyTorch models. It provides various settings that can be adjusted to suit different needs, such as enabling or disabling dynamic compilation, setting cache size limits, and choosing between different compilation modes.

What's New with ComfyUI-TorchCompileSpeed

Version 1.1.0

  • Introduced experimental PTX assist, fast-math options, and warmup runs to improve cache performance.
  • Added controls for reusing compiled results and compiling only transformer blocks.
  • Maintained compatibility with the WanVideo Cython Model Loader.

Version 1.0.0

  • Launched with core features including speed mode and integration with torch.compile.

Troubleshooting ComfyUI-TorchCompileSpeed

Here are some common issues you might encounter and how to resolve them:

  • Slow First Run: The initial run includes compilation and autotuning, which can take time. Subsequent runs should be significantly faster.
  • Missing Triton Operations: If triton.ops is unavailable, the extension will fall back to using torch.compile for warmup, ensuring that the PTX/kernel cache is still generated.
  • Connection Issues with WanVideo Loader: Ensure that the output type from the settings is set to WANCOMPILEARGS, as this is required for compatibility.
  • Out of Memory (OOM) Errors: Consider lowering dynamo_cache_size_limit or keeping compile_transformer_blocks_only set to True to reduce memory usage.

Learn More about ComfyUI-TorchCompileSpeed

To further explore the capabilities of ComfyUI-TorchCompileSpeed, you can refer to the following resources:

  • PyTorch Documentation (https://pytorch.org/docs/stable/index.html): For a deeper understanding of the torch.compile function and other related features.
  • Community Forums: Engage with other AI artists and developers to share experiences and solutions.
  • Tutorials and Guides: Look for online tutorials that demonstrate how to integrate and use ComfyUI-TorchCompileSpeed effectively in your projects.

By utilizing these resources, you can maximize the benefits of ComfyUI-TorchCompileSpeed and enhance your AI art projects with improved performance and efficiency.

RunComfy
Copyright 2025 RunComfy. All Rights Reserved.

RunComfy is the premier ComfyUI platform, offering ComfyUI online environment and services, along with ComfyUI workflows featuring stunning visuals. RunComfy also provides AI Models, enabling artists to harness the latest AI tools to create incredible art.