ComfyUI-MultiGPU enhances ComfyUI by enabling CUDA device selection for loader nodes, allowing model components such as the UNet, CLIP, or VAE to be assigned to specific GPUs. It supports multi-GPU workflows for SDXL, FLUX, LTXVideo, and Hunyuan Video.
ComfyUI-MultiGPU is an innovative extension designed to optimize the use of your computer's graphics processing units (GPUs) and central processing unit (CPU) when working with AI models. This extension is particularly beneficial for AI artists who work with complex models that require significant computational resources. By intelligently managing memory and distributing workloads across multiple GPUs or between a GPU and the CPU, ComfyUI-MultiGPU helps free up your primary GPU's VRAM (Video Random Access Memory). This allows you to maximize the available resources for the actual computation tasks that matter most, such as processing in the latent space of AI models.
At its core, ComfyUI-MultiGPU enhances memory management rather than parallel processing. This means that while the steps in your workflow still execute one after the other, the extension allows different components of your models to be loaded onto different devices. For example, parts of a model can be offloaded to system RAM or a secondary GPU, freeing up your main GPU for more intensive tasks. This is particularly useful when working with large models that might otherwise exceed the VRAM capacity of a single GPU.
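The placement idea can be illustrated with a small sketch. This is hypothetical code, not the extension's actual implementation: it greedily assigns each model component to the first device in a priority list that still has room, falling back to system RAM when the GPUs are full.

```python
# Conceptual sketch of multi-device placement (hypothetical; not
# ComfyUI-MultiGPU's real code). Components spill from the main GPU
# to a secondary GPU, and finally to system RAM ("cpu").

def place_components(components, device_free, priority):
    """Assign each (name, size_bytes) component to the first device in
    `priority` that still has room for it."""
    free = dict(device_free)
    placement = {}
    for name, size in components:
        for device in priority:
            if free[device] >= size:
                placement[name] = device
                free[device] -= size
                break
        else:
            raise MemoryError(f"no device can hold {name} ({size} bytes)")
    return placement

# Example: an 8 GB main GPU holds the UNet and CLIP; the VAE spills
# to the secondary GPU.
components = [("unet", 6 * 1024**3), ("clip", 2 * 1024**3), ("vae", 1 * 1024**3)]
device_free = {"cuda:0": 8 * 1024**3, "cuda:1": 4 * 1024**3, "cpu": 32 * 1024**3}
placement = place_components(components, device_free, ["cuda:0", "cuda:1", "cpu"])
print(placement)
```

The real extension makes this decision per loader node rather than globally, but the effect is the same: components that do not fit on the main GPU land somewhere else instead of causing an out-of-memory error.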
Imagine your computer as a kitchen where cooking (computation) happens. If your main GPU is the chef, ComfyUI-MultiGPU acts like a smart kitchen assistant, ensuring that the chef has enough space and resources to work efficiently by moving ingredients (model components) to different parts of the kitchen (other GPUs or RAM) as needed.
ComfyUI-MultiGPU supports a variety of models, including GGUF-quantized models, which are optimized for reduced VRAM usage. This makes it possible to run complex models on systems with limited resources. The extension automatically creates MultiGPU versions of loader nodes, allowing you to specify which GPU to use for each model component.
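To show what a MultiGPU loader variant adds over a stock loader, here is a simplified, self-contained sketch of a ComfyUI-style node class with a `device` dropdown. The class and input names are illustrative; the real extension wraps the existing loader nodes automatically rather than defining new ones by hand.

```python
# Simplified sketch of a ComfyUI-style loader node with a device
# selector (hypothetical; ComfyUI-MultiGPU generates such wrappers
# automatically from the stock loaders).

class CheckpointLoaderMultiGPU:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "ckpt_name": ("STRING", {"default": "model.safetensors"}),
                # The extra input a MultiGPU wrapper adds: which device
                # the loaded weights should live on.
                "device": (["cuda:0", "cuda:1", "cpu"],),
            }
        }

    RETURN_TYPES = ("MODEL",)
    FUNCTION = "load"
    CATEGORY = "loaders/multigpu"

    def load(self, ckpt_name, device):
        # A real node would call ComfyUI's loading code and then move
        # the model to `device`; here we only record the choice.
        return ({"ckpt": ckpt_name, "device": device},)
```

In an actual workflow you would simply pick the MultiGPU version of a loader from the node menu and choose the target device from its dropdown.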
The latest update, DisTorch 2.0, introduces a simplified Virtual VRAM control system. This new feature allows you to offload model layers from your GPU with minimal configuration. You simply set the amount of VRAM you want to free up, and DisTorch takes care of the rest. This update makes it easier than ever to manage your system's resources and run larger models efficiently.
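The "set how much VRAM to free" idea can be sketched as a small calculation. This helper is hypothetical; DisTorch's actual layer-selection policy lives inside the extension and may differ.

```python
def layers_to_offload(layer_sizes, free_bytes_target):
    """Pick trailing layers to move off the main GPU until at least
    `free_bytes_target` bytes have been freed. Returns the sorted
    indices of the offloaded layers and the total bytes freed.
    (Offloading from the end of the network is just one plausible
    heuristic; the real DisTorch policy may differ.)"""
    freed = 0
    offloaded = []
    for idx in range(len(layer_sizes) - 1, -1, -1):
        if freed >= free_bytes_target:
            break
        offloaded.append(idx)
        freed += layer_sizes[idx]
    return sorted(offloaded), freed

# Example: free ~3 GB from a model with ten 512 MB layers.
sizes = [512 * 1024**2] * 10
idxs, freed = layers_to_offload(sizes, 3 * 1024**3)
print(idxs, freed)
```

From the user's side, DisTorch 2.0 reduces this whole decision to a single number: the amount of Virtual VRAM you want back on your main GPU.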
If you encounter issues while using ComfyUI-MultiGPU, the project's documentation covers common problems and their solutions.
To further explore the capabilities of ComfyUI-MultiGPU, you can access additional resources such as tutorials and community forums. These platforms provide valuable insights and support from other AI artists and developers who use the extension. Engaging with the community can help you discover new ways to optimize your workflows and make the most of your computational resources.
RunComfy is the premier ComfyUI platform, offering a ComfyUI online environment and services, along with ComfyUI workflows featuring stunning visuals. RunComfy also provides AI Playground, enabling artists to harness the latest AI tools to create incredible art.