ComfyUI Node: Parallel Device Config

Class Name

ParallelDevice

Category
utils/hardware
Author
FearL0rd (Account age: 3475 days)
Extension
ComfyUI-ParallelAnything
Last Updated
2026-02-04
GitHub Stars
0.03K

How to Install ComfyUI-ParallelAnything

Install this extension via the ComfyUI Manager by searching for ComfyUI-ParallelAnything
  1. Click the Manager button in the main menu
  2. Select the Custom Nodes Manager button
  3. Enter ComfyUI-ParallelAnything in the search bar
After installation, click the Restart button to restart ComfyUI. Then, manually refresh your browser to clear the cache and access the updated list of nodes.

Parallel Device Config Description

Distributes workloads across multiple devices to optimize AI model performance and efficiency.

Parallel Device Config:

The ParallelDevice node distributes computational workloads across multiple devices, such as GPUs, for parallel processing. It is especially useful for AI artists and developers working with large models or datasets that demand significant computational power. By splitting and managing workloads across the available devices, the node reduces processing time for model training and inference, making more complex and resource-intensive projects practical to run.

Parallel Device Config Input Parameters:

model

The model parameter represents the AI model that you wish to execute in parallel across multiple devices. This parameter is crucial as it determines the specific model that will be distributed and processed. The model should be compatible with the parallel processing setup, and it is essential to ensure that it is properly configured for multi-device execution. There are no specific minimum or maximum values for this parameter, but it should be a valid model object that can be processed by the node.

device_chain

The device_chain parameter is a list of devices that will be used for parallel processing. Each entry in the list specifies a device and the percentage of the workload it should handle. This parameter is vital for defining how the computational load is distributed across the available devices. The sum of the percentages should be greater than zero to ensure that the workload is effectively split. The devices can include various types, such as GPUs or CPUs, depending on the available hardware.
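As an illustration, the per-device percentages can be normalized into fractional shares before the workload is split. The list-of-pairs format and the function name below are assumptions for the sketch, not the extension's actual API:

```python
def normalize_device_chain(chain):
    """Convert (device, percentage) pairs into fractional workload shares.

    `chain` is assumed to look like [("cuda:0", 70), ("cuda:1", 30)];
    the real node's internal representation may differ.
    """
    total = sum(pct for _, pct in chain)
    if total <= 0:
        # Mirrors the "Total percentage is zero or negative" error below.
        raise ValueError("Total percentage is zero or negative")
    return [(dev, pct / total) for dev, pct in chain]

# normalize_device_chain([("cuda:0", 70), ("cuda:1", 30)])
# → [("cuda:0", 0.7), ("cuda:1", 0.3)]
```

Note that the percentages need not sum to exactly 100; any positive total is rescaled proportionally.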

workload_split

The workload_split parameter is a boolean that determines whether the workload should be split across the devices. When set to True, the node will distribute the workload based on the specified percentages in the device_chain. This parameter is important for optimizing the use of available resources and ensuring that each device is utilized according to its capacity. The default value is True.

auto_vram_balance

The auto_vram_balance parameter is a boolean that, when enabled, automatically balances the VRAM usage across the devices. This feature is useful for preventing any single device from becoming a bottleneck due to VRAM limitations. By balancing VRAM usage, the node can enhance performance and ensure smoother execution. The default value is False.
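One plausible way such balancing could work is to weight each device's share by its free VRAM. The helper below is a generic sketch under that assumption; in a real PyTorch setup the free-memory figures might come from `torch.cuda.mem_get_info()`, but the function accepts them from any source:

```python
def vram_balanced_shares(free_vram_bytes):
    """Derive workload shares proportional to each device's free VRAM.

    `free_vram_bytes` maps a device name to its free VRAM in bytes,
    e.g. {"cuda:0": 12 * 2**30, "cuda:1": 4 * 2**30}.
    """
    total = sum(free_vram_bytes.values())
    if total <= 0:
        raise ValueError("No free VRAM reported on any device")
    return {dev: free / total for dev, free in free_vram_bytes.items()}

# A device with 12 GiB free gets 3x the share of one with 4 GiB free:
# vram_balanced_shares({"cuda:0": 12 * 2**30, "cuda:1": 4 * 2**30})
# → {"cuda:0": 0.75, "cuda:1": 0.25}
```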

purge_cache

The purge_cache parameter is a boolean that indicates whether the cache should be purged before setting up the parallel execution. Purging the cache can help free up memory and improve performance, especially when dealing with large models or datasets. The default value is True.
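A cache purge in a PyTorch-based environment like ComfyUI typically combines Python garbage collection with releasing cached GPU allocations. The sketch below shows that general pattern; it is not the extension's actual implementation:

```python
import gc

def purge_cache():
    """Free Python-level garbage and, if PyTorch with CUDA is available,
    release cached GPU allocations back to the driver."""
    gc.collect()
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
    except ImportError:
        pass  # torch not installed; nothing GPU-side to purge
```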

purge_models

The purge_models parameter is a boolean that specifies whether models should be purged from memory before setting up the parallel execution. This can be beneficial for freeing up resources and ensuring that the system is optimized for the new workload. The default value is False.
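Taken together, the parameters above might be declared in ComfyUI's node API roughly as follows. This skeleton is a hypothetical sketch mirroring the documented inputs and defaults; the actual extension's class, string format for `device_chain`, and method names may differ:

```python
class ParallelDeviceSketch:
    """Illustrative node skeleton; not the extension's real code."""

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "model": ("MODEL",),
                # Device:percentage pairs; the string format is an assumption.
                "device_chain": ("STRING", {"default": "cuda:0:50,cuda:1:50"}),
                "workload_split": ("BOOLEAN", {"default": True}),
                "auto_vram_balance": ("BOOLEAN", {"default": False}),
                "purge_cache": ("BOOLEAN", {"default": True}),
                "purge_models": ("BOOLEAN", {"default": False}),
            }
        }

    RETURN_TYPES = ("MODEL",)
    FUNCTION = "configure"
    CATEGORY = "utils/hardware"

    def configure(self, model, device_chain, workload_split,
                  auto_vram_balance, purge_cache, purge_models):
        # A real implementation would move model submodules across the
        # listed devices here; this sketch passes the model through.
        return (model,)
```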

Parallel Device Config Output Parameters:

model

The model output parameter represents the AI model after it has been configured for parallel execution. This output is crucial as it indicates that the model is now ready to be processed across the specified devices. The model will have been adjusted to accommodate the parallel setup, ensuring that it can efficiently utilize the available computational resources.

Parallel Device Config Usage Tips:

  • Ensure that the device_chain is correctly configured with valid devices and appropriate workload percentages to optimize performance.
  • Consider enabling auto_vram_balance if you encounter VRAM limitations, as this can help distribute memory usage more evenly across devices.
  • Regularly purge the cache and models if you experience performance issues, as this can free up resources and improve execution speed.

Parallel Device Config Common Errors and Solutions:

Invalid device: <device_name>

  • Explanation: This error occurs when a specified device in the device_chain is not recognized or is invalid.
  • Solution: Verify that all device names in the device_chain are correct and correspond to available hardware on your system.

Total percentage is zero or negative

  • Explanation: This error indicates that the sum of the workload percentages in the device_chain is zero or negative, which is not allowed.
  • Solution: Ensure that the percentages in the device_chain add up to a positive value to enable proper workload distribution.

Model is None or device_chain is empty

  • Explanation: This error occurs when the model parameter is not provided or the device_chain is empty, preventing parallel execution.
  • Solution: Provide a valid model and ensure that the device_chain contains at least one device with a positive workload percentage.
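The three error conditions above can be summarized as a single validation pass. The function below is a sketch that raises the same messages under the same conditions; the pair-list format for `device_chain` is an assumption for illustration:

```python
def validate_parallel_config(model, device_chain, known_devices):
    """Raise the documented errors for an invalid parallel configuration.

    `device_chain` is assumed to be a list of (device, percentage) pairs,
    and `known_devices` a set of device names present on the system.
    """
    if model is None or not device_chain:
        raise ValueError("Model is None or device_chain is empty")
    for dev, _ in device_chain:
        if dev not in known_devices:
            raise ValueError(f"Invalid device: {dev}")
    if sum(pct for _, pct in device_chain) <= 0:
        raise ValueError("Total percentage is zero or negative")
```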

Parallel Device Config Related Nodes

Go back to the extension to check out more related nodes.
ComfyUI-ParallelAnything
Copyright 2025 RunComfy. All Rights Reserved.