Parallel Device List (1-4x):
The ParallelDeviceList node manages multiple computing devices for parallel processing tasks. It is particularly useful for AI artists and developers who want to leverage multiple GPUs or other processing units to improve the performance and efficiency of their computational workflows. By providing a list of available devices, including CPUs, GPUs, and other specialized hardware, the node lets you distribute workloads across devices, optimizing resource usage and reducing processing time. It is well suited to tasks that demand high computational power, such as rendering complex AI models or processing large datasets, because it enables true multi-device parallelism and helps ensure that each device is used to its full potential.
Parallel Device List (1-4x) Input Parameters:
device_chain
The device_chain parameter is a list of devices to use for parallel processing. Each entry specifies a device and the percentage of the workload it should handle, giving you fine-grained control over how work is distributed across the available hardware. The percentages across all devices should sum to 100% so that the entire workload is accounted for. This parameter has no predefined minimum or maximum value; configure it to match the capabilities and availability of your hardware setup.
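As an illustration of the idea, here is a minimal Python sketch of a device_chain expressed as (device, percentage) pairs, with a hypothetical helper (validate_chain is not part of the node) that checks the split covers exactly 100% of the workload:

```python
# Hypothetical example of a device_chain: each entry pairs a device name
# with the percentage of the workload that device should handle.
device_chain = [
    ("cuda:0", 60),
    ("cuda:1", 30),
    ("cpu", 10),
]

def validate_chain(chain):
    """Return True if the workload percentages sum to exactly 100."""
    total = sum(pct for _, pct in chain)
    return total == 100

validate_chain(device_chain)  # True
```

A chain such as [("cuda:0", 50)] would fail this check, since half the workload would be unassigned.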
workload_split
The workload_split parameter is a boolean flag that determines whether the workload should be split across multiple devices. When set to True, the node distributes the workload according to the specified device_chain and its percentages. This parameter is essential for enabling parallel processing, as it dictates whether the node engages multiple devices or falls back to a single device if the workload is too small. The default value is True, allowing automatic workload distribution unless explicitly disabled.
auto_vram_balance
The auto_vram_balance parameter is a boolean flag that, when enabled, allows the node to automatically balance the workload based on the available VRAM of each device. This feature is particularly useful for optimizing performance in environments with heterogeneous device capabilities, as it ensures that each device is assigned a workload proportional to its memory capacity. The default value is False, meaning that VRAM balancing is not performed unless specifically requested.
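To make the VRAM-proportional idea concrete, here is a small sketch (balance_by_vram is an illustrative helper, not the node's actual code) that assigns each device a share of the workload proportional to its memory capacity:

```python
def balance_by_vram(devices, vram_mb):
    """Assign each device a workload percentage proportional to its VRAM (in MB)."""
    total = sum(vram_mb[d] for d in devices)
    return {d: round(100 * vram_mb[d] / total) for d in devices}

# A 24 GB card paired with an 8 GB card splits the work 75/25.
balance_by_vram(["cuda:0", "cuda:1"], {"cuda:0": 24576, "cuda:1": 8192})
# {"cuda:0": 75, "cuda:1": 25}
```

This is the kind of split auto_vram_balance would produce automatically, sparing you from hand-tuning the device_chain percentages on heterogeneous hardware.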
purge_cache
The purge_cache parameter is a boolean flag that indicates whether the node should clear the cache before executing the parallel processing task. This is useful for freeing up memory resources and ensuring that the devices are in a clean state before starting a new task. The default value is True, which helps prevent memory-related issues by clearing any residual data from previous operations.
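A cache purge of this kind typically amounts to collecting Python garbage and releasing cached GPU memory. The sketch below shows one plausible way to do it, assuming PyTorch is the backend (the node's actual implementation may differ):

```python
import gc

def purge_cache():
    """Free Python-level garbage, then release cached GPU memory if torch is present."""
    gc.collect()
    try:
        import torch
        if torch.cuda.is_available():
            # Returns cached, unused blocks to the driver so other tasks can use them.
            torch.cuda.empty_cache()
    except ImportError:
        # No torch installed: the gc pass above is all we can do.
        pass
```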
purge_models
The purge_models parameter is a boolean flag that specifies whether the node should remove any loaded models from memory before executing the task. This is beneficial for managing memory usage, especially in scenarios where multiple models are being used sequentially. The default value is False, allowing models to remain in memory unless explicitly purged.
Parallel Device List (1-4x) Output Parameters:
available_devices
The available_devices output parameter provides a list of all devices that are available for parallel processing. This includes CPUs, GPUs, and any other supported hardware, such as MPS or XPU devices. The list is dynamically generated based on the current system configuration and the availability of each device type. This output is crucial for understanding the resources at your disposal and for planning how to distribute workloads effectively.
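Device discovery of this sort usually queries the backend at runtime. As a rough sketch of how such a list could be assembled with PyTorch (an assumption; the node's own detection logic is not shown in this documentation):

```python
def list_available_devices():
    """Enumerate devices usable for parallel processing on this system."""
    devices = ["cpu"]  # the CPU is always available
    try:
        import torch
        if torch.cuda.is_available():
            devices += [f"cuda:{i}" for i in range(torch.cuda.device_count())]
        mps = getattr(torch.backends, "mps", None)
        if mps is not None and mps.is_available():
            devices.append("mps")  # Apple Silicon GPU
    except ImportError:
        pass
    return devices
```

On a two-GPU Linux workstation this would return something like ["cpu", "cuda:0", "cuda:1"]; cross-checking your device_chain against this list helps avoid the "Invalid device" error described below.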
Parallel Device List (1-4x) Usage Tips:
- Ensure that your device_chain is accurately configured to reflect the capabilities and availability of your hardware setup. This will help in achieving optimal performance and resource utilization.
- Consider enabling auto_vram_balance if you are working with devices that have varying memory capacities. This will allow the node to automatically adjust the workload distribution based on available VRAM, leading to more efficient processing.
- Regularly use the purge_cache option to clear any residual data from previous operations, especially when switching between different tasks or models. This can help prevent memory-related issues and ensure smooth execution.
Parallel Device List (1-4x) Common Errors and Solutions:
Invalid device: <device_name>
- Explanation: This error occurs when a specified device in the device_chain is not recognized or is unavailable. It may be due to incorrect device naming or the device not being properly configured.
- Solution: Verify that the device names in your device_chain are correct and match the available devices listed in the available_devices output. Ensure that all necessary drivers and configurations are in place for the devices you intend to use.
Error: <error_message>
- Explanation: This generic error message indicates that an unexpected issue has occurred during the execution of the node. It could be related to device configuration, workload distribution, or memory management.
- Solution: Review the node's input parameters and ensure they are correctly configured. Check for any additional error messages or logs that may provide more context on the issue. If necessary, consult the documentation or seek support for further assistance.
