Hunyuan 3 GPU Info:
HunyuanImage3GPUInfo is a diagnostic node designed to provide detailed information about the GPU and CUDA environment in your system. This node is particularly useful for AI artists who need to understand the capabilities and current status of their GPU hardware to optimize their workflows. By offering insights into the GPU's compute capability, memory usage, and other critical specifications, this node helps you make informed decisions about resource allocation and performance tuning. It serves as a valuable tool for diagnosing potential issues related to GPU memory and processing power, ensuring that your creative projects run smoothly and efficiently.
Hunyuan 3 GPU Info Input Parameters:
model_name
This parameter allows you to select the model you wish to use from the available options provided by the HunyuanImage3FullLoader. It is crucial for determining which model's GPU requirements and compatibility will be assessed. The selection of a model can impact the node's execution by influencing the GPU resources needed.
primary_gpu
This integer parameter specifies the index of the primary GPU to be used, with a default value of 0. It allows you to choose which GPU will be the main focus for diagnostics, especially in multi-GPU setups. The minimum value is 0, and the maximum value is determined by the number of GPUs available minus one.
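The index check described above can be sketched in Python. This is an illustrative helper, not the node's actual implementation; the function name and the way the device count is supplied are assumptions for demonstration.

```python
def validate_primary_gpu(primary_gpu: int, num_gpus: int) -> int:
    """Clamp the requested GPU index into the valid range [0, num_gpus - 1]."""
    if num_gpus < 1:
        raise RuntimeError("No CUDA GPUs detected")
    return max(0, min(primary_gpu, num_gpus - 1))
```

In a real workflow, num_gpus would come from the CUDA runtime (for example torch.cuda.device_count()), so out-of-range selections in a multi-GPU setup fall back to the highest valid index rather than failing.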
reserve_memory_gb
This float parameter defines the amount of GPU memory, in gigabytes, to reserve for inference tasks. It has a default value of 12.0 GB, with a minimum of 2.0 GB and a maximum of 32.0 GB, adjustable in 0.5 GB increments. This setting helps ensure that sufficient memory is available for model inference, preventing out-of-memory errors during execution.
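The range, default, and 0.5 GB step described above imply a simple clamping rule. The sketch below is a hypothetical illustration of that rule, plus how the reservation relates to remaining memory; the function names are not part of the node.

```python
def clamp_reserve_memory_gb(value: float) -> float:
    """Snap the reservation to the nearest 0.5 GB step within [2.0, 32.0]."""
    snapped = round(value * 2) / 2  # enforce 0.5 GB increments
    return max(2.0, min(snapped, 32.0))

def usable_memory_gb(total_gb: float, reserve_gb: float) -> float:
    """Memory left for other work once the inference reservation is held."""
    return max(0.0, total_gb - clamp_reserve_memory_gb(reserve_gb))
```

For example, on a 24 GB card with the default 12.0 GB reservation, roughly 12 GB remains for other tasks.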
exclude_gpus
This string parameter allows you to specify GPUs to exclude from diagnostics by listing their indices, separated by commas (e.g., "1, 3"). This is useful for focusing diagnostics on specific GPUs or avoiding those that are not relevant to your current task.
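Parsing a comma-separated index list like this is straightforward; a minimal sketch (the function name is an assumption, not the node's API):

```python
def parse_exclude_gpus(spec: str) -> set:
    """Turn a string like "1, 3" into a set of GPU indices; blanks are ignored."""
    return {int(token) for token in spec.split(",") if token.strip()}
```

An empty string excludes nothing, so all detected GPUs remain in the diagnostics.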
info
This string parameter provides a summary of the detected GPUs and their specifications. It is automatically populated with information about each GPU, such as its name and total memory, or indicates if no CUDA GPUs are detected. This parameter is primarily for informational purposes and does not affect the node's execution.
Hunyuan 3 GPU Info Output Parameters:
gpu_info
This output parameter returns a string containing detailed information about each detected GPU, including its name, compute capability, total memory, used memory, free memory, and multi-processor count. This information is crucial for understanding the current state of your GPU resources and making informed decisions about resource allocation and optimization.
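A report with these fields could be assembled as follows. This is a sketch under assumptions: the dictionary keys and line layout are illustrative, and in practice the per-GPU numbers would come from the CUDA runtime (for example torch.cuda.get_device_properties and torch.cuda.mem_get_info), not hard-coded data.

```python
def format_gpu_info(gpus: list) -> str:
    """Render one summary line per GPU from a list of stat dictionaries."""
    if not gpus:
        return "No CUDA GPUs detected"
    lines = []
    for i, gpu in enumerate(gpus):
        free_gb = gpu["total_gb"] - gpu["used_gb"]  # free = total - used
        lines.append(
            f"GPU {i}: {gpu['name']} | compute {gpu['capability']} | "
            f"{gpu['total_gb']:.1f} GB total, {gpu['used_gb']:.1f} GB used, "
            f"{free_gb:.1f} GB free | {gpu['sm_count']} SMs"
        )
    return "\n".join(lines)
```

Checking the used/free columns of such a report before a heavy generation run is the quickest way to anticipate out-of-memory failures.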
Hunyuan 3 GPU Info Usage Tips:
- Ensure that your CUDA environment is correctly set up and that all GPUs are properly recognized by the system before using this node.
- Use the exclude_gpus parameter to focus diagnostics on specific GPUs, especially if you have a multi-GPU setup and want to avoid unnecessary information.
- Regularly check the gpu_info output to monitor GPU memory usage and adjust your project's settings accordingly to prevent out-of-memory errors.
Hunyuan 3 GPU Info Common Errors and Solutions:
GPU Out of Memory! Try:
- Explanation: This error occurs when the GPU does not have enough memory to execute the current task, which can happen if the resolution is too high or if too many resources are being used simultaneously.
- Solution: Set offload_mode to 'always', use a smaller resolution, reduce guidance_scale or the number of steps, or clear GPU memory with the Unload node first.
