
ComfyUI Node: Hunyuan 3 GPU Info

Class Name: HunyuanImage3GPUInfo
Category: HunyuanImage3
Author: EricRollei (Account age: 1544 days)
Extension: Comfy_HunyuanImage3
Last Updated: 2026-02-21
GitHub Stars: 0.05K

How to Install Comfy_HunyuanImage3

Install this extension via the ComfyUI Manager by searching for Comfy_HunyuanImage3:
  1. Click the Manager button in the main menu.
  2. Select the Custom Nodes Manager button.
  3. Enter Comfy_HunyuanImage3 in the search bar.
After installation, click the Restart button to restart ComfyUI, then manually refresh your browser to clear the cache and load the updated list of nodes.


Hunyuan 3 GPU Info Description

Diagnostic node providing detailed GPU and CUDA info for optimizing AI art workflows.

Hunyuan 3 GPU Info:

HunyuanImage3GPUInfo is a diagnostic node that reports detailed information about the GPUs and CUDA environment on your system. It is particularly useful for AI artists who need to understand the capabilities and current status of their GPU hardware to optimize their workflows. By reporting each GPU's compute capability, memory usage, and other key specifications, it helps you make informed decisions about resource allocation and performance tuning, and makes it easier to diagnose issues related to GPU memory and processing power, so your creative projects run smoothly.
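Under the hood, a node like this presumably queries PyTorch's CUDA API. A minimal sketch (an illustration, not the node's actual implementation) that lists each detected GPU and degrades gracefully on machines without PyTorch or CUDA:

```python
# Minimal CUDA-diagnostics sketch. Falls back to an empty list
# on machines without torch or without a CUDA-capable GPU.
try:
    import torch
    CUDA_OK = torch.cuda.is_available()
except ImportError:
    CUDA_OK = False

def list_gpus():
    """Return (index, name, total_memory_gb) for each CUDA GPU, or []."""
    if not CUDA_OK:
        return []
    gpus = []
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        gpus.append((i, props.name, props.total_memory / 1024**3))
    return gpus

for idx, name, mem_gb in list_gpus():
    print(f"GPU {idx}: {name}, {mem_gb:.1f} GB")
```

On a system with no CUDA GPUs the loop simply prints nothing, which matches the node's behavior of reporting that no CUDA GPUs were detected.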

Hunyuan 3 GPU Info Input Parameters:

model_name

This parameter allows you to select the model you wish to use from the available options provided by the HunyuanImage3FullLoader. It is crucial for determining which model's GPU requirements and compatibility will be assessed. The selection of a model can impact the node's execution by influencing the GPU resources needed.

primary_gpu

This integer parameter specifies the index of the primary GPU to be used, with a default value of 0. It allows you to choose which GPU will be the main focus for diagnostics, especially in multi-GPU setups. The minimum value is 0, and the maximum value is determined by the number of GPUs available minus one.

reserve_memory_gb

This float parameter defines the amount of GPU memory, in gigabytes, to reserve for inference tasks. It has a default value of 12.0 GB, with a minimum of 2.0 GB and a maximum of 32.0 GB, adjustable in 0.5 GB increments. This setting helps ensure that sufficient memory is available for model inference, preventing out-of-memory errors during execution.
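The reservation logic amounts to simple arithmetic; a sketch under the bounds stated above (12.0 GB default, 2.0–32.0 GB range, 0.5 GB steps), with function names that are illustrative rather than the node's actual API:

```python
RESERVE_DEFAULT_GB = 12.0
RESERVE_MIN_GB = 2.0
RESERVE_MAX_GB = 32.0
RESERVE_STEP_GB = 0.5

def clamp_reserve(gb: float) -> float:
    """Clamp a requested reservation to the allowed range, in 0.5 GB steps."""
    gb = max(RESERVE_MIN_GB, min(RESERVE_MAX_GB, gb))
    return round(gb / RESERVE_STEP_GB) * RESERVE_STEP_GB

def fits_on_gpu(free_gb: float, reserve_gb: float = RESERVE_DEFAULT_GB) -> bool:
    """True if the GPU's free memory covers the requested reservation."""
    return free_gb >= clamp_reserve(reserve_gb)

print(clamp_reserve(11.7))  # -> 11.5
print(fits_on_gpu(24.0))    # -> True
print(fits_on_gpu(8.0))     # -> False
```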

exclude_gpus

This string parameter allows you to specify GPUs to exclude from diagnostics by listing their indices, separated by commas (e.g., "1, 3"). This is useful for focusing diagnostics on specific GPUs or avoiding those that are not relevant to your current task.
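Parsing such a comma-separated index list is straightforward; a hypothetical sketch (the node's real parsing may differ, e.g. in how it handles typos):

```python
def parse_excluded(spec: str) -> set[int]:
    """Parse a comma-separated GPU index list like "1, 3" into a set of ints.

    Blank entries and surrounding whitespace are ignored; a non-numeric
    entry raises ValueError so a typo is caught early.
    """
    return {int(tok) for tok in spec.split(",") if tok.strip()}

print(parse_excluded("1, 3"))  # -> {1, 3}
print(parse_excluded(""))      # -> set()
```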

info

This string parameter provides a summary of the detected GPUs and their specifications. It is automatically populated with information about each GPU, such as its name and total memory, or indicates if no CUDA GPUs are detected. This parameter is primarily for informational purposes and does not affect the node's execution.

Hunyuan 3 GPU Info Output Parameters:

gpu_info

This output parameter returns a string containing detailed information about each detected GPU, including its name, compute capability, total memory, used memory, free memory, and multi-processor count. This information is crucial for understanding the current state of your GPU resources and making informed decisions about resource allocation and optimization.
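The output string presumably aggregates those per-device fields; a sketch that formats one such record (the field names and layout here are illustrative, not the node's actual output schema):

```python
def format_gpu_info(idx, name, capability, total_gb, used_gb, sm_count):
    """Format one GPU's stats: name, compute capability, memory, SM count."""
    free_gb = total_gb - used_gb
    return (
        f"GPU {idx}: {name} (compute {capability[0]}.{capability[1]})\n"
        f"  memory: {total_gb:.1f} GB total, {used_gb:.1f} GB used, "
        f"{free_gb:.1f} GB free\n"
        f"  multiprocessors: {sm_count}"
    )

# Sample values for illustration; a real implementation would read them
# from torch.cuda.get_device_properties() and torch.cuda.mem_get_info().
print(format_gpu_info(0, "NVIDIA RTX 4090", (8, 9), 24.0, 6.5, 128))
```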

Hunyuan 3 GPU Info Usage Tips:

  • Ensure that your CUDA environment is correctly set up and that all GPUs are properly recognized by the system before using this node.
  • Use the exclude_gpus parameter to focus diagnostics on specific GPUs, especially if you have a multi-GPU setup and want to avoid unnecessary information.
  • Regularly check the gpu_info output to monitor GPU memory usage and adjust your project's settings accordingly to prevent out-of-memory errors.

Hunyuan 3 GPU Info Common Errors and Solutions:

GPU Out of Memory! Try:

  • Explanation: This error occurs when the GPU does not have enough memory to execute the current task, which can happen if the resolution is too high or if too many resources are being used simultaneously.
  • Solution: Set offload_mode to 'always', use a smaller resolution, reduce guidance_scale or steps, or clear GPU memory with the Unload node first.

Hunyuan 3 GPU Info Related Nodes

Go back to the Comfy_HunyuanImage3 extension to check out more related nodes.
Copyright 2025 RunComfy. All Rights Reserved.
