Set Reserved VRAM(GB) ⚙️:
The ReservedVRAMSetter node is designed to manage and optimize the allocation of VRAM (Video Random Access Memory) for your GPU, ensuring efficient resource usage during AI art generation tasks. This node allows you to set aside a specific amount of VRAM, either manually or automatically, to prevent over-allocation and potential crashes due to insufficient memory. By providing a mechanism to reserve VRAM, it helps maintain system stability and performance, especially when working with large models or complex computations. The node can operate in two modes: manual, where you specify the exact amount of VRAM to reserve, and auto, where it dynamically calculates the optimal VRAM reservation based on current usage and predefined limits. This flexibility makes it a valuable tool for artists looking to balance performance and resource management in their creative workflows.
Set Reserved VRAM(GB) ⚙️ Input Parameters:
reserved
This parameter specifies the amount of VRAM to reserve in gigabytes (GB). In manual mode, it directly sets the reserved VRAM, while in auto mode, it serves as a base value for calculations. The minimum value is 0, and there is no explicit maximum, but it should not exceed the total available VRAM. The default value is 0.6 GB.
mode
This parameter determines the mode of operation for VRAM reservation. It can be set to either "manual" or "auto". In manual mode, the reserved parameter is used directly, while in auto mode, the node calculates the optimal VRAM reservation based on current usage and system constraints. The default mode is "auto".
seed
This integer parameter is used to generate a random seed for operations that require randomness. It ensures reproducibility of results. The default value is 0, with a range from -1 to 1125899906842624. A seed of -1 triggers the generation of a new random seed.
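The seed behavior described above can be sketched in Python. This is a hypothetical helper, not the node's actual implementation: a value of -1 is replaced with a freshly generated random seed within the documented range.

```python
import random

MAX_SEED = 1125899906842624  # documented upper bound (2**50)

def resolve_seed(seed: int) -> int:
    """Return the seed to use: -1 requests a new random seed."""
    if seed == -1:
        return random.randint(0, MAX_SEED)
    return seed
```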
auto_max_reserved
This parameter sets the maximum limit for VRAM reservation in auto mode, expressed in gigabytes (GB). A value of 0 indicates no limit. It helps prevent excessive VRAM reservation that could impact system performance. The default value is 0.0 GB.
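One plausible way auto mode could combine the base reserved value, current usage, and the auto_max_reserved cap is sketched below. The logic is an assumption for illustration; the node's real calculation may differ.

```python
def auto_reserved_gb(base_gb: float, used_gb: float,
                     auto_max_gb: float = 0.0) -> float:
    """Hypothetical auto-mode calculation.

    Reserves the currently used VRAM plus the base value, capped by
    auto_max_gb (0 means no cap), and never below zero.
    """
    reserved = used_gb + base_gb
    if auto_max_gb > 0:
        reserved = min(reserved, auto_max_gb)
    return max(reserved, 0.0)
```

For example, with the default base of 0.6 GB and 1.0 GB already in use, roughly 1.6 GB would be reserved; setting auto_max_reserved to 2.0 GB would cap a larger estimate at 2.0 GB.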
clean_gpu_before
This boolean parameter determines whether to perform a GPU memory cleanup before setting the reserved VRAM. Enabling this option can help free up memory and improve performance. The default value is True.
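A best-effort cleanup of this kind typically combines Python garbage collection with clearing PyTorch's CUDA cache. The sketch below shows the general pattern under that assumption; it is not the node's exact code, and it degrades gracefully when no GPU (or no torch) is present.

```python
import gc

def clean_gpu_memory() -> bool:
    """Best-effort GPU memory cleanup (illustrative sketch).

    Runs Python garbage collection, then releases cached CUDA
    allocations if PyTorch and a GPU are available. Returns True
    when the CUDA cache was actually cleared.
    """
    gc.collect()
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
            return True
    except ImportError:
        pass
    return False
```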
anything
This optional parameter can accept any type of input, serving as a placeholder for additional data or configurations that might be needed for specific use cases.
Set Reserved VRAM(GB) ⚙️ Output Parameters:
output
This parameter represents the primary output of the node, which can be any type of data. It is influenced by the input parameters and the node's internal logic, providing a flexible output that can be used in various contexts.
SEED
This integer output provides the seed value used during the node's execution. It is crucial for ensuring the reproducibility of results, especially when randomness is involved in the process.
Reserved(GB)
This float output indicates the final amount of VRAM reserved in gigabytes (GB). It reflects the actual reservation made by the node, based on the input parameters and the selected mode of operation.
Set Reserved VRAM(GB) ⚙️ Usage Tips:
- Use the "auto" mode for dynamic VRAM management, especially when working with varying workloads or when unsure about the exact VRAM requirements.
- Enable clean_gpu_before to ensure maximum available VRAM before starting intensive tasks, which can help prevent memory-related issues.
Set Reserved VRAM(GB) ⚙️ Common Errors and Solutions:
[ReservedVRAM]获取GPU信息出错(NVML) (Error retrieving GPU info via NVML)
- Explanation: This error occurs when there is an issue retrieving GPU information using the NVML library, possibly due to a missing or improperly configured NVML installation.
- Solution: Ensure that the NVML library is correctly installed and configured on your system. Check for any installation errors or compatibility issues.
[ReservedVRAM]获取GPU信息出错(torch) (Error retrieving GPU info via torch)
- Explanation: This error indicates a problem accessing GPU information via the PyTorch library, which might be due to an outdated or incompatible PyTorch version.
- Solution: Verify that you have the latest version of PyTorch installed and that it is compatible with your GPU and CUDA version. Reinstall or update PyTorch if necessary.
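Both errors correspond to the two ways the node can query GPU memory. A guarded query with an NVML-first, PyTorch-fallback order might look like the sketch below; the structure and function name are illustrative assumptions, not the node's actual code.

```python
def query_vram_gb():
    """Return (total_gb, used_gb), trying NVML first, then PyTorch.

    Returns None when neither backend can report GPU memory, which is
    the situation the two errors above describe.
    """
    try:  # NVML path -- the "(NVML)" error arises when this fails
        import pynvml
        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)
        info = pynvml.nvmlDeviceGetMemoryInfo(handle)
        pynvml.nvmlShutdown()
        return info.total / 2**30, info.used / 2**30
    except Exception:
        pass
    try:  # PyTorch path -- the "(torch)" error arises when this fails
        import torch
        if torch.cuda.is_available():
            free, total = torch.cuda.mem_get_info()
            return total / 2**30, (total - free) / 2**30
    except Exception:
        pass
    return None
```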
