LayerUtility: Purge VRAM V2:
The DisTorchPurgeVRAMV2 node is designed to efficiently manage and optimize the usage of GPU memory, specifically targeting the VRAM (Video Random Access Memory) on CUDA-enabled devices. Its primary function is to clear or "purge" unused or unnecessary data from the VRAM, which can help prevent memory overflow issues and improve the performance of AI models running on the GPU. This node is particularly beneficial in scenarios where multiple models or large datasets are being processed, as it ensures that the VRAM is utilized effectively by freeing up space that is no longer needed. By doing so, it helps maintain the stability and efficiency of the system, allowing for smoother and faster execution of tasks. The node achieves this by leveraging PyTorch's capabilities to manage CUDA memory, including emptying caches and collecting inter-process communication (IPC) resources, ensuring that the GPU memory is kept in an optimal state.
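The cache-clearing step described above can be sketched as follows. This is a minimal illustration, not the node's actual source: the function name purge_vram and its flag are hypothetical, while gc.collect, torch.cuda.empty_cache, and torch.cuda.ipc_collect are real Python/PyTorch APIs.

```python
import gc

def purge_vram(purge_cache=True):
    """Hypothetical sketch of a VRAM purge step (not the node's real code)."""
    if not purge_cache:
        return False
    # Release unreachable Python objects first, so dangling tensor
    # references no longer pin GPU allocations.
    gc.collect()
    try:
        import torch
    except ImportError:
        # PyTorch not installed; nothing GPU-side to purge in this sketch.
        return False
    if torch.cuda.is_available():
        # Return cached blocks to the CUDA driver and reclaim IPC handles.
        torch.cuda.empty_cache()
        torch.cuda.ipc_collect()
        return True
    return False
```

On a machine without a CUDA device the function simply falls through and returns False, which makes the sketch safe to run anywhere.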
LayerUtility: Purge VRAM V2 Input Parameters:
anything
This parameter serves as a placeholder for any input data that might be passed to the node. It does not directly affect the purging process but is included to maintain compatibility with other nodes or systems that might require an input parameter. There are no specific constraints or default values associated with this parameter.
purge_cache
This boolean parameter determines whether the node should clear the cache memory on the GPU. When set to True, the node will invoke garbage collection and clear the CUDA cache, freeing up memory that is no longer in use. This can be particularly useful in preventing memory leaks and ensuring that the GPU has sufficient resources for new tasks. The default value is typically False, meaning the cache is not purged unless explicitly requested.
purge_models
This parameter indicates whether the node should purge loaded models from memory. When enabled, it helps free up VRAM by unloading models that are not currently needed, thus optimizing memory usage. This is especially useful in environments where multiple models are loaded and unloaded frequently. The default setting is False.
purge_seedvr2_models
This parameter specifies whether to purge SeedVR2 models from the VRAM. If set to True, any SeedVR2 models that are not actively being used will be removed from memory, helping to conserve VRAM resources. This is beneficial in scenarios where these models are loaded temporarily and can be safely discarded when not in use. The default value is False.
purge_qwen3vl_models
Similar to the purge_seedvr2_models parameter, this option allows for the purging of Qwen3-VL models from the VRAM. Enabling this parameter helps manage memory usage by removing these specific models when they are no longer needed, thus freeing up space for other processes. The default setting is False.
purge_nunchaku_models
This parameter controls whether Nunchaku models should be purged from the VRAM. By setting this to True, you can ensure that any Nunchaku models that are not currently required are removed from memory, optimizing the use of VRAM. This is particularly useful in dynamic environments where models are frequently loaded and unloaded. The default value is False.
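Taken together, the boolean parameters above amount to a set of flags that each select a purge action. One way such a dispatch could be structured is sketched below; the run_purge helper and the unloader callables are hypothetical and only illustrate the pattern, not the node's internals.

```python
def run_purge(flags, unloaders):
    """Call the unload function for every flag that is enabled.

    flags: dict mapping flag name -> bool (mirrors the node's inputs).
    unloaders: dict mapping flag name -> zero-argument callable.
    Returns the list of flag names that were acted on.
    """
    purged = []
    for name, enabled in flags.items():
        if enabled and name in unloaders:
            unloaders[name]()
            purged.append(name)
    return purged

# Usage: only the enabled model groups are purged.
log = []
flags = {"purge_models": True, "purge_seedvr2_models": False}
unloaders = {
    "purge_models": lambda: log.append("models"),
    "purge_seedvr2_models": lambda: log.append("seedvr2"),
}
run_purge(flags, unloaders)  # → ["purge_models"]
```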
LayerUtility: Purge VRAM V2 Output Parameters:
The DisTorchPurgeVRAMV2 node does not define explicit output parameters. Its effect is indirect: after the node runs, VRAM usage is reduced and GPU-based tasks have more memory available, which can be observed as improved stability and efficiency rather than as a returned value.
LayerUtility: Purge VRAM V2 Usage Tips:
- To maximize VRAM efficiency, enable purge_cache when running multiple models or large datasets to prevent memory overflow issues.
- Use purge_models, purge_seedvr2_models, purge_qwen3vl_models, and purge_nunchaku_models selectively based on the models you are working with, so that only necessary models remain in memory.
- Regularly monitor VRAM usage to determine when purging is necessary, especially in environments with limited GPU resources.
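For the monitoring tip above, PyTorch already exposes per-device counters. The helper name below is hypothetical, but torch.cuda.memory_allocated and torch.cuda.memory_reserved are real PyTorch APIs; on a machine without PyTorch or without a GPU the sketch returns None.

```python
def vram_usage_mib():
    """Return (allocated, reserved) CUDA memory in MiB, or None without a GPU."""
    try:
        import torch
    except ImportError:
        return None  # PyTorch not installed
    if not torch.cuda.is_available():
        return None  # no CUDA device to report on
    mib = 1024 ** 2
    return (torch.cuda.memory_allocated() / mib,
            torch.cuda.memory_reserved() / mib)
```

Comparing the reserved figure before and after a purge shows how much cached memory was actually returned to the driver.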
LayerUtility: Purge VRAM V2 Common Errors and Solutions:
CUDA out of memory
- Explanation: This error occurs when the GPU runs out of available memory to allocate for new tasks or models.
- Solution: Enable purge_cache and the relevant model purging parameters to free up VRAM before loading new models or datasets.
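A common way to apply this solution programmatically is a purge-and-retry wrapper: attempt the load, and if a CUDA out-of-memory error is raised, purge once and try again. The sketch below is hypothetical (load_fn and purge_fn stand in for model loading and the node's purge step); the one concrete assumption is that PyTorch surfaces CUDA OOM as a RuntimeError whose message contains "out of memory".

```python
import gc

def load_with_purge_retry(load_fn, purge_fn):
    """Try load_fn(); on a CUDA OOM error, purge VRAM once and retry."""
    try:
        return load_fn()
    except RuntimeError as e:
        if "out of memory" not in str(e).lower():
            raise  # not an OOM error; don't mask it
        purge_fn()   # e.g. trigger the node's cache/model purge
        gc.collect() # drop lingering Python references as well
        return load_fn()
```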
Device-side assert triggered
- Explanation: This error can happen if there is an inconsistency in memory management, such as accessing memory that has been freed.
- Solution: Ensure that purging operations are performed only when models are not actively in use, and verify that all necessary data is loaded correctly before purging.
RuntimeError: CUDA error: an illegal memory access was encountered
- Explanation: This error indicates that the program attempted to access memory that it should not have, possibly due to improper memory management.
- Solution: Double-check the sequence of operations to ensure that purging is done safely and that all necessary data is properly synchronized before and after purging.
