ComfyUI > Nodes > ComfyUI-DistorchMemoryManager > LayerUtility: Purge VRAM V2

ComfyUI Node: LayerUtility: Purge VRAM V2

Class Name

DisTorchPurgeVRAMV2

Category
DisTorch/Memory
Author
ussoewwin (Account age: 1026 days)
Extension
ComfyUI-DistorchMemoryManager
Last Updated
2026-03-28
Github Stars
0.03K

How to Install ComfyUI-DistorchMemoryManager

Install this extension via the ComfyUI Manager by searching for ComfyUI-DistorchMemoryManager
  1. Click the Manager button in the main menu
  2. Select the Custom Nodes Manager button
  3. Enter ComfyUI-DistorchMemoryManager in the search bar
After installation, click the Restart button to restart ComfyUI. Then, manually refresh your browser to clear the cache and access the updated list of nodes.


LayerUtility: Purge VRAM V2 Description

Efficiently manages GPU VRAM by purging unused data to optimize AI model performance.

LayerUtility: Purge VRAM V2:

The DisTorchPurgeVRAMV2 node manages GPU memory on CUDA-enabled devices by clearing, or "purging," data from VRAM (Video Random Access Memory) that is no longer needed. Purging helps prevent out-of-memory conditions and keeps AI models running smoothly, which is particularly valuable when multiple models or large datasets are processed in a single workflow: VRAM that would otherwise sit occupied by stale allocations is freed for the next task, improving the stability and throughput of the system. Under the hood, the node relies on PyTorch's CUDA memory-management facilities, triggering garbage collection, emptying the CUDA allocator cache, and collecting inter-process communication (IPC) resources so that GPU memory is returned to an optimal state.
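
The cache-purging steps described above can be sketched as follows. This is a minimal illustration, not the node's actual implementation; `purge_vram` is a hypothetical helper name, and the `torch` calls are guarded so the sketch also runs on machines without PyTorch or a GPU.

```python
import gc

def purge_vram(purge_cache: bool = True) -> dict:
    """Illustrative sketch of a VRAM purge (hypothetical helper, not the node's API)."""
    stats = {"gc_collected": 0, "cuda_cleared": False}
    if purge_cache:
        # Python-level garbage collection frees unreferenced objects,
        # including tensors whose last reference was just dropped.
        stats["gc_collected"] = gc.collect()
        try:
            import torch
            if torch.cuda.is_available():
                torch.cuda.empty_cache()  # return cached allocator blocks to the driver
                torch.cuda.ipc_collect()  # reclaim memory held by dead IPC handles
                stats["cuda_cleared"] = True
        except ImportError:
            pass  # torch not installed; nothing GPU-side to purge
    return stats
```

Note that `torch.cuda.empty_cache()` only releases memory the caching allocator is holding but no live tensor is using; tensors still referenced by the workflow stay in VRAM.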

LayerUtility: Purge VRAM V2 Input Parameters:

anything

This parameter serves as a placeholder for any input data that might be passed to the node. It does not directly affect the purging process but is included to maintain compatibility with other nodes or systems that might require an input parameter. There are no specific constraints or default values associated with this parameter.

purge_cache

This boolean parameter determines whether the node should clear the cache memory on the GPU. When set to True, the node will invoke garbage collection and clear the CUDA cache, freeing up memory that is no longer in use. This can be particularly useful in preventing memory leaks and ensuring that the GPU has sufficient resources for new tasks. The default value is typically False, meaning the cache is not purged unless explicitly requested.

purge_models

This parameter indicates whether the node should purge loaded models from memory. When enabled, it helps free up VRAM by unloading models that are not currently needed, thus optimizing memory usage. This is especially useful in environments where multiple models are loaded and unloaded frequently. The default setting is False.

purge_seedvr2_models

This parameter specifies whether to purge SeedVR2 models from the VRAM. If set to True, any SeedVR2 models that are not actively being used will be removed from memory, helping to conserve VRAM resources. This is beneficial in scenarios where these models are loaded temporarily and can be safely discarded when not in use. The default value is False.

purge_qwen3vl_models

Similar to the purge_seedvr2_models parameter, this option allows for the purging of Qwen3-VL models from the VRAM. Enabling this parameter helps manage memory usage by removing these specific models when they are no longer needed, thus freeing up space for other processes. The default setting is False.

purge_nunchaku_models

This parameter controls whether Nunchaku models should be purged from the VRAM. By setting this to True, you can ensure that any Nunchaku models that are not currently required are removed from memory, optimizing the use of VRAM. This is particularly useful in dynamic environments where models are frequently loaded and unloaded. The default value is False.
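
Taken together, the boolean inputs act as independent switches that each enable one purge action. The sketch below mirrors that flag-to-action mapping; the action names are illustrative placeholders, not the node's real internals.

```python
def build_purge_plan(purge_cache: bool = False,
                     purge_models: bool = False,
                     purge_seedvr2_models: bool = False,
                     purge_qwen3vl_models: bool = False,
                     purge_nunchaku_models: bool = False) -> list:
    """Map each enabled flag to the purge action it would trigger
    (hypothetical helper; action names are for illustration only)."""
    plan = []
    if purge_cache:
        plan.append("clear_cuda_cache")      # gc.collect + empty CUDA cache
    if purge_models:
        plan.append("unload_loaded_models")  # drop models ComfyUI has loaded
    if purge_seedvr2_models:
        plan.append("unload_seedvr2")
    if purge_qwen3vl_models:
        plan.append("unload_qwen3vl")
    if purge_nunchaku_models:
        plan.append("unload_nunchaku")
    return plan
```

With all defaults left at False the plan is empty, matching the documented behavior that nothing is purged unless explicitly requested.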

LayerUtility: Purge VRAM V2 Output Parameters:

The DisTorchPurgeVRAMV2 node does not explicitly define output parameters in the provided context. However, its primary function is to manage and optimize VRAM usage, which indirectly results in improved system performance and stability. The effects of the node's operation can be observed in the form of reduced memory usage and enhanced efficiency of GPU-based tasks.

LayerUtility: Purge VRAM V2 Usage Tips:

  • To maximize VRAM efficiency, enable purge_cache when running multiple models or large datasets to prevent memory overflow issues.
  • Use purge_models, purge_seedvr2_models, purge_qwen3vl_models, and purge_nunchaku_models selectively based on the models you are working with to ensure that only necessary models remain in memory.
  • Regularly monitor VRAM usage to determine when purging is necessary, especially in environments with limited GPU resources.
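
For the monitoring tip above, PyTorch exposes per-device counters that show how much VRAM is allocated by live tensors versus reserved by the caching allocator. A small hedged helper (guarded so it degrades gracefully without PyTorch or a GPU):

```python
def vram_usage_mib():
    """Return (allocated_MiB, reserved_MiB) for the current CUDA device,
    or None when PyTorch or a CUDA device is unavailable."""
    try:
        import torch
    except ImportError:
        return None
    if not torch.cuda.is_available():
        return None
    allocated = torch.cuda.memory_allocated() / 2**20  # bytes held by live tensors
    reserved = torch.cuda.memory_reserved() / 2**20    # bytes held by the allocator
    return allocated, reserved
```

A large gap between reserved and allocated memory is a good signal that enabling purge_cache will actually recover VRAM, since that gap is exactly what `empty_cache` releases.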

LayerUtility: Purge VRAM V2 Common Errors and Solutions:

CUDA out of memory

  • Explanation: This error occurs when the GPU runs out of available memory to allocate for new tasks or models.
  • Solution: Enable purge_cache and relevant model purging parameters to free up VRAM before loading new models or datasets.

Device-side assert triggered

  • Explanation: This error can happen if there is an inconsistency in memory management, such as accessing memory that has been freed.
  • Solution: Ensure that purging operations are performed only when models are not actively in use, and verify that all necessary data is loaded correctly before purging.

RuntimeError: CUDA error: an illegal memory access was encountered

  • Explanation: This error indicates that the program attempted to access memory that it should not have, possibly due to improper memory management.
  • Solution: Double-check the sequence of operations to ensure that purging is done safely and that all necessary data is properly synchronized before and after purging.
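
One common defensive pattern for the out-of-memory case above is to catch the error, purge, and retry the task once. The sketch below illustrates this under stated assumptions: recent PyTorch raises `torch.cuda.OutOfMemoryError`, and `MemoryError` stands in for it when PyTorch is unavailable; `run_with_purge_retry` is a hypothetical helper, not part of this extension.

```python
import gc

def _oom_types():
    """Exception types treated as out-of-memory (falls back to MemoryError
    when torch is not installed)."""
    try:
        import torch
        return (MemoryError, torch.cuda.OutOfMemoryError)
    except ImportError:
        return (MemoryError,)

def run_with_purge_retry(fn, *args, **kwargs):
    """Run fn; on an OOM error, purge caches and retry exactly once."""
    try:
        return fn(*args, **kwargs)
    except _oom_types():
        gc.collect()
        try:
            import torch
            if torch.cuda.is_available():
                torch.cuda.empty_cache()
                torch.cuda.ipc_collect()
        except ImportError:
            pass
        return fn(*args, **kwargs)  # second failure propagates to the caller
```

Retrying only once keeps the failure visible: if the purge did not free enough VRAM, the second attempt raises normally rather than looping.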

LayerUtility: Purge VRAM V2 Related Nodes

Go back to the extension to check out more related nodes.
ComfyUI-DistorchMemoryManager