
ComfyUI Node: PurgeVRAMNode

Class Name

PurgeVRAMNode

Category
tbox/other
Author
ai-shizuka (Account age: 3,606 days)
Extension
ComfyUI-tbox
Last Updated
2025-04-22
Github Stars
0.02K

How to Install ComfyUI-tbox

Install this extension via the ComfyUI Manager by searching for ComfyUI-tbox
  • 1. Click the Manager button in the main menu
  • 2. Select Custom Nodes Manager button
  • 3. Enter ComfyUI-tbox in the search bar
After installation, click the Restart button to restart ComfyUI. Then, manually refresh your browser to clear the cache and access the updated list of nodes.


PurgeVRAMNode Description

Optimizes VRAM usage by purging unnecessary data, enhancing AI project performance and stability.

PurgeVRAMNode:

The PurgeVRAMNode is designed to efficiently manage and optimize the usage of VRAM (Video Random Access Memory) in your AI art projects. Its primary purpose is to free up VRAM resources by purging unnecessary data, which can be crucial when working with large models or datasets that demand significant memory. This node is particularly beneficial in scenarios where VRAM is limited, as it helps prevent memory overflow and ensures smoother operation by clearing cached data and unloading models that are not currently in use. By doing so, it enhances the performance and stability of your AI workflows, allowing you to focus on creativity without being hindered by technical constraints.

PurgeVRAMNode Input Parameters:

anything

This parameter acts as a wildcard input, accepting any type of data. It is primarily used to trigger the node's execution without affecting its core functionality. There are no specific minimum, maximum, or default values for this parameter, as it serves as a placeholder to ensure the node can be integrated into various workflows seamlessly.
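In ComfyUI custom nodes, a wildcard input like this is commonly declared with a type label that never compares unequal to any other type, so the type check accepts any connection. The following is a minimal sketch of how PurgeVRAMNode's inputs might be declared; the `AnyType` trick is a widespread community pattern, and the actual ComfyUI-tbox source may differ in its details:

```python
class AnyType(str):
    """A type label that never compares unequal, so ComfyUI's
    connection type check lets any link into the socket."""
    def __ne__(self, other):
        return False

any_type = AnyType("*")

class PurgeVRAMNode:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                # wildcard trigger input -- accepts any upstream data
                "anything": (any_type, {}),
                "purge_cache": ("BOOLEAN", {"default": True}),
                "purge_models": ("BOOLEAN", {"default": True}),
            }
        }

    RETURN_TYPES = ()       # the node produces no outputs
    OUTPUT_NODE = True
    FUNCTION = "purge"
    CATEGORY = "tbox/other"
```

Because `AnyType` only overrides `__ne__`, expressions like `any_type != "IMAGE"` evaluate to `False`, which is exactly the comparison ComfyUI performs when validating links.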

purge_cache

This boolean parameter determines whether the node should clear the cache memory. When set to True, it instructs the node to free up VRAM by removing cached data that is no longer needed. This can help in reducing memory usage and preventing potential slowdowns. The default value is True, and it can be set to False if you wish to retain cached data for quicker access in subsequent operations.

purge_models

This boolean parameter controls whether the node should unload models from VRAM. Setting it to True will remove models that are not actively being used, freeing up memory for other tasks. This is particularly useful when working with multiple models or when VRAM resources are constrained. The default value is True, allowing for automatic model unloading unless specified otherwise.
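Conceptually, the two flags map to two separate cleanup actions: releasing the CUDA caching allocator's unused blocks, and unloading idle models. The sketch below shows how such a purge step is typically implemented in ComfyUI custom nodes; `purge_vram` is a hypothetical name, `torch.cuda.empty_cache()` and ComfyUI's `comfy.model_management.unload_all_models()` are the calls commonly used for this, and the sketch degrades gracefully when those libraries are not importable:

```python
import gc

def purge_vram(purge_cache=True, purge_models=True):
    """Free VRAM by clearing caches and unloading idle models.
    Hypothetical helper mirroring PurgeVRAMNode's two flags."""
    freed = []
    gc.collect()  # drop unreferenced Python objects first
    if purge_cache:
        try:
            import torch
            if torch.cuda.is_available():
                torch.cuda.empty_cache()  # release cached CUDA blocks
                torch.cuda.ipc_collect()  # reclaim inter-process handles
            freed.append("cache")
        except ImportError:
            freed.append("cache (gc only; torch not installed)")
    if purge_models:
        try:
            import comfy.model_management as mm
            mm.unload_all_models()        # ComfyUI's model unloader
            freed.append("models")
        except ImportError:
            freed.append("models (skipped; not running inside ComfyUI)")
    return freed
```

Note that `empty_cache()` returns memory to the driver rather than to other PyTorch allocations, so its main benefit is letting other processes (or a fresh model load) claim the freed VRAM.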

PurgeVRAMNode Output Parameters:

None

The PurgeVRAMNode does not produce any direct output parameters. Its function is to manage VRAM usage internally, and as such, it does not return any values or data. The node's impact is observed in the improved performance and reduced memory usage of your AI workflows.

PurgeVRAMNode Usage Tips:

  • Set purge_cache to True when you notice your workflow slowing down because of accumulated cached data; clearing the cache frees memory and restores performance.
  • Set purge_models to True when working with multiple models or when VRAM is constrained; unloading idle models keeps memory available for the task at hand.

PurgeVRAMNode Common Errors and Solutions:

"CUDA out of memory"

  • Explanation: This error occurs when the GPU runs out of available VRAM to allocate for new tasks or models.
  • Solution: Ensure that purge_cache and purge_models are set to True to free up as much VRAM as possible. Additionally, consider reducing the size of the models or data being used.
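When diagnosing out-of-memory errors, it helps to measure how much VRAM is actually free before and after a purge. A small helper for this is sketched below; `vram_report` is a hypothetical name, `torch.cuda.mem_get_info()` is the standard PyTorch call for querying device memory, and the function returns None when a CUDA-enabled torch is unavailable:

```python
def vram_report():
    """Return free/total VRAM in GiB, or None without CUDA."""
    try:
        import torch
        if torch.cuda.is_available():
            free, total = torch.cuda.mem_get_info()  # bytes
            return {"free_gb": free / 2**30, "total_gb": total / 2**30}
    except ImportError:
        pass
    return None
```

Calling it before and after the purge node runs makes it easy to confirm that purging actually reclaimed memory.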

"torch.cuda.is_available() returns False"

  • Explanation: This indicates that the system does not recognize a compatible CUDA-enabled GPU.
  • Solution: Verify that your system has a CUDA-compatible GPU and that the necessary drivers and CUDA toolkit are installed correctly. If using a CPU-only setup, ensure that the --cpu flag is set in your configuration.
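The checks above can be scripted into a quick diagnostic. `cuda_status` below is a hypothetical helper, not part of ComfyUI-tbox; it distinguishes a missing PyTorch install from a missing or invisible CUDA device:

```python
def cuda_status():
    """Report whether CUDA is usable and, if not, a likely reason."""
    try:
        import torch
    except ImportError:
        return "torch is not installed"
    if torch.cuda.is_available():
        return f"ok: {torch.cuda.get_device_name(0)}"
    # common causes: driver mismatch, a CPU-only torch build,
    # or ComfyUI launched with the --cpu flag
    return "no CUDA device visible"
```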

PurgeVRAMNode Related Nodes

Go back to ComfyUI-tbox to check out more related nodes.