
ComfyUI Node: PurgeVRAM

Class Name

PurgeVRAM

Category
utils
Author
T8mars (Account age: 1562 days)
Extension
comfyui-purgevram
Last Updated
2026-01-20
GitHub Stars
0.09K

How to Install comfyui-purgevram

Install this extension via the ComfyUI Manager by searching for comfyui-purgevram:
  1. Click the Manager button in the main menu
  2. Select the Custom Nodes Manager button
  3. Enter comfyui-purgevram in the search bar
After installation, click the Restart button to restart ComfyUI. Then, manually refresh your browser to clear the cache and access the updated list of nodes.


PurgeVRAM Description

PurgeVRAM optimizes VRAM usage by offloading unused data to prevent OOM errors and enhance GPU performance.

PurgeVRAM:

PurgeVRAM is a node that manages VRAM (Video Random Access Memory) in environments where GPU resources are limited or must be allocated efficiently. Its primary purpose is to free VRAM by offloading models and data that are not currently in use, preventing out-of-memory (OOM) errors and keeping GPU-intensive tasks running smoothly. It is particularly useful when multiple models or large datasets are processed at once, since it maintains stability and performance by managing memory dynamically. By deciding which data to offload based on usage patterns and memory requirements, PurgeVRAM improves the overall efficiency of GPU operations, making it a valuable tool for AI artists and developers working with complex models and workflows.
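The offloading idea described above can be sketched in plain Python. This is an illustrative simulation of least-recently-used eviction under a memory budget, not the node's actual implementation (the real node delegates to ComfyUI's internal model management):

```python
from dataclasses import dataclass


@dataclass
class LoadedModel:
    name: str
    size_mb: int
    last_used: int  # monotonically increasing tick; higher = more recent


def purge_vram(models: list[LoadedModel], budget_mb: int) -> list[str]:
    """Evict least-recently-used models until the total fits the budget.

    Returns the names of the models that would be offloaded to CPU memory.
    Hypothetical logic for illustration only.
    """
    offloaded = []
    total = sum(m.size_mb for m in models)
    # Consider the least recently used models first.
    for m in sorted(models, key=lambda m: m.last_used):
        if total <= budget_mb:
            break
        total -= m.size_mb
        offloaded.append(m.name)
    return offloaded
```

For example, with a 4.5 GB budget and 6 GB of loaded models, the two least recently used models are offloaded first, leaving the most recently used one resident.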

PurgeVRAM Input Parameters:

--highvram

This parameter, when enabled, keeps models in GPU memory after use, rather than unloading them to CPU memory. This can be beneficial for tasks requiring frequent access to the same models, as it reduces the overhead of reloading models into GPU memory. However, it may increase VRAM usage, potentially leading to OOM errors if not managed carefully. There are no specific minimum or maximum values, as it is a boolean flag.

--normalvram

This parameter forces the use of normal VRAM settings, even if low VRAM settings are automatically enabled. It is useful for maintaining a balance between performance and memory usage, ensuring that models are loaded in a way that optimizes both speed and resource allocation. Like --highvram, it is a boolean flag without specific value ranges.

--lowvram

When enabled, this parameter splits the UNet model into parts to reduce VRAM usage. This is particularly useful for systems with limited VRAM, as it allows for the processing of large models without exceeding memory limits. It is a boolean flag and does not have specific value ranges.
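A simplified picture of splitting a model into parts, so that only one part needs to occupy VRAM at a time (purely illustrative; ComfyUI's actual lowvram mode operates on PyTorch modules, not name lists):

```python
def split_layers(layers: list[str], parts: int) -> list[list[str]]:
    """Split a model's layer list into roughly equal chunks.

    Each chunk could then be moved to the GPU on demand while the rest
    stays in CPU memory. Simplified sketch, not ComfyUI's real splitter.
    """
    if parts < 1:
        raise ValueError("parts must be >= 1")
    size, rem = divmod(len(layers), parts)
    chunks, start = [], 0
    for i in range(parts):
        # Spread the remainder over the first `rem` chunks.
        end = start + size + (1 if i < rem else 0)
        chunks.append(layers[start:end])
        start = end
    return chunks
```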

--novram

This parameter is used when even low VRAM settings are insufficient. It further reduces VRAM usage by employing more aggressive memory management techniques, potentially offloading more data to CPU memory. It is a boolean flag and does not have specific value ranges.

--reserve-vram

This parameter allows you to set the amount of VRAM in GB to reserve for use by your operating system or other software. By default, a certain amount is reserved based on your OS, but this parameter provides more control over VRAM allocation. The value can be set as a float, with no specific minimum or maximum, but it should be chosen based on the available VRAM and system requirements.
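The arithmetic behind reserving VRAM is simple subtraction; a minimal sketch (the 1.0 GB default below is an assumption for illustration, not ComfyUI's documented default):

```python
def effective_vram_gb(total_gb: float, reserve_gb: float = 1.0) -> float:
    """Compute the VRAM budget left for models after reserving some
    for the OS and other applications (mirrors the intent of
    --reserve-vram; illustrative only).
    """
    if reserve_gb < 0 or reserve_gb >= total_gb:
        raise ValueError("reserve must be non-negative and below total VRAM")
    return total_gb - reserve_gb
```

On a 12 GB card with 2 GB reserved, models would have a 10 GB budget to work within.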

PurgeVRAM Output Parameters:

VRAM Usage Status

The output of the PurgeVRAM node typically includes a status report on VRAM usage, indicating how much memory has been freed and how much remains in use. This information is crucial for understanding the effectiveness of the VRAM management strategies employed and for making informed decisions about further memory optimization.
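A status report like the one described might be formatted as follows (field names and layout are illustrative; the node's real output may differ):

```python
def vram_status(freed_mb: float, used_mb: float, total_mb: float) -> str:
    """Format a human-readable VRAM usage report after a purge.

    Hypothetical format for illustration; not the node's actual output.
    """
    free_mb = total_mb - used_mb
    pct = 100.0 * used_mb / total_mb
    return (f"Freed {freed_mb:.0f} MB; {used_mb:.0f}/{total_mb:.0f} MB "
            f"in use ({pct:.1f}%), {free_mb:.0f} MB free")
```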

PurgeVRAM Usage Tips:

  • Use --highvram when you have sufficient VRAM and need to frequently access the same models, as it reduces the overhead of reloading models.
  • Enable --lowvram on systems with limited VRAM to ensure that large models can be processed without exceeding memory limits.
  • Adjust --reserve-vram based on your system's VRAM capacity and the requirements of other running applications to prevent OOM errors.
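The tips above amount to picking a flag from the VRAM you have available; a sketch with illustrative thresholds (these cutoffs are assumptions, not ComfyUI's own heuristics):

```python
def pick_vram_flag(total_gb: float) -> str:
    """Suggest a ComfyUI VRAM flag from total GPU memory.

    The GB thresholds are illustrative guesses, not values taken from
    ComfyUI; tune them to your own hardware and workloads.
    """
    if total_gb >= 16:
        return "--highvram"   # plenty of headroom: keep models resident
    if total_gb >= 8:
        return "--normalvram"  # balanced default
    if total_gb >= 4:
        return "--lowvram"     # split the UNet to fit
    return "--novram"          # aggressive offloading to CPU memory
```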

PurgeVRAM Common Errors and Solutions:

Out of Memory Error

  • Explanation: This error occurs when the VRAM is fully utilized, and there is no space left to load additional models or data.
  • Solution: Enable --lowvram or --novram to reduce VRAM usage, or increase the --reserve-vram value to ensure sufficient memory is available for critical operations.

Model Loading Failure

  • Explanation: This error can happen if the VRAM management settings are too restrictive, preventing models from being loaded into memory.
  • Solution: Adjust the VRAM settings by disabling --lowvram or --novram if possible, or increase the VRAM allocation by reducing the --reserve-vram value.
