
ComfyUI Node: Load Diffusion Model (Quantized)

Class Name

QuantizedUNETLoader

Category
loaders/quantized
Author
silveroxides (Account age: 0 days)
Extension
ComfyUI-QuantOps
Last Updated
2026-03-22
Github Stars
0.04K

How to Install ComfyUI-QuantOps

Install this extension via the ComfyUI Manager by searching for ComfyUI-QuantOps
  1. Click the Manager button in the main menu.
  2. Select the Custom Nodes Manager button.
  3. Enter ComfyUI-QuantOps in the search bar.
After installation, click the Restart button to restart ComfyUI. Then, manually refresh your browser to clear the cache and access the updated list of nodes.


Load Diffusion Model (Quantized) Description

Efficiently loads quantized UNET models in ComfyUI, optimizing performance for constrained hardware.

Load Diffusion Model (Quantized):

The QuantizedUNETLoader is a specialized node designed to load quantized UNET models efficiently within the ComfyUI framework. Its primary purpose is to facilitate the loading of diffusion models that have been quantized to reduce their size and improve performance, particularly in environments with limited computational resources. This node supports various quantization formats, including INT8 and FP8, and can automatically detect the appropriate format to use, ensuring optimal loading and execution.

By leveraging custom operations tailored to specific quantization types, the QuantizedUNETLoader enhances the model's performance while maintaining accuracy. This capability is particularly beneficial for AI artists and developers who need to run large models on constrained hardware, such as consumer GPUs with limited VRAM. The node's integration with ComfyUI allows for seamless handling of model state dictionaries and metadata, ensuring that the loaded models are ready for immediate use in creative AI applications.

Load Diffusion Model (Quantized) Input Parameters:

unet_name

The unet_name parameter specifies the name of the UNET model to be loaded. It is crucial for identifying the correct model file within the designated directory. This parameter directly impacts which model is retrieved and loaded, influencing the subsequent operations and results. There are no explicit minimum or maximum values, but it must correspond to a valid model name within the system.

quant_format

The quant_format parameter determines the quantization format to be used when loading the model. Set it to auto to let the node detect the format automatically, or choose a specific format explicitly. The available options are auto, int8_tensorwise, int8_blockwise, float8_e4m3fn_blockwise, float8_e4m3fn_rowwise, mxfp8, and nvfp4. This parameter affects the choice of custom operations and the model's performance characteristics.

kernel_backend

The kernel_backend parameter is used to set the backend for kernel operations, particularly relevant for INT8 blockwise formats. It allows you to specify the computational backend, such as triton, which can influence the efficiency and speed of model operations. This parameter is optional and primarily affects performance optimization.

disable_dynamic

The disable_dynamic parameter is a boolean flag that, when set to True, disables dynamic loading features. This can be useful for ensuring consistent model behavior and performance, especially in environments where dynamic loading might introduce variability. The default value is typically False, allowing dynamic features unless explicitly disabled.
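Taken together, the inputs above correspond to one node entry in a ComfyUI API-format workflow. The fragment below is an illustrative sketch: the input names follow the parameters documented here, while the model filename and the downstream node wiring are placeholders.

```python
import json

# Hypothetical API-format workflow fragment for the node;
# "model-int8.safetensors" is a placeholder filename.
workflow = {
    "1": {
        "class_type": "QuantizedUNETLoader",
        "inputs": {
            "unet_name": "model-int8.safetensors",
            "quant_format": "auto",      # or an explicit format, e.g. "int8_blockwise"
            "kernel_backend": "triton",  # optional; relevant for INT8 blockwise
            "disable_dynamic": False,    # default: dynamic features enabled
        },
    },
    # The MODEL output of node "1" would feed a sampler, e.g.:
    # "2": {"class_type": "KSampler", "inputs": {"model": ["1", 0], ...}}
}

print(json.dumps(workflow["1"], indent=2))
```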

Load Diffusion Model (Quantized) Output Parameters:

model

The model output parameter represents the loaded UNET model, ready for use in diffusion processes. This output is crucial as it encapsulates the entire model architecture and weights, allowing for immediate deployment in AI applications. The model's successful loading and configuration are essential for achieving the desired performance and accuracy in tasks such as image generation or enhancement.

Load Diffusion Model (Quantized) Usage Tips:

  • Set quant_format to auto to let the node automatically detect and apply the most suitable quantization format, ensuring optimal performance without manual intervention.
  • When working with INT8 models, consider specifying the kernel_backend to triton for potentially improved computational efficiency, especially on compatible hardware.

Load Diffusion Model (Quantized) Common Errors and Solutions:

Load Diffusion Model (Quantized): Format detection failed

  • Explanation: This error occurs when the node is unable to automatically detect the quantization format of the model file.
  • Solution: Ensure that the model file is correctly formatted and accessible. Verify that the necessary dependencies for format detection are installed and correctly configured.

HybridINT8Ops not available

  • Explanation: This error indicates that the required operations for INT8 quantization are not available, possibly due to missing dependencies.
  • Solution: Check that all necessary libraries and modules for INT8 operations are installed. Reinstall or update the relevant packages if needed.

Failed to configure Triton backend

  • Explanation: This error suggests an issue with setting the Triton backend for kernel operations, which may be due to compatibility or installation problems.
  • Solution: Verify that the Triton library is installed and compatible with your system. Consider using an alternative backend if Triton is not supported.
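When chasing the Triton backend error above, a quick check from the same Python environment that runs ComfyUI can confirm whether the library is importable at all. This is a generic diagnostic sketch, not part of the extension:

```python
import importlib.util

def triton_available():
    """Return True if the triton package can be located in this
    environment; a missing package is the most common cause of
    'Failed to configure Triton backend'."""
    return importlib.util.find_spec("triton") is not None

if triton_available():
    print("triton found: kernel_backend=triton should be usable")
else:
    print("triton missing: install it, or use the default backend")
```

If the package is present but the error persists, the problem is more likely a version or GPU-compatibility mismatch than a missing install.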

Load Diffusion Model (Quantized) Related Nodes

Go back to the extension to check out more related nodes.
ComfyUI-QuantOps