
ComfyUI Node: Nunchaku Text Encoder Loader

Class Name: NunchakuTextEncoderLoader
Category: Nunchaku
Author: mit-han-lab (Account age: 2545 days)
Extension: ComfyUI-nunchaku
Latest Updated: 2025-05-03
GitHub Stars: 0.94K

How to Install ComfyUI-nunchaku

Install this extension via the ComfyUI Manager by searching for ComfyUI-nunchaku
  • 1. Click the Manager button in the main menu
  • 2. Select Custom Nodes Manager button
  • 3. Enter ComfyUI-nunchaku in the search bar
  • 4. Click Install next to ComfyUI-nunchaku in the search results
After installation, click the Restart button to restart ComfyUI. Then, manually refresh your browser to clear the cache and access the updated list of nodes.

Nunchaku Text Encoder Loader Description

Loads and configures T5-based text encoders for Nunchaku AI art workflows, supporting both standard and 4-bit quantized models.

Nunchaku Text Encoder Loader:

The NunchakuTextEncoderLoader is a specialized node that loads and configures text encoders within the Nunchaku framework. Its primary purpose is to streamline the integration of text encoding models, particularly those based on the T5 architecture, into your AI art projects. The node can load both standard and 4-bit quantized T5 models, giving you flexibility in model selection and resource management. It applies the appropriate configuration to the selected encoders so they are ready for prompt encoding, letting you focus on the creative side of your workflow while it handles the mechanics of model loading.

Nunchaku Text Encoder Loader Input Parameters:

model_type

The model_type parameter specifies the type of model to be used; flux is currently the only supported option. This choice determines the underlying architecture and behavior of the text encoder and ensures compatibility with the Nunchaku framework. Selecting the correct model type is crucial for the node's successful operation.

text_encoder1

The text_encoder1 parameter allows you to specify the first text encoder model to be loaded. This parameter is essential for defining the primary model that will process text data. It accepts a list of available text encoder filenames, ensuring that you can select from pre-existing models within your environment.

text_encoder2

Similar to text_encoder1, the text_encoder2 parameter specifies an additional text encoder model. This allows for the use of multiple models in tandem, providing enhanced flexibility and potential for more complex text processing tasks. It also accepts a list of available text encoder filenames.

t5_min_length

The t5_min_length parameter sets the minimum token length for the T5 tokenizer, with a default value of 512. It can range from 256 to 1024, adjustable in steps of 128. This parameter controls the minimum number of tokens the T5 encoder receives per prompt, which affects how much detail the encoder can capture from your text.
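
To make the idea of a minimum token length concrete, here is a small illustration using the Hugging Face transformers tokenizer API. This is not the node's own code, and the checkpoint name is only an example.

    # Conceptual illustration of a minimum token length, not the node's own code.
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("google/t5-v1_1-xxl")  # example checkpoint
    tokens = tokenizer(
        "a watercolor painting of a lighthouse at dusk",
        padding="max_length",  # pad short prompts up to max_length
        max_length=512,        # analogous to t5_min_length's default
        return_tensors="pt",
    )
    print(tokens["input_ids"].shape)  # torch.Size([1, 512])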

use_4bit_t5

The use_4bit_t5 parameter determines whether a 4-bit quantized T5 model should be used. Options are disable or enable. Enabling this option can significantly reduce memory usage and computational load, making it suitable for environments with limited resources.

int4_model

The int4_model parameter specifies the name of the 4-bit T5 model to be used when use_4bit_t5 is enabled. It provides a list of available model paths, allowing you to select the appropriate quantized model for your needs. This parameter is crucial for ensuring that the correct model is loaded when using 4-bit quantization.
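
For reference, the inputs described above could be declared roughly as follows in a ComfyUI node definition. This is a hypothetical sketch, not the actual ComfyUI-nunchaku source; the folder key passed to folder_paths and the int4_model placeholder are assumptions.

    # Hypothetical sketch of the node's input declaration; the real source may differ.
    import folder_paths

    class NunchakuTextEncoderLoaderSketch:
        @classmethod
        def INPUT_TYPES(cls):
            encoders = folder_paths.get_filename_list("text_encoders")  # assumed folder key
            return {
                "required": {
                    "model_type": (["flux"],),  # flux is the only supported option
                    "text_encoder1": (encoders,),
                    "text_encoder2": (encoders,),
                    "t5_min_length": ("INT", {"default": 512, "min": 256, "max": 1024, "step": 128}),
                    "use_4bit_t5": (["disable", "enable"],),
                    "int4_model": (["none"],),  # placeholder; the real node lists 4-bit model paths
                }
            }

        RETURN_TYPES = ("CLIP",)
        FUNCTION = "load_text_encoder"
        CATEGORY = "Nunchaku"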

Nunchaku Text Encoder Loader Output Parameters:

CLIP

The CLIP output parameter represents the loaded text encoder model, configured and ready for use. This output is essential for subsequent text processing tasks, as it encapsulates the model's capabilities and settings. The CLIP model can be used for a variety of applications, including text-to-image generation and other AI-driven creative processes.
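
As a rough picture of how the CLIP output is consumed downstream, the sketch below mirrors what ComfyUI's standard CLIP Text Encode node does with such an object; it assumes clip is the value returned by this loader.

    # Illustrative use of the CLIP output; mirrors ComfyUI's CLIP Text Encode behavior.
    def encode_prompt(clip, text):
        tokens = clip.tokenize(text)
        cond, pooled = clip.encode_from_tokens(tokens, return_pooled=True)
        # The resulting conditioning is what sampler nodes expect.
        return [[cond, {"pooled_output": pooled}]]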

Nunchaku Text Encoder Loader Usage Tips:

  • Ensure that the model_type is set to flux to maintain compatibility with the Nunchaku framework.
  • When using 4-bit quantized models, verify that the int4_model parameter is correctly set to avoid loading errors.

Nunchaku Text Encoder Loader Common Errors and Solutions:

Please select a 4-bit T5 model.

  • Explanation: This error occurs when use_4bit_t5 is enabled, but no valid 4-bit T5 model is specified in the int4_model parameter.
  • Solution: Ensure that you select a valid model from the int4_model options when enabling 4-bit quantization.

Unknown type <model_type>

  • Explanation: This error is raised when an unsupported model_type is specified.
  • Solution: Set the model_type to flux, as it is the currently supported option within the Nunchaku framework.
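
Both error messages above suggest straightforward input checks. A minimal sketch of that kind of validation, assuming the parameter values described earlier, might look like this; the real node's checks may be structured differently.

    # Hypothetical validation matching the error messages above.
    def validate_inputs(model_type, use_4bit_t5, int4_model):
        if model_type != "flux":
            raise ValueError(f"Unknown type {model_type}")
        if use_4bit_t5 == "enable" and int4_model in (None, "", "none"):
            raise ValueError("Please select a 4-bit T5 model.")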

Nunchaku Text Encoder Loader Related Nodes

Go back to the extension to check out more related nodes.
ComfyUI-nunchaku