
ComfyUI Node: TextEncodeEditAdvanced

Class Name

TextEncodeEditAdvanced

Category
conditioning/qwen_image_edit
Author
BigStationW (Account age: 0 days)
Extension
ComfyUi-TextEncodeEditAdvanced
Last Updated
2026-03-16
Github Stars
0.05K

How to Install ComfyUi-TextEncodeEditAdvanced

Install this extension via the ComfyUI Manager by searching for ComfyUi-TextEncodeEditAdvanced
  1. Click the Manager button in the main menu
  2. Select the Custom Nodes Manager button
  3. Enter ComfyUi-TextEncodeEditAdvanced in the search bar
After installation, click the Restart button to restart ComfyUI. Then, manually refresh your browser to clear the cache and access the updated list of nodes.


TextEncodeEditAdvanced Description

Enhances text-to-image generation by encoding prompts into embeddings for precise control.

TextEncodeEditAdvanced:

The TextEncodeEditAdvanced node encodes text prompts into embeddings that guide image generation models such as diffusion models. It uses a CLIP text encoder to turn a textual description into conditioning that steers the visual output, and it can optionally incorporate reference images for additional context. This gives AI artists more nuanced and precise control over the result, helping ensure that the generated images closely align with the intended vision.
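For orientation, a ComfyUI custom node of this shape is usually declared as a class with an INPUT_TYPES spec and an entry-point method. The sketch below is a hypothetical reconstruction based only on the parameters documented on this page; the class body, field names, and encode stub are assumptions, not the extension's actual source:

```python
# Hypothetical sketch of a ComfyUI node declaration matching the documented
# inputs; not the extension's real implementation.
class TextEncodeEditAdvancedSketch:
    CATEGORY = "conditioning/qwen_image_edit"
    RETURN_TYPES = ("CONDITIONING",)
    FUNCTION = "encode"

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "conditioning": ("CONDITIONING",),
                # String options "0".."3", default "3", as documented below.
                "max_images_allowed": (["0", "1", "2", "3"], {"default": "3"}),
            },
            "optional": {
                "vae": ("VAE",),
                "image1": ("IMAGE",),
                "image2": ("IMAGE",),
                "image3": ("IMAGE",),
            },
        }

    def encode(self, conditioning, max_images_allowed, vae=None,
               image1=None, image2=None, image3=None):
        # The real node would encode the prompt and reference images here;
        # this stub just passes the conditioning through.
        return (conditioning,)
```

ComfyUI discovers such classes through a NODE_CLASS_MAPPINGS dict in the extension's module, which is why the class name and category shown above match the metadata at the top of this page.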

TextEncodeEditAdvanced Input Parameters:

conditioning

The conditioning parameter is a required input representing the base conditioning that the text encoding modifies. It supplies the context on top of which the encoded prompt and any reference images are applied, and therefore sets the baseline for the rest of the edit.

max_images_allowed

The max_images_allowed parameter specifies the maximum number of images that can be processed alongside the text encoding. It accepts values from "0" to "3", with a default value of "3". This parameter controls how many reference images can be incorporated into the conditioning process, allowing for additional context or inspiration to be drawn from existing visuals. By limiting the number of images, it ensures that the node operates efficiently and within the desired scope.
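Because max_images_allowed arrives as a string option rather than an integer, the node presumably parses it and truncates the supplied reference images. A minimal sketch of that logic (the helper name is an assumption for illustration):

```python
def limit_reference_images(images, max_images_allowed="3"):
    """Keep at most `max_images_allowed` reference images.

    `max_images_allowed` is a string option ("0".."3"), mirroring the node's
    dropdown; unconnected (None) image slots are skipped first.
    """
    limit = int(max_images_allowed)
    provided = [img for img in images if img is not None]
    return provided[:limit]
```

With the default of "3", all connected images pass through; setting "0" disables reference images entirely while leaving the text encoding untouched.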

vae

The vae parameter is optional and refers to the Variational Autoencoder model used to encode reference images into latent space. This parameter is essential when you want to include image references in the conditioning process, as it transforms the images into a format that can be integrated with the text embeddings. The presence of a VAE model enhances the node's ability to blend textual and visual inputs seamlessly.

image1

The image1 parameter is an optional input that allows you to provide the first reference image for conditioning. This image, if provided, will be encoded using the VAE model and incorporated into the conditioning process. It serves as a visual reference that can influence the final output, adding depth and context to the text prompt.

image2

Similar to image1, the image2 parameter is an optional input for a second reference image. It provides additional visual context and can be used to further refine the conditioning process. The inclusion of multiple images allows for a richer and more diverse set of influences on the generated output.

image3

The image3 parameter is the third optional input for a reference image. Like the previous image parameters, it offers another layer of visual context that can be encoded and integrated into the conditioning process. This flexibility in incorporating multiple images enables more complex and detailed artistic expressions.
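Putting the three optional slots together, the node likely collects whichever images are connected and runs each through the VAE to get latents. The control flow can be sketched as follows; the StubVAE stands in for a real VAE (whose encode method returns a latent tensor in ComfyUI), and the function name is hypothetical:

```python
class StubVAE:
    """Stand-in for a real VAE: 'encoding' here just tags the input.

    In ComfyUI, vae.encode(pixels) returns a latent tensor; this stub only
    illustrates the control flow, not real latent encoding.
    """
    def encode(self, pixels):
        return ("latent", pixels)


def encode_reference_images(vae, image1=None, image2=None, image3=None):
    """Encode whichever of the optional image slots were connected."""
    if vae is None:
        # Without a VAE the reference images cannot be turned into latents.
        return []
    images = [img for img in (image1, image2, image3) if img is not None]
    return [vae.encode(img) for img in images]
```

Note the early return: this is why the vae input, while marked optional, becomes effectively required as soon as any image slot is used.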

TextEncodeEditAdvanced Output Parameters:

conditioning

The output conditioning parameter represents the modified conditioning state after the text prompt and any reference images have been encoded and integrated. This output is crucial as it contains the embedded text and image references that will guide the image generation model. The conditioning output ensures that the generated images align closely with the intended artistic vision, reflecting both the textual and visual inputs provided.
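In ComfyUI, a CONDITIONING value is a list of (embedding, extras-dict) pairs, and edit-style nodes typically attach encoded reference images under a key in the extras dict. A hedged sketch of that pattern, using plain lists in place of tensors (the "reference_latents" key and helper name are assumptions based on how ComfyUI's built-in edit nodes behave, not this extension's verified source):

```python
def attach_reference_latents(conditioning, ref_latents):
    """Return new conditioning with reference latents stored in each entry's
    extras dict, leaving the input conditioning unmodified.
    """
    out = []
    for data, extras in conditioning:
        new_extras = dict(extras)  # copy so the input is not mutated
        new_extras["reference_latents"] = list(ref_latents)
        out.append([data, new_extras])
    return out
```

Downstream samplers that understand the extras key can then use the reference latents alongside the text embedding; nodes that do not simply ignore it.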

TextEncodeEditAdvanced Usage Tips:

  • To achieve the best results, ensure that the text prompt is clear and descriptive, as this will directly influence the quality and relevance of the generated images.
  • When using reference images, select visuals that closely align with the desired outcome to provide strong contextual guidance for the model.

TextEncodeEditAdvanced Common Errors and Solutions:

ERROR: clip input is invalid: None

  • Explanation: This error occurs when the CLIP model input is missing or invalid, preventing the text encoding process from proceeding.
  • Solution: Ensure that a valid CLIP model is provided as input. If the CLIP model is sourced from a checkpoint loader node, verify that the checkpoint contains a valid CLIP or text encoder model.
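A node guarding against this failure mode typically validates the CLIP input up front and raises a descriptive error instead of crashing later in encoding. A minimal sketch (the function name and exact wording of the second sentence are illustrative, not copied from the extension):

```python
def require_clip(clip):
    """Fail fast with a clear message when no CLIP model is wired in."""
    if clip is None:
        raise RuntimeError(
            "clip input is invalid: None\n"
            "If the clip comes from a checkpoint loader node, the checkpoint "
            "may not contain a valid CLIP or text encoder model."
        )
    return clip
```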

RunComfy
Copyright 2025 RunComfy. All Rights Reserved.

