ComfyUI Node: Text Prompt

Class Name: PureText
Category: AttentionDistillationWrapper
Author: zichongc (Account age: 828 days)
Extension: ComfyUI-Attention-Distillation
Last Updated: 2025-03-18
GitHub Stars: 0.11K

How to Install ComfyUI-Attention-Distillation

Install this extension via the ComfyUI Manager by searching for ComfyUI-Attention-Distillation:
  • 1. Click the Manager button in the main menu
  • 2. Select the Custom Nodes Manager button
  • 3. Enter ComfyUI-Attention-Distillation in the search bar
After installation, click the Restart button to restart ComfyUI. Then, manually refresh your browser to clear the cache and access the updated list of nodes.

Text Prompt Description

Facilitates encoding textual data for AI conditioning, integrating complex text prompts into creative workflows using models like CLIP.

Text Prompt:

PureText is a node that encodes raw text into a format suitable for advanced conditioning in AI models. It is aimed at AI artists who want to feed complex, multiline, and dynamic text prompts into their creative workflows. Using a text encoder such as CLIP, PureText tokenizes and encodes the input text so that it can be consumed by downstream, text-conditioned stages of a pipeline. In short, it converts raw prompts into the structured CONDITIONING data that AI models can understand and use effectively.
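
This page does not show the extension's source, but a minimal sketch of the usual ComfyUI text-encoding pattern looks like the following: tokenize the prompt with the supplied CLIP object, encode the tokens, and package the result as CONDITIONING. The clip.tokenize and clip.encode_from_tokens calls are ComfyUI's standard CLIP wrapper methods; everything else here (the class name, how the two prompt fields are combined) is an assumption for illustration, not the actual PureText code.

```python
# Illustrative sketch only -- not the actual PureText implementation.
# It follows the same pattern as ComfyUI's built-in text-encode nodes:
# tokenize the prompt, encode the tokens, and return CONDITIONING.
class PureTextSketch:
    RETURN_TYPES = ("CONDITIONING",)
    FUNCTION = "encode"
    CATEGORY = "AttentionDistillationWrapper"

    def encode(self, clip, bert, mt5xl):
        # Concatenating the two prompt fields is an assumption made for
        # this sketch; the real node may combine them differently.
        text = bert if not mt5xl else f"{bert}\n{mt5xl}"
        tokens = clip.tokenize(text)
        cond, pooled = clip.encode_from_tokens(tokens, return_pooled=True)
        # CONDITIONING is conventionally a list of [tensor, extras] pairs.
        return ([[cond, {"pooled_output": pooled}]],)
```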

Text Prompt Input Parameters:

clip

The clip parameter takes the CLIP model used to tokenize and encode the text inputs. It determines how the text is converted into tokens and then into embeddings, so it directly controls how the prompt is interpreted by the rest of the workflow.

bert

The bert parameter is a string input that supports multiline and dynamic prompts. It supplies the main text to be tokenized by the CLIP model and can contain complex, varied text structures to suit your creative needs.

mt5xl

Like bert, mt5xl is a string input that supports multiline and dynamic prompts. It carries additional text to be tokenized and encoded, giving you a second field for richer or more varied prompt material.
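
Taken together, the three inputs map onto a standard ComfyUI INPUT_TYPES declaration. The snippet below is a hypothetical declaration inferred from the descriptions above (multiline and dynamicPrompts are standard ComfyUI string-widget options); the extension's actual defaults may differ. It would sit on the same node class as the encode sketch shown earlier.

```python
class PureTextSketch:
    # Hypothetical declaration inferred from the parameter descriptions on
    # this page; the extension's actual options and defaults may differ.
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "clip": ("CLIP",),  # CLIP model used for tokenizing and encoding
                "bert": ("STRING", {"multiline": True, "dynamicPrompts": True}),
                "mt5xl": ("STRING", {"multiline": True, "dynamicPrompts": True}),
            }
        }
```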

Text Prompt Output Parameters:

CONDITIONING

The PureText node outputs a CONDITIONING value, which is the encoded form of the input text. This is the structured representation that downstream nodes expect, so it is what allows the prompt to steer text-conditioned generation and interaction in your workflow.
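
By convention, a ComfyUI CONDITIONING value is a list of [embedding tensor, extras dict] pairs, which is the shape downstream nodes expect on their conditioning inputs. The snippet below only illustrates that shape; the exact tensors and extras PureText attaches are an assumption.

```python
import torch

# Conventional shape of a CONDITIONING value in ComfyUI: a list of
# [embedding_tensor, extras_dict] pairs. The tensors and shapes below are
# placeholders for illustration only.
cond_tensor = torch.zeros(1, 77, 768)   # per-token text embeddings (illustrative shape)
pooled_tensor = torch.zeros(1, 768)     # pooled prompt embedding (illustrative shape)
conditioning = [[cond_tensor, {"pooled_output": pooled_tensor}]]
# A node returns it as a one-element tuple because RETURN_TYPES lists one output:
# return (conditioning,)
```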

Text Prompt Usage Tips:

  • Utilize multiline and dynamic prompts in the bert and mt5xl parameters to create more complex and varied text inputs, which can lead to richer conditioning results (see the example after this list).
  • Experiment with different text structures and content in the bert and mt5xl inputs to see how they affect the conditioning output, allowing you to fine-tune the text encoding process for your specific needs.
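
As an illustration of the first tip, ComfyUI's dynamicPrompts option lets {option-a|option-b} wildcards in a prompt resolve to a single choice each time the workflow is queued. The prompt strings below are examples only, and whether PureText's fields enable that flag is inferred from the parameter descriptions above.

```python
# Example multiline prompts for the bert / mt5xl fields (illustrative only).
# With ComfyUI's dynamicPrompts enabled, a {a|b|c} wildcard is resolved to one
# option each time the workflow is queued -- assuming PureText's string fields
# enable that flag.
bert_prompt = (
    "a portrait of a {young|old} painter,\n"
    "soft rim lighting, detailed brushwork"
)
mt5xl_prompt = "oil on canvas, {warm|cool} color palette"
print(bert_prompt)
print(mt5xl_prompt)
```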

Text Prompt Common Errors and Solutions:

Tokenization Error

  • Explanation: This error occurs when the text input cannot be properly tokenized by the CLIP model, possibly due to unsupported characters or formatting issues.
  • Solution: Ensure that the text input is free of unsupported characters and follows the expected formatting guidelines. Simplifying the text or breaking it into smaller parts may also help; a generic clean-up sketch follows below.
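
If unsupported characters are suspected, a quick pre-cleaning pass over the prompt before it reaches the node can help. The helper below is a generic, hypothetical sanitization sketch and is not part of ComfyUI-Attention-Distillation.

```python
import unicodedata

def sanitize_prompt(text: str) -> str:
    """Generic clean-up sketch: normalize Unicode and drop control characters.

    Hypothetical helper, not part of ComfyUI-Attention-Distillation.
    """
    normalized = unicodedata.normalize("NFKC", text)
    # Keep printable characters plus newlines (multiline prompts are supported).
    return "".join(ch for ch in normalized if ch.isprintable() or ch == "\n")

print(sanitize_prompt("a cozy caf\u00e9 interior\x00, warm light"))
```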

Encoding Failure

  • Explanation: An encoding failure might happen if the text input is too complex or exceeds the model's capacity to process it.
  • Solution: Try reducing the complexity of the text input or splitting it into smaller segments, and verify that the input text stays within the model's token limit; a rough length check is sketched below.
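
Standard CLIP text encoders work with a fixed 77-token context window, so a rough length pre-check can flag prompts that are likely to be truncated. The word-count heuristic below is only a crude stand-in for the real tokenizer and is not part of the extension.

```python
# Rough length pre-check before sending a long prompt to the node.
# The word count is only an approximation of real tokenizer behavior.
def split_long_prompt(text: str, max_words: int = 60) -> list[str]:
    """Hypothetical helper: split an overly long prompt into smaller chunks."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

chunks = split_long_prompt("a very long prompt " * 40)
print(len(chunks), "chunk(s)")
```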

Text Prompt Related Nodes

Go back to the ComfyUI-Attention-Distillation extension page to check out more related nodes.