ComfyUI Node: Display Text (LLMToolkit)

Class Name

Display_Text

Category
llm_toolkit
Author
comfy-deploy (Account age: 706 days)
Extension
ComfyUI LLM Toolkit
Last Updated
2025-10-01
GitHub Stars
80

How to Install ComfyUI LLM Toolkit

Install this extension via the ComfyUI Manager by searching for ComfyUI LLM Toolkit:
  1. Click the Manager button in the main menu.
  2. Select the Custom Nodes Manager button.
  3. Enter ComfyUI LLM Toolkit in the search bar.
After installation, click the Restart button to restart ComfyUI, then manually refresh your browser to clear the cache and load the updated list of nodes.

Display Text (LLMToolkit) Description

Extract and display text from various inputs, focusing on language model responses for visualization and processing.

Display Text (LLMToolkit):

The Display_Text node extracts and displays text from a variety of input types, primarily responses from language models (LLMs). It is useful for AI artists and developers who need to visualize or process LLM-generated text. You can select a specific line of the extracted text for output, making it easy to focus on one part of a response. The node handles several input formats, including strings, dictionaries, and lists, and presents the result in a readable form, giving you a structured way to manage and display LLM outputs in creative and technical projects.

Display Text (LLMToolkit) Input Parameters:

context

The context parameter accepts a wildcard input, meaning it can handle various data types such as strings, dictionaries, and lists. This flexibility allows the node to process text from different sources, including LLM responses. The node attempts to extract text from standard keys like llm_response, response, text, and content when the input is a dictionary. If the input is a list of strings, it joins them with newlines. If the input type is unexpected, it converts it to a string. This parameter is crucial as it determines the text that will be processed and displayed by the node.
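The extraction behavior described above can be sketched as follows. This is a hypothetical helper illustrating the documented key lookup and fallbacks, not the node's actual source:

```python
def extract_text(context):
    """Sketch of the documented extraction logic (hypothetical helper)."""
    if isinstance(context, str):
        return context
    if isinstance(context, dict):
        # Try the standard keys in order.
        for key in ("llm_response", "response", "text", "content"):
            if key in context:
                return str(context[key])
        return str(context)
    if isinstance(context, list):
        # A list of strings is joined with newlines.
        return "\n".join(str(item) for item in context)
    # Unexpected input types are coerced to a string.
    return str(context)
```

For example, `extract_text({"llm_response": "Hello"})` yields `"Hello"`, while `extract_text(["a", "b"])` yields the two lines joined by a newline.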

select

The select parameter is a string that specifies which line of the extracted text to output. It uses a default value of "0", which means the first line will be selected by default. The parameter supports cycling through available lines using modulo arithmetic, allowing you to easily navigate through the text. This feature is particularly useful when dealing with multi-line responses, as it enables you to focus on specific lines without manually parsing the entire text.
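The modulo-based line selection can be illustrated with a short sketch (hypothetical helper, assuming the index wraps around the number of available lines):

```python
def select_line(text, select="0"):
    """Pick one line from text, wrapping out-of-range indices with modulo."""
    lines = text.split("\n")
    count = len(lines)
    try:
        index = int(select)
    except ValueError:
        index = 0  # fall back to the first line on a non-numeric value
    # Modulo wrap: select="5" on a 3-line text resolves to line index 2.
    return lines[index % count] if count else ""
```

This wrapping means you can increment the select value freely (for example via a primitive node) and cycle through the lines without ever hitting an out-of-range error.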

Display Text (LLMToolkit) Output Parameters:

context

The context output returns the original input data, allowing you to pass through the initial data structure without modification. This is useful for maintaining the integrity of the input data for further processing or reference.

text_list

The text_list output provides a list of individual lines extracted from the input text. Each line is a separate string, making it easier to process or analyze specific parts of the text. This output is particularly beneficial when dealing with multi-line responses, as it breaks down the text into manageable pieces.

count

The count output indicates the number of lines in the text_list. This information is useful for understanding the structure of the text and for making decisions about which lines to focus on or display.

selected

The selected output returns the specific line chosen based on the select parameter. This allows you to extract and display a particular line from the text, which can be useful for highlighting important information or focusing on specific parts of a response.

text_full

The text_full output provides the complete extracted text as a single string. This output is useful for situations where you need to display or process the entire text without breaking it into individual lines.
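Putting the outputs together, a minimal end-to-end sketch might look like the following. The function name and the simplified string-only extraction are assumptions for illustration; the real node also handles dictionaries and lists as described under the context parameter:

```python
def display_text(context, select="0"):
    """Hypothetical sketch returning the five documented outputs in order."""
    # Simplified extraction: assume context is already a string here.
    text_full = context if isinstance(context, str) else str(context)
    text_list = text_full.split("\n")
    count = len(text_list)
    selected = text_list[int(select) % count] if count else ""
    # Output order matches the documentation:
    # context, text_list, count, selected, text_full
    return context, text_list, count, selected, text_full
```

For a two-line input with select="3", the modulo wrap selects the second line while context and text_full pass through unchanged.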

Display Text (LLMToolkit) Usage Tips:

  • Use the select parameter to cycle through lines of text when dealing with multi-line responses, allowing you to focus on specific parts of the output.
  • Ensure that the input context is correctly formatted, as the node attempts to extract text from specific keys in dictionaries or joins lists of strings.
  • Utilize the text_list output to analyze or process individual lines of text, which can be helpful for detailed examination of LLM responses.

Display Text (LLMToolkit) Common Errors and Solutions:

text_to_display is None unexpectedly.

  • Explanation: This error occurs when the extracted text is None, which should not happen because the node defaults to an empty string.
  • Solution: Ensure that the input context is correctly formatted and contains valid text data. Check for any issues in the data extraction process.

text_to_display is not a string after extraction

  • Explanation: This error indicates that the extracted text is not a string, which is unexpected if the extraction process works correctly.
  • Solution: Verify that the input context is of a compatible type and that the extraction logic is functioning as intended. Consider adding additional checks or fallbacks for non-string inputs.
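The fallback suggested above can be sketched as a small guard. This is a hypothetical helper, not part of the node itself:

```python
def safe_text(value):
    """Guard mirroring the suggested fallback: always return a string."""
    if value is None:
        return ""          # avoid the "None unexpectedly" case
    if not isinstance(value, str):
        return str(value)  # coerce non-string values instead of failing
    return value
```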

Display Text (LLMToolkit) Related Nodes

Go back to the extension to check out more related nodes.
ComfyUI LLM Toolkit