ComfyUI Node: ✨ Auto-LLM-Text

Class Name

Auto-LLM-Text

Category
🧩 Auto-Prompt-LLM
Author
xlinx (Account age: 4822 days)
Extension
ComfyUI-decadetw-auto-prompt-llm
Last Updated
2025-02-01
GitHub Stars
0.02K

How to Install ComfyUI-decadetw-auto-prompt-llm

Install this extension via the ComfyUI Manager by searching for ComfyUI-decadetw-auto-prompt-llm
  • 1. Click the Manager button in the main menu
  • 2. Select the Custom Nodes Manager button
  • 3. Enter ComfyUI-decadetw-auto-prompt-llm in the search bar
After installation, click the Restart button to restart ComfyUI. Then, manually refresh your browser to clear the cache and access the updated list of nodes.


✨ Auto-LLM-Text Description

Enhance creative projects with AI-generated text prompts in ComfyUI using Auto-LLM-Text node.

✨ Auto-LLM-Text:

The Auto-LLM-Text node leverages large language models to generate and refine text prompts for your creative projects. It is part of the Auto-Prompt-LLM suite, which integrates with ComfyUI to provide a framework for text generation and manipulation. Its primary goal is to help you craft detailed, contextually relevant prompts for AI-driven applications such as art generation and storytelling. By calling out to a language model, the node produces text that is coherent and aligned with your creative vision, making it a valuable tool for AI artists who want to push the boundaries of their work.

✨ Auto-LLM-Text Input Parameters:

clip

The clip parameter connects the CLIP model used to encode the final text prompts. The original and LLM-generated prompt text is passed through this model so that the node's positive and negative outputs can be consumed by downstream conditioning nodes. Supplying the same CLIP model used elsewhere in your workflow keeps the encoded outputs consistent.

text_prompt_postive

This parameter (its spelling follows the extension's own parameter name) lets you supply a positive text prompt that steers the language model toward a specific tone or direction. It emphasizes particular aspects or themes in the generated text, keeping the output aligned with your creative intent.

text_prompt_negative

The text_prompt_negative parameter is used to specify elements or themes that should be avoided in the generated text. By providing a negative prompt, you can steer the language model away from certain topics or tones, ensuring that the output aligns with your desired outcome.

llm_apiurl

This parameter specifies the API URL of the language model service you are using. It is essential for establishing a connection with the language model, allowing the node to send requests and receive generated text.

llm_apikey

The llm_apikey is a security credential required to authenticate your requests to the language model API. It ensures that only authorized users can access the language model's capabilities, protecting your data and usage.

llm_api_model_name

This parameter defines the specific model name you wish to use for text generation. Different models may have varying capabilities and characteristics, so selecting the appropriate model is crucial for achieving the desired results.

llm_text_max_token

The llm_text_max_token parameter sets the maximum number of tokens that the language model can generate in a single response. This helps control the length of the generated text, ensuring it is concise and within the desired limits.

llm_text_tempture

This parameter controls the randomness of the text generation process. A higher temperature value results in more creative and diverse outputs, while a lower value produces more deterministic and focused text. Adjusting this parameter allows you to fine-tune the balance between creativity and coherence.
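Conceptually, temperature rescales the model's token logits before sampling. The sketch below illustrates the effect; it is not code from the Auto-LLM-Text node, just a minimal demonstration of why low values are more deterministic and high values more diverse:

```python
import math
import random

def temperature_softmax(logits, temperature):
    """Scale logits by 1/temperature, then softmax into probabilities.

    Lower temperature sharpens the distribution (near-deterministic);
    higher temperature flattens it (more diverse). Illustrative only.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_with_temperature(logits, temperature, rng=random):
    """Draw one token index from the temperature-adjusted distribution."""
    probs = temperature_softmax(logits, temperature)
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i
    return len(probs) - 1
```

With logits [2.0, 1.0, 0.5], a temperature of 0.05 gives the first token well over 99.9% of the probability mass, while a temperature of 100 makes all three nearly equally likely.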

llm_text_result_append_enabled

The llm_text_result_append_enabled parameter determines whether the generated text should be appended to the existing text or replace it entirely. This option provides flexibility in how the generated text is integrated into your project.
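The append-versus-replace behavior can be pictured with a small helper (a hypothetical sketch, not the node's actual source):

```python
def merge_llm_result(original_prompt: str, llm_text: str, append_enabled: bool) -> str:
    """Return the prompt text passed downstream.

    When appending is enabled, the LLM output is joined onto the original
    prompt; otherwise the LLM output replaces it. Hypothetical helper for
    illustration only.
    """
    if append_enabled:
        # Join with a comma, as is conventional for image-generation prompts.
        return f"{original_prompt}, {llm_text}" if original_prompt else llm_text
    return llm_text
```

For example, appending "in soft watercolor" to "a cat" yields "a cat, in soft watercolor", while with appending disabled only the LLM text survives.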

llm_text_system_prompt

This parameter allows you to set a system-level prompt that guides the overall behavior and tone of the language model. It acts as a high-level directive, influencing the style and approach of the generated text.

llm_text_ur_prompt

The llm_text_ur_prompt parameter is used to provide a user-specific prompt that further refines the text generation process. It allows for personalized input, ensuring that the generated text aligns closely with your individual preferences and requirements.
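Taken together, these parameters map naturally onto an OpenAI-compatible chat-completions request. The sketch below shows how such a payload might be assembled; the endpoint path, field names, and example values are assumptions based on OpenAI-style APIs (such as LM Studio or Ollama's compatibility layer), not the extension's actual request code:

```python
def build_llm_request(llm_apiurl, llm_apikey, llm_api_model_name,
                      llm_text_max_token, llm_text_tempture,
                      llm_text_system_prompt, llm_text_ur_prompt):
    """Assemble the URL, headers, and JSON body for a chat-completions call.

    Illustrative mapping of the node's inputs onto an OpenAI-style API;
    not the extension's actual implementation.
    """
    headers = {
        "Authorization": f"Bearer {llm_apikey}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": llm_api_model_name,
        "max_tokens": llm_text_max_token,
        "temperature": llm_text_tempture,
        "messages": [
            {"role": "system", "content": llm_text_system_prompt},
            {"role": "user", "content": llm_text_ur_prompt},
        ],
    }
    return llm_apiurl, headers, payload

# Example values only; your local server URL, key, and model name will differ.
url, headers, payload = build_llm_request(
    "http://localhost:1234/v1/chat/completions",
    "sk-example", "llama-3.1-8b-instruct", 256, 0.7,
    "You are a prompt engineer for image generation.",
    "Expand: a lighthouse at dusk",
)
```

The system prompt sets the model's overall role, while the user prompt carries the per-request instruction, mirroring how llm_text_system_prompt and llm_text_ur_prompt divide responsibilities above.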

✨ Auto-LLM-Text Output Parameters:

postive

The postive output parameter provides the generated text that aligns with the positive prompt input. It reflects the themes and tones emphasized in the positive prompt, offering a coherent and contextually relevant output.

negative

This output parameter contains the generated text that considers the negative prompt input, ensuring that undesired themes or tones are avoided. It helps maintain the integrity of the creative vision by steering clear of specified elements.

orignal-postive

The orignal-postive output provides the initial positive prompt text, allowing you to compare it with the generated output and assess the effectiveness of the language model in capturing the intended themes.

orignal-negative

This output parameter contains the initial negative prompt text, serving as a reference point for evaluating how well the language model avoided the specified themes or tones.

🌀LLM-Text

The 🌀LLM-Text output delivers the final generated text after processing both positive and negative prompts. It represents the culmination of the text generation process, offering a refined and contextually appropriate output.

✨ Auto-LLM-Text Usage Tips:

  • Experiment with different llm_text_tempture values to find the right balance between creativity and coherence for your specific project needs.
  • Use the llm_text_max_token parameter to control the length of the generated text, ensuring it fits within your project's requirements.
  • Leverage the llm_text_system_prompt to set a consistent tone and style across multiple text generation tasks, maintaining a unified creative direction.

✨ Auto-LLM-Text Common Errors and Solutions:

Invalid API Key

  • Explanation: The API key provided is incorrect or expired, preventing access to the language model service.
  • Solution: Verify that the API key is correct and active. If necessary, obtain a new key from the service provider.

Connection Timeout

  • Explanation: The connection to the language model API timed out, possibly due to network issues or server overload.
  • Solution: Check your internet connection and try again. If the issue persists, contact the service provider for support.

Model Not Found

  • Explanation: The specified model name does not exist or is unavailable, leading to a failure in the text generation process.
  • Solution: Double-check the model name for typos and ensure it is available in the service you are using.
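When debugging these failures, the same conditions typically surface as HTTP status codes or timeouts. The helper below is a hypothetical mapping that assumes the status-code conventions of typical OpenAI-compatible servers; your backend may report errors differently:

```python
def classify_llm_error(status_code=None, timed_out=False):
    """Map common failure signals to the errors described above.

    Assumes OpenAI-style status codes (401 for auth, 404 for unknown
    model); purely illustrative, not part of the extension.
    """
    if timed_out:
        return "Connection Timeout: check your network or the server load."
    if status_code == 401:
        return "Invalid API Key: verify the key is correct and active."
    if status_code == 404:
        return "Model Not Found: check llm_api_model_name for typos."
    if status_code is not None and status_code >= 400:
        return f"API error (HTTP {status_code}): consult the server logs."
    return "OK"
```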

✨ Auto-LLM-Text Related Nodes

Go back to the extension to check out more related nodes.
ComfyUI-decadetw-auto-prompt-llm
Copyright 2025 RunComfy. All Rights Reserved.