
ComfyUI Node: Prompt Generator

Class Name

PromptGenerator

Category
Prompt Manager
Author
FranckyB (Account age: 4034 days)
Extension
ComfyUI-Prompt-Manager
Last Updated
2025-12-19
GitHub Stars
0.03K

How to Install ComfyUI-Prompt-Manager

Install this extension via the ComfyUI Manager by searching for ComfyUI-Prompt-Manager:
  1. Click the Manager button in the main menu
  2. Select the Custom Nodes Manager button
  3. Enter ComfyUI-Prompt-Manager in the search bar
After installation, click the Restart button to restart ComfyUI. Then, manually refresh your browser to clear the cache and access the updated list of nodes.


Prompt Generator Description

Enhances AI prompts by refining text or analyzing images, optimizing creative interactions.

Prompt Generator:

The PromptGenerator node enhances and analyzes prompts for AI-driven creative workflows. It sits between user input and a language model, either refining a text prompt or analyzing an image, with or without accompanying text. This is particularly useful for AI artists who want to ensure the model receives the most effective input for generating the desired output. The node offers three modes of operation, can format its output as structured JSON with a scene breakdown, and supports a reasoning ("thinking") mode on compatible models, making it adaptable to a range of creative needs.

Prompt Generator Input Parameters:

mode

This parameter selects the operational mode of the PromptGenerator. The available options are "Enhance User Prompt," "Analyze Image," and "Analyze Image with Prompt." The first mode refines a text prompt, while the other two analyze a connected image, optionally guided by text. The default is "Enhance User Prompt"; choose the mode that matches your task.
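The relationship between modes and their required inputs can be sketched as a small lookup. This is an illustrative sketch only; the names below are not the node's internal API:

```python
# Hypothetical sketch: which inputs each mode requires, based on the
# parameter descriptions in this document. Names are illustrative.
REQUIRED_INPUTS = {
    "Enhance User Prompt": {"prompt"},
    "Analyze Image": {"image"},
    "Analyze Image with Prompt": {"image"},  # prompt is optional here
}

def missing_inputs(mode, provided):
    """Return the set of required inputs not supplied for the given mode."""
    return REQUIRED_INPUTS[mode] - set(provided)

print(missing_inputs("Analyze Image", ["prompt"]))  # {'image'}
```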

prompt

The prompt parameter is a text input that serves as the basis for the node's operations. It is required for the "Enhance User Prompt" mode and optional for the "Analyze Image with Prompt" mode. This parameter allows you to input a multiline text prompt that the node will enhance or use in conjunction with image analysis. The default value is an empty string, and it is essential to provide a well-crafted prompt to achieve optimal results.

image

This parameter is used to input an image for analysis. It is required for the "Analyze Image" and "Analyze Image with Prompt" modes. By connecting an image, you enable the node to perform visual analysis, which can be combined with text prompts for more comprehensive results. The image input is crucial for tasks that involve visual content.

format_as_json

The format_as_json parameter is a boolean option that determines whether the output should be formatted as structured JSON with a scene breakdown. The default value is False. Enabling this option can be beneficial for users who require a detailed and organized output format for further processing or analysis.
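The exact JSON schema is model-dependent, but a structured scene breakdown can be consumed downstream like any JSON string. The field names below are hypothetical examples, not a guaranteed schema:

```python
import json

# Hypothetical example of a structured scene-breakdown response; the
# actual keys depend on the model and prompt template used.
full_response = """{
  "scene": "a lighthouse at dusk",
  "subjects": ["lighthouse", "rocky shore"],
  "lighting": "warm backlight, long shadows",
  "style": "oil painting"
}"""

data = json.loads(full_response)
print(data["scene"])              # a lighthouse at dusk
print(", ".join(data["subjects"]))
```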

enable_thinking

This boolean parameter enables the "thinking" or reasoning mode for compatible models, using the DeepSeek format. The default value is False. Activating this mode allows the node to perform more complex reasoning tasks, which can enhance the depth and quality of the generated output.
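DeepSeek-style reasoning models typically wrap their chain of thought in `<think>...</think>` tags, which is one way the node's two outputs could be separated. A minimal sketch, assuming that tag convention:

```python
import re

# Sketch: split a DeepSeek-format response into its reasoning block and
# the final answer. The sample text is illustrative only.
raw = ("<think>The user wants a cinematic look, so add lighting cues."
       "</think>A cinematic portrait, rim-lit, 85mm.")

match = re.search(r"<think>(.*?)</think>\s*", raw, re.DOTALL)
thinking_content = match.group(1) if match else ""
full_response = raw[match.end():] if match else raw

print(thinking_content)
print(full_response)  # A cinematic portrait, rim-lit, 85mm.
```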

stop_server_after

The stop_server_after parameter is a boolean option that, when enabled, stops the llama.cpp server after each prompt. The default value is False. This option can help save resources, but it may slow down the process as the server needs to be restarted for each new prompt. It is useful for managing system resources effectively.
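To illustrate the trade-off: restarting the llama.cpp server per prompt frees VRAM between prompts at the cost of model reload time. The sketch below only assembles a `llama-server` launch command (the flags are real llama.cpp options; the model path and port are placeholders), without executing anything:

```python
# Illustrative sketch of managing a llama.cpp server lifecycle.
# -m, --port, and -c are real llama-server flags; paths are placeholders.
def build_server_cmd(model_path, port=8080, ctx=4096):
    """Assemble a llama.cpp server launch command (not executed here)."""
    return ["llama-server", "-m", model_path, "--port", str(port), "-c", str(ctx)]

cmd = build_server_cmd("/models/example.gguf")
print(" ".join(cmd))
# With stop_server_after enabled, the node would launch this command,
# serve one prompt, then terminate the process to free resources.
```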

Prompt Generator Output Parameters:

full_response

The full_response output provides the complete response generated by the node based on the input parameters. This output is crucial for understanding the final result of the prompt enhancement or image analysis process. It reflects the node's interpretation and processing of the input data.

thinking_content

The thinking_content output contains any additional reasoning or thought processes generated by the node when the enable_thinking parameter is activated. This output is valuable for users who wish to gain insights into the node's reasoning capabilities and the underlying logic used to generate the final response.

Prompt Generator Usage Tips:

  • To optimize the node's performance, choose the appropriate mode based on your task. For text refinement, use "Enhance User Prompt," and for image-related tasks, select "Analyze Image" or "Analyze Image with Prompt."
  • When using the prompt parameter, ensure that your text is clear and concise to achieve the best enhancement results. A well-crafted prompt can significantly impact the quality of the output.
  • Consider enabling format_as_json if you need a structured output for further analysis or integration with other systems.
  • Use the enable_thinking option to explore the node's reasoning capabilities, which can provide deeper insights and more nuanced outputs.

Prompt Generator Common Errors and Solutions:

Error: Empty response — model likely ran out of context tokens

  • Explanation: This error occurs when the model exhausts its available context tokens, resulting in an empty response.
  • Solution: Consider increasing the context size or shortening the prompt to ensure the model has enough tokens to generate a complete response.
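A rough pre-check can catch this before sending the prompt. The sketch below assumes the common heuristic of roughly four characters per token for English text, which is only an approximation:

```python
# Heuristic sanity check (assumption: ~4 characters per token) that a
# prompt plus a reserved output budget fits within the context window.
def fits_context(prompt, ctx_size=4096, reserve_for_output=1024):
    est_prompt_tokens = len(prompt) // 4
    return est_prompt_tokens + reserve_for_output <= ctx_size

print(fits_context("a short prompt"))            # True
print(fits_context("x" * 20000, ctx_size=4096))  # False
```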

Warning: Empty response from server

  • Explanation: This warning indicates that the server returned an empty response, possibly due to insufficient input or server issues.
  • Solution: Verify that your input parameters are correctly set and that the server is running properly. Adjust the prompt or image input as needed to ensure a valid response.

Prompt Generator Related Nodes

Go back to the extension to check out more related nodes.
ComfyUI-Prompt-Manager
Copyright 2025 RunComfy. All Rights Reserved.
