ComfyUI-Prompt-Manager Introduction
The ComfyUI-Prompt-Manager is a versatile extension that enhances your experience with the ComfyUI platform. It serves as a comprehensive toolset for organizing, generating, and enhancing prompts, making it an invaluable resource for AI artists. Initially developed as a simple manager for saving and retrieving prompts, it has evolved into a robust solution that also generates them. The extension streamlines your creative process by providing a structured way to manage prompts and by leveraging local AI models to produce detailed, contextually rich prompts.
How ComfyUI-Prompt-Manager Works
At its core, the ComfyUI-Prompt-Manager integrates with the ComfyUI platform to manage and generate prompts. It runs models through an existing llama.cpp installation rather than loading them inside ComfyUI's own process, which prevents dependency conflicts. The extension lets you organize prompts into categories, save and load them easily, and enhance them using local large language models (LLMs). It can analyze images to generate descriptions or enrich text prompts with detailed descriptions, and it accepts up to five images at once for analysis, giving you flexibility in how you use visual inputs to generate prompts.
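As a rough illustration of that design, here is a minimal sketch of how an extension can drive llama-server as a separate child process. The -m, --port, and -c flags are standard llama.cpp options; the model path, default values, and helper function are placeholders rather than the extension's actual code.

```python
import atexit
import subprocess
from pathlib import Path

LLAMA_SERVER = "llama-server"               # assumes llama.cpp is on PATH
MODEL = Path("models/Qwen3-4B-Q8_0.gguf")   # placeholder model location

def start_server(port: int = 8080, ctx_size: int = 8192) -> subprocess.Popen:
    """Launch llama-server outside ComfyUI's process so the two
    never share Python dependencies."""
    proc = subprocess.Popen([
        LLAMA_SERVER,
        "-m", str(MODEL),      # GGUF model to serve
        "--port", str(port),   # HTTP port the node will talk to
        "-c", str(ctx_size),   # context window size
    ])
    atexit.register(proc.terminate)  # stop the server when ComfyUI exits
    return proc
```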
ComfyUI-Prompt-Manager Features
Prompt Manager
- Category Organization: Organize your prompts into multiple categories for easy management.
- Save & Load Prompts: Save your favorite prompts with custom names and load them quickly when needed.
- LLM Input Toggle: Switch between using text output from other nodes and the node's own saved prompts.
- Persistent Storage: All prompts are saved in your ComfyUI user folder, ensuring they are always accessible.
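The exact on-disk format isn't documented here, but a category-keyed JSON file under the user folder is one plausible layout. The sketch below assumes a hypothetical prompts.json path and schema; the extension's real file name and structure may differ.

```python
import json
from pathlib import Path

# Hypothetical location inside the ComfyUI user folder.
PROMPTS_FILE = Path("user/default/prompt_manager/prompts.json")

def save_prompt(category: str, name: str, text: str) -> None:
    """Add or overwrite one named prompt inside its category."""
    data = json.loads(PROMPTS_FILE.read_text()) if PROMPTS_FILE.exists() else {}
    data.setdefault(category, {})[name] = text
    PROMPTS_FILE.parent.mkdir(parents=True, exist_ok=True)
    PROMPTS_FILE.write_text(json.dumps(data, indent=2))

def load_prompt(category: str, name: str) -> str:
    """Fetch a saved prompt back by category and name."""
    return json.loads(PROMPTS_FILE.read_text())[category][name]
```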
Prompt Generator
- Three Generation Modes: Choose from enhancing text prompts, analyzing images, or analyzing images with custom instructions.
- Prompt Enhancement: Use local LLMs to transform basic prompts into detailed descriptions.
- Vision Analysis: Generate detailed image descriptions using Qwen3VL models.
- Custom Image Analysis: Provide your own instructions for image analysis.
- JSON Output: Optionally output structured JSON with a scene breakdown (an illustrative example follows this list).
- Thinking Support: Enable extended step-by-step reasoning when using Thinking model variants.
- Automatic Server Management: Automatically manage the llama.cpp server, starting and stopping it as needed.
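The extension's actual JSON schema isn't specified here; the snippet below is only an illustrative shape that a scene breakdown might take.

```json
{
  "subject": "a red fox standing on a mossy log",
  "setting": "misty autumn forest at dawn",
  "lighting": "soft golden backlight",
  "style": "photorealistic, shallow depth of field",
  "camera": "85mm lens, low angle"
}
```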
Prompt Generator Options
- Model Selection: Choose from local models or download Qwen3, Qwen3VL, and Qwen3VL Thinking models.
- Auto-Download: Automatically download models and required files for vision models.
- LLM Parameters: Fine-tune parameters like temperature, top_k, and context size (see the request sketch after this list).
- Custom Instructions: Override default system prompts for different enhancement styles.
- Extra Image Inputs: Combine up to five images to generate your prompt.
- Console Debugging: Output the entire process to the console for debugging.
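To show how such parameters reach the model, this sketch posts a request to llama-server's native /completion endpoint, which accepts temperature, top_k, and related sampling options directly in the request body. The prompt text, parameter values, and port are illustrative.

```python
import requests

# llama-server exposes a /completion endpoint; sampling parameters
# are passed in the JSON body. Values here are illustrative.
resp = requests.post(
    "http://127.0.0.1:8080/completion",
    json={
        "prompt": "Rewrite as a detailed image prompt: a cat on a roof",
        "temperature": 0.7,  # randomness of sampling
        "top_k": 40,         # sample only from the 40 likeliest tokens
        "n_predict": 256,    # cap on generated tokens
    },
    timeout=120,
)
print(resp.json()["content"])  # the generated text
```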
Preference Options
- Set preferred models for both base and VL modes.
- Define a new default model location.
- Specify a custom llama.cpp location if it is not on the system PATH.
ComfyUI-Prompt-Manager Models
The extension supports various models, including those listed below; a manual download sketch follows the list.
- Qwen3-1.7B-Q8_0.gguf: Fastest, lowest VRAM (~2GB)
- Qwen3-4B-Q8_0.gguf: Balanced performance (~4GB VRAM)
- Qwen3-8B-Q8_0.gguf: Best quality, highest VRAM (~8GB)
- Qwen3VL-4B-Instruct-Q8_0.gguf: Vision model, balanced performance (~5GB VRAM)
- Qwen3VL-8B-Instruct-Q8_0.gguf: Vision model, best quality (~9GB VRAM)
- Qwen3VL-4B-Thinking-Q8_0.gguf: Vision model, Thinking variant, balanced performance (~5GB VRAM)
- Qwen3VL-8B-Thinking-Q8_0.gguf: Vision model, Thinking variant, best quality (~9GB VRAM)
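The Auto-Download option handles fetching for you, but pulling a model manually is also straightforward. The sketch below uses the huggingface_hub library; the repository ID and destination folder are assumptions for illustration, and note that vision models additionally require their mmproj projector file.

```python
from huggingface_hub import hf_hub_download  # pip install huggingface_hub

# Repo ID and filename are illustrative assumptions; the extension's
# auto-download may pull from different repositories.
path = hf_hub_download(
    repo_id="Qwen/Qwen3-4B-GGUF",
    filename="Qwen3-4B-Q8_0.gguf",
    local_dir="models",  # place the file where the node looks for models
)
print(f"Model saved to {path}")
```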
What's New with ComfyUI-Prompt-Manager
Version 1.8.3
- Added an option to leave the llama-server running when closing ComfyUI.
Version 1.8.2
- Enhanced model management with a custom model path preference.
Version 1.8.1
- Added an option to set a custom llama.cpp path in preferences.
Version 1.8.0
- Support for Qwen3VL Thinking model variants.
- Improved model management and console output options.
Troubleshooting ComfyUI-Prompt-Manager
Problem: Prompts don't appear in the dropdown.
- Solution: Ensure the category has saved prompts. Create a new prompt if necessary.
Problem: Changes aren't saved.
- Solution: Click the "Save Prompt" button after making changes.
Problem: Can't see LLM output in the node.
- Solution: Ensure the LLM output is connected to the "llm_input" input and run the workflow.
Problem: "llama-server command not found".
- Solution: Install llama.cpp and ensure llama-server is available on the command line, or set a custom llama.cpp path in preferences.
Problem: "No models found".
- Solution: Place a .gguf file in the models/ folder, or connect the Prompt Generator Options node and select a model size to download.
Problem: Server won't start.
- Solution: Check that port 8080 is not already in use and close any existing llama-server processes (a quick check sketch follows).
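A quick way to confirm whether port 8080 is occupied is a socket probe, as in this minimal sketch:

```python
import socket

def port_in_use(port: int = 8080) -> bool:
    """Return True if something is already listening on the port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex(("127.0.0.1", port)) == 0

if port_in_use(8080):
    print("Port 8080 is busy; close the existing llama-server first.")
```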
Learn More about ComfyUI-Prompt-Manager
For further assistance and resources, you can explore the following:
- ComfyUI GitHub Repository for more information on the ComfyUI platform.
- llama.cpp GitHub Repository for details on the llama.cpp integration.
- Community forums and tutorials available on the ComfyUI website to connect with other AI artists and developers.
