
ComfyUI Node: Prompt Manager (LLMToolkit)

Class Name: PromptManager
Category: llm_toolkit
Author: comfy-deploy (Account age: 706 days)
Extension: ComfyUI LLM Toolkit
Last Updated: 2025-10-01
GitHub Stars: 0.08K

How to Install ComfyUI LLM Toolkit

Install this extension via the ComfyUI Manager by searching for ComfyUI LLM Toolkit:
  • 1. Click the Manager button in the main menu.
  • 2. Select the Custom Nodes Manager button.
  • 3. Enter ComfyUI LLM Toolkit in the search bar and install it.
After installation, click the Restart button to restart ComfyUI, then manually refresh your browser to clear the cache and load the updated list of nodes.


Prompt Manager (LLMToolkit) Description

A versatile tool for managing and structuring prompt components into a cohesive configuration dictionary for AI applications.

Prompt Manager (LLMToolkit):

The PromptManager node manages and structures prompt components such as text, images, masks, and file paths into a cohesive configuration dictionary inside a main context object. It is particularly useful for AI artists and developers working with large language models (LLMs), since it lets multiple data types be combined into a single prompt configuration. By accepting a range of optional inputs, the PromptManager dynamically updates the context with the relevant data, ensuring that every component needed for effective prompt generation is included. This makes it straightforward to build the complex, rich prompts used across a variety of AI applications.
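
As a rough mental model (an illustrative sketch, not the node's actual source), the node gathers whichever optional inputs are connected and stores them under a single prompt_config key inside the context. The prompt_config key name comes from the warnings listed further down; the individual field names below are assumptions.

    def build_prompt_config(context=None, text_prompt="", image=None, mask=None,
                            video=None, audio_path="", file_path="", url=""):
        # Start from the provided context, or a fresh dict if none was given
        context = context if isinstance(context, dict) else {}
        config = {"text_prompt": text_prompt}
        # Attach tensor inputs only when they are actually connected
        for key, value in (("image", image), ("mask", mask), ("video", video)):
            if value is not None:
                config[key] = value
        # Attach path/URL strings only when they are non-empty
        for key, value in (("audio_path", audio_path), ("file_path", file_path),
                           ("url", url)):
            if value:
                config[key] = value
        context["prompt_config"] = config
        return context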

Prompt Manager (LLMToolkit) Input Parameters:

context

The context parameter is an optional input that allows you to provide an existing context dictionary to be updated with new prompt components. If no context is provided, a new one will be initialized. This parameter is crucial for maintaining continuity in prompt configurations across different nodes and processes. There are no specific minimum or maximum values, as it is a flexible dictionary structure.
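
For example, using the sketch above, a context produced by an upstream node is passed in and comes back with the new section added (the provider_config key here is purely illustrative):

    upstream = {"provider_config": {"model": "gpt-4o-mini"}}  # e.g. from a provider node
    updated = build_prompt_config(context=upstream, text_prompt="A misty forest at dawn")
    # updated == {"provider_config": {"model": "gpt-4o-mini"},
    #             "prompt_config": {"text_prompt": "A misty forest at dawn"}}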

text_prompt

The text_prompt parameter is a string input that allows you to include a textual component in your prompt configuration. This parameter supports multiline text, making it ideal for detailed descriptions or instructions. The default value is an empty string, and it can be customized to fit the specific needs of your prompt.

image

The image parameter is an optional input that accepts image tensors, provided that the necessary libraries (such as Torch) are available. This allows you to incorporate visual elements into your prompt configuration, enhancing the richness and diversity of the generated prompts. There are no specific default values, as it depends on the availability of image data.

mask

The mask parameter is similar to the image parameter but is specifically designed for mask tensors. This can be used to define specific areas of interest or focus within an image, providing additional context for the prompt. Like the image parameter, it requires the availability of certain libraries and does not have a default value.
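
For reference, ComfyUI passes IMAGE inputs as float tensors shaped [batch, height, width, channels] with values in 0-1, and MASK inputs as [batch, height, width]. Dummy tensors of the expected shapes, assuming Torch is available:

    import torch

    image = torch.zeros((1, 512, 512, 3), dtype=torch.float32)  # IMAGE: [B, H, W, C]
    mask = torch.zeros((1, 512, 512), dtype=torch.float32)      # MASK:  [B, H, W]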

video

The video parameter allows for the inclusion of video tensors in the prompt configuration. This is particularly useful for applications that require dynamic visual content. The parameter is optional and depends on the availability of video data and supporting libraries.

audio_path

The audio_path parameter is a string input that specifies the path to an audio file to be included in the prompt configuration. This allows for the integration of auditory elements, expanding the scope of the prompt. The default value is an empty string, and it can be customized to point to the desired audio file.

file_path

The file_path parameter is a string input that accepts comma-separated paths to various files, such as PDFs or videos. This parameter is essential for including external resources in the prompt configuration. The default value is an empty string, with a placeholder suggesting the format for multiple file paths.

url

The url parameter is a string input that allows you to include comma-separated URLs in the prompt configuration. This is useful for referencing online resources or APIs. The default value is an empty string, with a placeholder indicating the expected format for multiple URLs.
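
Both file_path and url accept comma-separated values. The node's exact parsing is internal, but a plausible split helper, shown here purely for illustration, looks like this:

    def split_csv_field(value: str) -> list[str]:
        # "notes.pdf, clips/intro.mp4" -> ["notes.pdf", "clips/intro.mp4"];
        # surrounding whitespace and empty entries are dropped
        return [item.strip() for item in value.split(",") if item.strip()]

    split_csv_field("notes.pdf, clips/intro.mp4, https://example.com/doc")
    # -> ['notes.pdf', 'clips/intro.mp4', 'https://example.com/doc']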

Prompt Manager (LLMToolkit) Output Parameters:

context

The context output parameter is a dictionary that contains the updated prompt configuration. This includes all the components that were successfully integrated into the prompt, such as text, images, masks, file paths, and URLs. The context is crucial for subsequent nodes or processes that rely on the prompt configuration, ensuring that all necessary data is available for further processing or execution.
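
A downstream node or script can then read the assembled configuration back out of the context. A hypothetical consumer, assuming the prompt_config key named in the warnings below and illustrative field names:

    def consume(context: dict) -> str:
        # Pull the text and any referenced URLs out of the updated context
        config = context.get("prompt_config", {})
        text = config.get("text_prompt", "")
        urls = [u.strip() for u in config.get("url", "").split(",") if u.strip()]
        return text + ("\nSources: " + ", ".join(urls) if urls else "")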

Prompt Manager (LLMToolkit) Usage Tips:

  • Ensure that all necessary libraries, such as Torch, are installed and available to take full advantage of the image, mask, and video input capabilities of the PromptManager (a quick availability check is sketched after this list).
  • When providing file paths or URLs, use the suggested format for comma-separated values to ensure that all resources are correctly parsed and included in the prompt configuration.
  • Utilize the text_prompt parameter to provide detailed and descriptive text that complements the other components of your prompt, enhancing the overall effectiveness of the generated output.
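
The error message quoted below suggests the node guards its optional tensor handling behind an import check. A similar check you can run yourself to confirm the dependencies are available:

    try:
        import torch
        import numpy
        from PIL import Image
        TENSOR_INPUTS_AVAILABLE = True
    except ImportError as exc:
        TENSOR_INPUTS_AVAILABLE = False
        print(f"Missing dependency, IMAGE/MASK inputs would be disabled: {exc}")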

Prompt Manager (LLMToolkit) Common Errors and Solutions:

"Torch/Numpy/PIL not found. IMAGE and MASK inputs disabled."

  • Explanation: This error occurs when the necessary libraries for handling image and mask inputs are not available, preventing the node from processing these types of data.
  • Solution: Ensure that the required libraries, such as Torch, Numpy, and PIL, are installed and properly configured in your environment.

"Existing 'prompt_config' in context is not a dict. Overwriting."

  • Explanation: This warning indicates that the existing prompt_config in the provided context is not a dictionary, which is the expected format for this node.
  • Solution: Verify that the context being passed to the PromptManager is correctly formatted as a dictionary, or allow the node to overwrite it with a new dictionary structure.

"Received non-dict context input. Wrapping it."

  • Explanation: This warning occurs when the input context is not a dictionary, prompting the node to wrap it in a new dictionary structure.
  • Solution: Ensure that the context input is a dictionary to avoid unnecessary wrapping and potential data loss.
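
Taken together, the two warnings above imply the node normalizes its context input roughly as follows (an illustrative reconstruction, not the node's actual source; the passthrough_data wrapper key is an assumption):

    def normalize_context(context):
        # A non-dict context is wrapped rather than rejected
        if not isinstance(context, dict):
            print("Received non-dict context input. Wrapping it.")
            context = {"passthrough_data": context}
        # A non-dict prompt_config is replaced with a fresh dict
        if not isinstance(context.get("prompt_config", {}), dict):
            print("Existing 'prompt_config' in context is not a dict. Overwriting.")
            context["prompt_config"] = {}
        return context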

Prompt Manager (LLMToolkit) Related Nodes

Go back to the extension to check out more related nodes.
ComfyUI LLM Toolkit