
ComfyUI Node: Ollama Prompt Planner

Class Name

OllamaPromptPlanner

Category
Ollama/Planner
Author
GeekatplayStudio (Account age: 4275 days)
Extension
ComfyUI-cluster
Last Updated
2026-02-13
Github Stars
0.02K

How to Install ComfyUI-cluster

Install this extension via the ComfyUI Manager by searching for ComfyUI-cluster
  1. Click the Manager button in the main menu
  2. Select the Custom Nodes Manager button
  3. Enter ComfyUI-cluster in the search bar
After installation, click the Restart button to restart ComfyUI. Then, manually refresh your browser to clear the cache and access the updated list of nodes.

Visit ComfyUI Online for ready-to-use ComfyUI environment

  • Free trial available
  • 16GB VRAM to 80GB VRAM GPU machines
  • 400+ preloaded models/nodes
  • Freedom to upload custom models/nodes
  • 200+ ready-to-run workflows
  • 100% private workspace with up to 200GB storage
  • Dedicated Support

Run ComfyUI Online

Ollama Prompt Planner Description

A node that automatically selects model checkpoints, LoRAs, and generation parameters from a registry based on your prompt, helping AI artists streamline their workflow and improve creative output.

Ollama Prompt Planner:

The OllamaPromptPlanner is a node that selects model checkpoints and parameters based on user prompts and a predefined registry. It leverages a local Ollama model to choose the most suitable configuration, including LoRAs and other parameters, for the desired artistic output. This node is particularly useful for AI artists who want to automate model selection: by analyzing the prompt against a registry of available models, it ensures the chosen configuration aligns closely with the creative intent, letting artists focus on their artistic vision rather than the technical details of model selection.
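Under the hood, a planner like this typically sends the prompt to the local Ollama HTTP API and asks for a structured reply. The sketch below is illustrative rather than the extension's actual code: the `/api/generate` endpoint, `stream`, and `format: "json"` fields are standard Ollama API features, but the instruction wording in the payload is an assumption.

```python
import json
import urllib.request

def build_plan_request(prompt, model="qwen2.5:7b", host="localhost", port=11434):
    """Build the URL and JSON body for a non-streaming Ollama completion call."""
    url = f"http://{host}:{port}/api/generate"
    body = json.dumps({
        "model": model,
        "prompt": f"Pick the best checkpoint and LoRAs for: {prompt}",
        "stream": False,   # return one complete response instead of chunks
        "format": "json",  # ask Ollama for machine-readable output
    }).encode("utf-8")
    return url, body

def request_plan(prompt, **kwargs):
    """Send the request to a local Ollama instance and parse its reply."""
    url, body = build_plan_request(prompt, **kwargs)
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=120) as resp:  # raises URLError on failure
        return json.loads(resp.read())  # raises JSONDecodeError on malformed output
```

Note that both failure modes listed under "Common Errors" below correspond to the two calls at the end: `urlopen` raises `URLError` when the service is unreachable, and `json.loads` raises `JSONDecodeError` when the reply is malformed.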

Ollama Prompt Planner Input Parameters:

prompt

The prompt parameter is a string input that allows you to specify the creative direction or theme you wish to explore. This input is crucial as it guides the node in selecting the appropriate model and parameters that align with your artistic vision. The prompt can be multiline, providing flexibility in expressing complex ideas or themes.

ollama_model

The ollama_model parameter specifies the local Ollama model used to process the prompt. It is a string input that defaults to "qwen2.5:7b". This parameter is essential because it determines which language model interprets the prompt and produces the resulting plan.

registry_path

The registry_path parameter is a string input that points to the JSON file containing the registry of available models and their configurations. The default value is "model_registry.json". This parameter is vital for the node to access and evaluate the available models to find the best match for the given prompt.
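The registry's actual schema is defined by the extension and is not documented here. As a purely hypothetical illustration, a minimal registry file might pair each checkpoint with the tags and resource requirements the planner can match against:

```json
{
  "checkpoints": [
    {
      "name": "sd_xl_base_1.0.safetensors",
      "family": "sdxl",
      "tags": ["photoreal", "landscape"],
      "min_vram": 8,
      "loras": ["detail_tweaker"]
    },
    {
      "name": "v1-5-pruned-emaonly.safetensors",
      "family": "sd15",
      "tags": ["fast", "stylized"],
      "min_vram": 6
    }
  ]
}
```

Whatever the real schema looks like, the file must be valid JSON and reachable from the path given in registry_path, or the node cannot evaluate the available models.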

task_hint

The task_hint parameter provides guidance on the type of task you intend to perform, such as "auto", "text2img", "img2img", "inpaint", "sdxl", "sd15", or "flux". This parameter helps the node tailor its model selection process to suit the specific task, enhancing the relevance and quality of the output.

user_negative

The user_negative parameter allows you to specify any negative aspects or elements you wish to avoid in the output. It is a multiline string input with a default empty value. This parameter helps refine the output by excluding unwanted features or themes.

aspect_ratio

The aspect_ratio parameter defines the desired aspect ratio of the output image. It offers a range of options, including "1:1", "3:2", "2:3", "4:3", "3:4", "16:9", "9:16", "21:9", "9:21", "2:1", "1:2", "5:3", "3:5", "4:5", and "5:4". This parameter ensures that the output image fits the intended visual format.

base_size

The base_size parameter specifies the base size of the output image in pixels. It is an integer input with a default value of 1024, and it can range from 256 to 2048, with increments of 64. This parameter determines the resolution of the output image, affecting its detail and clarity.
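One common way a planner derives output dimensions is to keep the total pixel area near base_size², then snap each side to the 64-pixel grid most samplers expect. The helper below is a sketch of that approach, not necessarily the node's exact arithmetic:

```python
def plan_dimensions(aspect_ratio: str, base_size: int = 1024, step: int = 64):
    """Turn an aspect-ratio string like '16:9' into a (width, height) pair.

    Keeps the total pixel area close to base_size**2 and rounds each side
    to a multiple of `step`, matching the base_size increment of 64.
    """
    w_ratio, h_ratio = (int(part) for part in aspect_ratio.split(":"))
    # Scale factor so that (w_ratio * s) * (h_ratio * s) ~= base_size**2
    scale = (base_size * base_size / (w_ratio * h_ratio)) ** 0.5
    snap = lambda value: max(step, round(value / step) * step)
    return snap(w_ratio * scale), snap(h_ratio * scale)
```

For example, `plan_dimensions("16:9")` yields 1344×768, roughly the same pixel count as a 1024×1024 square.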

ollama_host

The ollama_host parameter is a string input that specifies the host address for the Ollama service, defaulting to "localhost". This parameter is necessary for establishing a connection with the Ollama service to process the prompt.

ollama_port

The ollama_port parameter defines the port number for the Ollama service connection. It is an integer input with a default value of 11434, and it can range from 1 to 65535. This parameter is crucial for ensuring proper communication with the Ollama service.

max_vram

The max_vram parameter specifies the maximum GPU VRAM, in gigabytes, available for processing, with options of 24, 16, 12, 8, and 6 and a default of 24. This parameter constrains which models the node can select, influencing both the complexity of viable configurations and the speed of processing.

Ollama Prompt Planner Output Parameters:

plan

The plan output parameter is a dictionary that contains the selected model checkpoint, LoRAs, and other parameters based on the input prompt and registry. This output is crucial as it provides a detailed configuration that aligns with the artistic intent, ensuring that the generated output meets the desired creative standards.
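The exact keys of the plan dictionary are defined by the extension and are not documented here. As a purely illustrative example, a plan for a landscape prompt might look something like:

```python
# Hypothetical plan structure -- field names are illustrative,
# not the extension's documented schema.
plan = {
    "checkpoint": "sd_xl_base_1.0.safetensors",              # selected model file
    "loras": [{"name": "detail_tweaker", "strength": 0.6}],  # optional LoRA stack
    "positive": "misty forest at dawn, volumetric light",    # refined prompt
    "negative": "blurry, low quality",                       # merged negatives
    "width": 1344,                                           # from aspect_ratio
    "height": 768,                                           # and base_size
}
```

Downstream nodes in the workflow can then read the checkpoint, LoRA, and resolution fields from this dictionary instead of having them wired in by hand.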

Ollama Prompt Planner Usage Tips:

  • Ensure that your prompt is clear and descriptive to guide the node in selecting the most appropriate model and parameters for your artistic vision.
  • Utilize the task_hint parameter to specify the type of task you are performing, as this will help the node tailor its model selection process to suit your specific needs.
  • Adjust the base_size and aspect_ratio parameters to match the intended visual format and resolution of your output image, ensuring it fits your creative requirements.

Ollama Prompt Planner Common Errors and Solutions:

URLError

  • Explanation: This error occurs when there is a problem connecting to the Ollama service, possibly due to incorrect host or port settings.
  • Solution: Verify that the ollama_host and ollama_port parameters are correctly configured and that the Ollama service is running and accessible.
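Before launching a long workflow, you can verify that the service is reachable with a quick probe; a running Ollama instance answers a plain GET on its root path. The helper below is a minimal sketch of such a check:

```python
import urllib.request
from urllib.error import URLError

def ollama_reachable(host="localhost", port=11434, timeout=3):
    """Return True if anything answers HTTP on host:port, False otherwise."""
    try:
        with urllib.request.urlopen(f"http://{host}:{port}/", timeout=timeout):
            return True
    except (URLError, OSError):  # connection refused, timeout, bad host, ...
        return False
```

If this returns False for your configured host and port, start the service with `ollama serve` (or check firewall rules) before retrying the node.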

JSONDecodeError

  • Explanation: This error indicates that the response from the Ollama service could not be parsed as valid JSON, possibly due to malformed data.
  • Solution: Check the integrity of the data being sent to and received from the Ollama service. Ensure that the service is functioning correctly and returning valid JSON responses.
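A defensive parse of the Ollama response makes this failure easier to diagnose, since it surfaces the offending text instead of a bare traceback. The wrapper below is an illustrative pattern, not the node's actual error handling:

```python
import json

def parse_plan_response(raw: str) -> dict:
    """Parse Ollama's reply, including the offending text in any error."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError as exc:
        snippet = raw[:120]  # first chunk of the bad reply, for debugging
        raise ValueError(f"Ollama returned invalid JSON near: {snippet!r}") from exc
```

Seeing the raw snippet usually reveals the cause, such as a model that wrapped its JSON in prose or markdown fences.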

Ollama Prompt Planner Related Nodes

Go back to the extension to check out more related nodes.
ComfyUI-cluster
Copyright 2025 RunComfy. All Rights Reserved.
