ComfyUI Node: Dual Provider Config

Class Name

DualProviderConfig

Category
VLM/Config
Author
fblissjr (Account age: 4014 days)
Extension
Shrug-Prompter: Unified VLM Integration for ComfyUI
Last Updated
2025-09-30
GitHub Stars
0.02K

How to Install Shrug-Prompter: Unified VLM Integration for ComfyUI

Install this extension via the ComfyUI Manager by searching for Shrug-Prompter: Unified VLM Integration for ComfyUI
  1. Click the Manager button in the main menu.
  2. Select the Custom Nodes Manager button.
  3. Enter Shrug-Prompter: Unified VLM Integration for ComfyUI in the search bar.
After installation, click the Restart button to restart ComfyUI. Then, manually refresh your browser to clear the cache and access the updated list of nodes.


Dual Provider Config Description

Configures two VLM providers for distinct tasks, optimizing AI model performance and workflow.

Dual Provider Config:

The DualProviderConfig node configures two distinct Vision-Language Model (VLM) providers for a two-round processing approach. It lets you assign a different model to each task, such as observation and rewriting, so that every stage of the workflow is handled by the model best suited to it. This flexibility is particularly useful when different aspects of a project require specialized processing capabilities, improving the overall quality and effectiveness of the output.

Dual Provider Config Input Parameters:

provider

The provider parameter allows you to select the VLM provider from a predefined list, which currently includes "openai". This selection determines the source of the AI model that will be used for processing. The default value is "openai", and this parameter is crucial for directing the node to the appropriate service for model execution.

base_url

The base_url parameter specifies the base URL of the API endpoint for the selected provider. It is a string input that defaults to "http://localhost:8080", which is suitable for local server setups. This parameter is essential for establishing a connection to the provider's API, and it must include the protocol (http:// or https://) to ensure proper communication.

api_key

The api_key parameter is used to authenticate requests to the provider's API. It is a string input that defaults to "not-required-for-local", indicating that an API key is not necessary for local server configurations. This parameter is vital for accessing the provider's services securely, especially when connecting to remote servers.

llm_model

The llm_model parameter allows you to specify the name of the language model to be used. It is a string input with a default value of "Enter model name (will auto-populate if server is reachable)". This parameter enables you to manually enter the model name or select from dynamically populated options, ensuring that the most appropriate model is used for the task at hand.
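The four parameters above map naturally onto a ComfyUI node definition. The following is a hypothetical sketch of how such a node might declare its inputs and assemble its output; the names mirror the documented parameters and defaults, but the actual Shrug-Prompter implementation may differ.

```python
# Illustrative sketch only; not the actual Shrug-Prompter source.
class DualProviderConfig:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                # A dropdown restricted to the supported providers.
                "provider": (["openai"], {"default": "openai"}),
                "base_url": ("STRING", {"default": "http://localhost:8080"}),
                "api_key": ("STRING", {"default": "not-required-for-local"}),
                "llm_model": ("STRING", {"default": "Enter model name (will auto-populate if server is reachable)"}),
            }
        }

    RETURN_TYPES = ("DICT",)
    RETURN_NAMES = ("context",)
    FUNCTION = "configure"
    CATEGORY = "VLM/Config"

    def configure(self, provider, base_url, api_key, llm_model):
        # Bundle the settings into the single context dictionary
        # that downstream nodes consume.
        context = {
            "provider": provider,
            "base_url": base_url,
            "api_key": api_key,
            "llm_model": llm_model,
        }
        return (context,)
```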

Dual Provider Config Output Parameters:

context

The context output parameter is a dictionary that encapsulates the provider configuration settings. This includes the provider name, base URL, API key, and model name. The context is crucial for downstream nodes as it provides all the necessary information to interact with the configured VLM provider, ensuring seamless integration and execution of tasks.
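As an illustration of how a downstream node might consume this context, the sketch below assembles an HTTP request from it, assuming the provider exposes an OpenAI-compatible /v1/chat/completions endpoint. The build_request helper is hypothetical and not part of the extension.

```python
import json

def build_request(context: dict, prompt: str) -> tuple[str, dict, bytes]:
    """Assemble the URL, headers, and JSON body for a chat request
    from a DualProviderConfig context dictionary (illustrative only)."""
    url = context["base_url"].rstrip("/") + "/v1/chat/completions"
    headers = {
        "Content-Type": "application/json",
        # The key is sent even for local servers, which typically ignore it.
        "Authorization": f"Bearer {context['api_key']}",
    }
    body = json.dumps({
        "model": context["llm_model"],
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return url, headers, body
```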

Dual Provider Config Usage Tips:

  • Ensure that the base_url includes the correct protocol (http:// or https://) to avoid connection issues with the provider's API.
  • When working with local servers, you can leave the api_key as its default value, but ensure it is set correctly for remote servers to avoid authentication errors.
  • Use the llm_model parameter to specify the exact model you wish to use, especially if the server supports multiple models, to ensure optimal performance for your specific task.
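The protocol check from the first tip can be automated before wiring up the node. The helper below is a sketch, not part of the extension, using only the standard library.

```python
from urllib.parse import urlparse

def validate_base_url(base_url: str) -> str:
    """Raise if the URL lacks an http:// or https:// protocol;
    otherwise return it with any trailing slash stripped."""
    parsed = urlparse(base_url)
    if parsed.scheme not in ("http", "https"):
        raise ValueError(
            f"base_url must start with http:// or https://, got: {base_url!r}"
        )
    return base_url.rstrip("/")
```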

Dual Provider Config Common Errors and Solutions:

Warning: ShrugProviderSelector - API Key for <provider> is not set.

  • Explanation: This warning indicates that the API key is missing for the specified provider, which is necessary for authentication when connecting to remote servers.
  • Solution: Ensure that the api_key parameter is correctly set with a valid API key for the provider you are using. If you are working with a local server, you can ignore this warning.

Provider config: <provider> at <base_url> using model <clean_model>

  • Explanation: This message is not an error but a confirmation that the provider configuration has been successfully set up with the specified parameters.
  • Solution: No action is needed. This message confirms that the node is correctly configured and ready for use.

Dual Provider Config Related Nodes

Go back to the extension to check out more related nodes.
Shrug-Prompter: Unified VLM Integration for ComfyUI
Copyright 2025 RunComfy. All Rights Reserved.
