ComfyUI Node: Advanced VLM Sampler

Class Name

AdvancedVLMSampler

Category
VLM/Advanced
Author
fblissjr (account age: 4014 days)
Extension
Shrug-Prompter: Unified VLM Integration for ComfyUI
Last Updated
2025-09-30
GitHub Stars
~20

How to Install Shrug-Prompter: Unified VLM Integration for ComfyUI

Install this extension via the ComfyUI Manager by searching for Shrug-Prompter: Unified VLM Integration for ComfyUI
  1. Click the Manager button in the main menu
  2. Select the Custom Nodes Manager button
  3. Enter Shrug-Prompter: Unified VLM Integration for ComfyUI in the search bar
After installation, click the Restart button to restart ComfyUI. Then, manually refresh your browser to clear the cache and access the updated list of nodes.

Advanced VLM Sampler Description

AdvancedVLMSampler offers full control over VLM sampling parameters for precise customization.

Advanced VLM Sampler:

The AdvancedVLMSampler node gives you full control over the sampling parameters used with Visual Language Models (VLMs). It exposes every parameter available in the heylookitsanllm API, which is especially useful when you need to fine-tune model behavior: adjusting these settings shapes the sampling process and can produce more accurate, contextually relevant outputs. Whether you are generating text from visual inputs or refining your model's performance, this node offers the flexibility and control needed to achieve your desired results.

Advanced VLM Sampler Input Parameters:

provider_config

The provider_config parameter is essential as it contains the configuration details for the VLM provider. This includes the provider's name, base URL, API key, and the specific language model to be used. Without this configuration, the node cannot function, as it relies on these details to connect to the appropriate VLM service. Ensure that this parameter is correctly set up by connecting a VLMProviderConfig node.
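As a rough sketch, the configuration carries fields like the following. The key names here are illustrative only; the actual schema is defined by the VLMProviderConfig node.

```python
# Hypothetical shape of the provider configuration. The real key names
# are defined by the VLMProviderConfig node and may differ.
provider_config = {
    "provider": "heylookitsanllm",        # provider name
    "base_url": "http://localhost:8080",  # base URL of the API server
    "api_key": "",                        # API key, if the server requires one
    "model": "example-vlm",               # language model to use
}
```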

system_prompt

The system_prompt parameter is used to define the initial context or instructions for the VLM. It sets the stage for the interaction and can significantly influence the model's responses. This parameter should be crafted carefully to guide the model's behavior effectively.

user_prompt

The user_prompt parameter represents the input or query from the user. It is the primary text that the VLM will respond to, and its content will directly affect the output generated by the model. Ensure that this prompt is clear and concise to obtain the best results.

images

The images parameter allows you to provide visual inputs to the VLM. These images are processed and converted into base64 format before being sent to the model. Including images can enhance the context and relevance of the model's responses, especially in tasks that require visual understanding.
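The base64 step can be sketched as follows, assuming the image has already been serialized to PNG bytes (the node's actual tensor-to-PNG conversion is omitted here, and the exact payload format depends on the backend):

```python
import base64

def image_to_base64(png_bytes: bytes) -> str:
    """Encode serialized image bytes as a base64 string suitable for
    embedding in a JSON request payload."""
    return base64.b64encode(png_bytes).decode("ascii")

def image_to_data_url(png_bytes: bytes) -> str:
    """Wrap the base64 string as a data URL, a form many VLM APIs accept."""
    return "data:image/png;base64," + image_to_base64(png_bytes)
```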

processing_mode

The processing_mode parameter determines how the VLM processes the input data. It can affect the speed and accuracy of the model's responses. Choose a mode that aligns with your specific requirements for performance and output quality.

max_tokens

The max_tokens parameter sets the maximum number of tokens that the model can generate in its response. This limits the length of the output and can help manage computational resources. Adjust this parameter based on the desired verbosity of the model's responses.

stream

The stream parameter, when enabled, allows the model to send partial responses as they are generated. This can be useful for applications that require real-time feedback or interaction. Consider enabling this option if immediate response is a priority.
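For illustration, if the server streams OpenAI-style server-sent events (an assumption; the actual wire format depends on the heylookitsanllm backend), the partial responses can be consumed like this:

```python
import json

def iter_stream_chunks(lines):
    """Yield text deltas from OpenAI-style SSE lines (hypothetical format).

    Each line looks like 'data: {...}', and 'data: [DONE]' ends the stream.
    """
    for raw in lines:
        line = raw.strip()
        if not line.startswith("data:"):
            continue  # skip keep-alives and blank lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"].get("content", "")
        if delta:
            yield delta
```

In practice the lines would come from the HTTP response body; joining the yielded deltas reconstructs the full response text as it arrives.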

include_performance

The include_performance parameter, when set to true, includes performance metrics in the output. This can provide insights into the model's efficiency and help identify areas for optimization. Use this parameter to monitor and improve the model's performance.

timeout

The timeout parameter specifies the maximum time allowed for the model to generate a response. This can prevent the system from hanging indefinitely and ensures timely outputs. Set an appropriate timeout value based on your application's requirements.

temperature

The temperature parameter controls the randomness of the model's responses. A higher temperature results in more varied outputs, while a lower temperature produces more deterministic results. Adjust this parameter to balance creativity and consistency in the model's responses.

top_p

The top_p parameter, also known as nucleus sampling, restricts sampling to the smallest set of most probable tokens whose cumulative probability reaches the top_p threshold. This can help produce more coherent and contextually relevant responses. Set this parameter to fine-tune the diversity of the model's outputs.

top_k

The top_k parameter restricts the model's output to the top-k most likely tokens. This can help focus the model's responses and reduce noise. Use this parameter to control the precision of the model's outputs.

min_p

The min_p parameter sets a minimum probability threshold for tokens to be considered in the model's output. This can help filter out unlikely or irrelevant responses. Adjust this parameter to enhance the quality of the model's outputs.
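A minimal sketch of how temperature, top_k, top_p, and min_p typically interact during sampling. This illustrates the common definitions of these filters, not necessarily the exact implementation in the backend:

```python
import math

def filter_logits(logits, temperature=1.0, top_k=0, top_p=1.0, min_p=0.0):
    """Scale logits by temperature, then keep only the tokens that survive
    the top_k, top_p, and min_p filters. Returns surviving
    (token_index, probability) pairs, renormalized to sum to 1."""
    # Temperature scaling followed by a numerically stable softmax.
    scaled = [l / max(temperature, 1e-8) for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = sorted(((i, e / total) for i, e in enumerate(exps)),
                   key=lambda x: x[1], reverse=True)
    # top_k: keep at most the k most likely tokens (0 disables the filter).
    if top_k > 0:
        probs = probs[:top_k]
    # top_p: keep the smallest prefix whose cumulative probability >= top_p.
    if top_p < 1.0:
        kept, cum = [], 0.0
        for i, p in probs:
            kept.append((i, p))
            cum += p
            if cum >= top_p:
                break
        probs = kept
    # min_p: drop tokens whose probability falls below min_p times the
    # top token's probability (the common "min-p" definition).
    if min_p > 0.0:
        cutoff = min_p * probs[0][1]
        probs = [(i, p) for i, p in probs if p >= cutoff]
    # Renormalize the surviving probabilities.
    z = sum(p for _, p in probs)
    return [(i, p / z) for i, p in probs]
```

For example, a low temperature concentrates probability on the most likely token, while top_k=2 would restrict sampling to the two best candidates regardless of temperature.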

repetition_penalty

The repetition_penalty parameter discourages the model from repeating the same tokens or phrases. This can improve the diversity and originality of the model's responses. Use this parameter to avoid redundancy in the model's outputs.

repetition_context_size

The repetition_context_size parameter defines the size of the context window used to apply the repetition penalty. This can affect how the model evaluates repetition in its responses. Set this parameter to control the scope of repetition detection.
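One common way these two parameters combine is the CTRL-style penalty sketched below; the backend's exact formula may differ:

```python
def apply_repetition_penalty(logits, recent_tokens, penalty=1.1,
                             context_size=20):
    """Scale down the logits of tokens seen in the last `context_size`
    tokens, making repeats less likely (CTRL-style penalty sketch)."""
    out = list(logits)
    for tok in set(recent_tokens[-context_size:]):
        if out[tok] > 0:
            out[tok] /= penalty   # shrink positive logits
        else:
            out[tok] *= penalty   # push negative logits further down
    return out
```

A penalty of 1.0 is a no-op; values above 1.0 discourage repetition, and a larger context_size widens the window of tokens the penalty considers.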

seed

The seed parameter allows you to set a specific seed for the random number generator used in the sampling process. This can ensure reproducibility of the model's outputs. Use this parameter to obtain consistent results across different runs.

Advanced VLM Sampler Output Parameters:

context

The context output provides the updated context after the sampling process. It includes the responses generated by the model and any additional information relevant to the interaction. This output is useful for tracking the state of the conversation or task.

responses

The responses output contains the list of all responses generated by the model. This includes both the primary response and any additional outputs produced during the sampling process. Use this output to access the full range of the model's responses.

first_response

The first_response output provides the initial response generated by the model. This is often the most relevant or important output, especially in tasks that require a single answer. Use this output to quickly access the primary result of the sampling process.

avg_time_per_token

The avg_time_per_token output indicates the average time taken to generate each token in the model's response. This metric can provide insights into the model's efficiency and help identify performance bottlenecks. Use this output to monitor and optimize the model's speed.
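This metric is typically just elapsed generation time divided by the number of generated tokens, as in this hypothetical helper:

```python
def avg_time_per_token(elapsed_seconds, completion_tokens):
    """Average generation time per token, in seconds per token.
    Returns 0.0 when no tokens were generated, to avoid division by zero."""
    if completion_tokens <= 0:
        return 0.0
    return elapsed_seconds / completion_tokens
```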

debug_info

The debug_info output contains any debugging information generated during the sampling process. This can include details about the number of responses generated and other relevant metrics. Use this output to troubleshoot and refine the model's behavior.

sampler_config

The sampler_config output provides the configuration settings used during the sampling process. This includes both default and non-default parameters, offering a comprehensive view of the sampling setup. Use this output to review and adjust the model's configuration as needed.

Advanced VLM Sampler Usage Tips:

  • Ensure that the provider_config is correctly set up by connecting a VLMProviderConfig node to avoid configuration errors.
  • Use the temperature and top_p parameters to balance creativity and coherence in the model's responses, adjusting them based on the desired output style.
  • Enable the stream parameter for applications that require real-time interaction, allowing the model to send partial responses as they are generated.
  • Monitor the avg_time_per_token output to identify performance bottlenecks and optimize the model's speed for time-sensitive applications.

Advanced VLM Sampler Common Errors and Solutions:

Provider config required. Connect a VLMProviderConfig node.

  • Explanation: This error occurs when the provider_config parameter is not set, preventing the node from connecting to the VLM service.
  • Solution: Ensure that a VLMProviderConfig node is connected and properly configured with the necessary provider details.

Invalid temperature value

  • Explanation: This error indicates that the temperature parameter is set to an invalid value, which can affect the randomness of the model's responses.
  • Solution: Verify that the temperature parameter is set to a valid value (greater than or equal to 0) to ensure proper functioning of the node.

Timeout exceeded

  • Explanation: This error occurs when the model takes too long to generate a response, exceeding the specified timeout value.
  • Solution: Increase the timeout parameter to allow more time for the model to generate a response, or optimize the model's configuration to improve its speed.

Advanced VLM Sampler Related Nodes

Go back to the extension to check out more related nodes.
Shrug-Prompter: Unified VLM Integration for ComfyUI