ComfyUI Node: Loop Aware VLM Accumulator

Class Name

LoopAwareVLMAccumulator

Category
VLM/Loop
Author
fblissjr (Account age: 4014 days)
Extension
Shrug-Prompter: Unified VLM Integration for ComfyUI
Last Updated
2025-09-30
GitHub Stars
0.02K

How to Install Shrug-Prompter: Unified VLM Integration for ComfyUI

Install this extension via the ComfyUI Manager by searching for Shrug-Prompter: Unified VLM Integration for ComfyUI
  1. Click the Manager button in the main menu.
  2. Select the Custom Nodes Manager button.
  3. Enter Shrug-Prompter: Unified VLM Integration for ComfyUI in the search bar.
After installation, click the Restart button to restart ComfyUI. Then, manually refresh your browser to clear the cache and access the updated list of nodes.


Loop Aware VLM Accumulator Description

Accumulates VLM responses in loops, maintaining state for batch processing and iterative analysis.

Loop Aware VLM Accumulator:

The LoopAwareVLMAccumulator is a specialized node designed to work inside ForLoop structures, accumulating Visual Language Model (VLM) responses across iterations. It is useful whenever state must persist between loop iterations, such as in batch processing or when collecting multiple responses. The node is compatible with the original BatchVLMAccumulator behavior, so it handles both single and batch responses. Its primary goal is to accumulate responses while offering flexible control over how those responses are extracted and managed, making it a practical tool for AI artists who need to process and analyze large sets of data iteratively.

Loop Aware VLM Accumulator Input Parameters:

context

The context parameter accepts any type of input, declared with ComfyUI's wildcard type "*". This flexibility allows the node to process a wide range of data inputs, making it adaptable to various use cases. The context is crucial as it forms the basis of the data that will be accumulated and processed within the loop.

accumulator_id

The accumulator_id is a string parameter that uniquely identifies the accumulator instance. By default, it is set to "default". This ID is essential for distinguishing between different accumulators, especially when multiple accumulations are happening simultaneously. It ensures that the correct data is accessed and manipulated during the loop execution.

reset

The reset parameter is a boolean that determines whether the accumulator should be reset at the start of the loop. By default, it is set to False. When set to True, it clears the current state of the accumulator, allowing for a fresh start in data accumulation. This is useful when you want to ensure that previous data does not interfere with the current loop's processing.

extract_mode

The extract_mode parameter offers three options: "all", "responses_only", and "first_response", with the default being "responses_only". This parameter controls how the accumulated responses are extracted and presented. Choosing the appropriate mode can optimize the node's performance based on the specific requirements of your task, such as whether you need all responses or just the first one.
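The three extraction modes can be sketched as follows. This is a hypothetical illustration of the behavior described above, not the node's actual source; the internal representation of accumulated data as (context, response) pairs is an assumption.

```python
def extract(accumulated, mode="responses_only"):
    """Sketch of the three extract_mode behaviors.

    `accumulated` is assumed to be a list of (context, response)
    pairs; the real node's internal structure may differ.
    """
    responses = [resp for _, resp in accumulated]
    if mode == "all":
        return accumulated       # contexts and responses together
    if mode == "first_response":
        return responses[:1]     # only the earliest response
    return responses             # default: "responses_only"

pairs = [("frame_1", "a cat on a sofa"), ("frame_2", "a dog outside")]
```

With these sample pairs, "responses_only" yields just the caption strings, "first_response" yields a single-element list, and "all" passes the pairs through unchanged.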

clear_all

The clear_all parameter is a boolean that, when set to True, clears all accumulators across all IDs. By default, it is False. This option is particularly useful when you need to reset the entire accumulation state across different instances, ensuring no residual data affects new operations.
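Taken together, the five inputs above could be declared in ComfyUI's INPUT_TYPES convention roughly as below. This is a minimal sketch based on the parameter descriptions in this document; the actual extension's declaration may differ.

```python
class LoopAwareVLMAccumulator:
    """Hypothetical input declaration mirroring the documented parameters."""

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "context": ("*",),  # wildcard: accepts any upstream type
                "accumulator_id": ("STRING", {"default": "default"}),
                "reset": ("BOOLEAN", {"default": False}),
                "extract_mode": (
                    ["all", "responses_only", "first_response"],
                    {"default": "responses_only"},
                ),
                "clear_all": ("BOOLEAN", {"default": False}),
            }
        }
```

Defaults match the documentation: accumulator_id "default", reset and clear_all False, and extract_mode "responses_only".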

Loop Aware VLM Accumulator Output Parameters:

accumulator

The accumulator output provides the current state of the accumulator, which includes all the contexts and responses collected during the loop execution. This output is crucial for understanding the data that has been processed and is ready for further analysis or use.

responses

The responses output is a list containing all the responses accumulated during the loop. This output is particularly important for tasks that require analyzing or utilizing the responses generated by the VLM, providing a comprehensive view of the data collected.

total_count

The total_count output is an integer representing the total number of responses accumulated. This count is useful for understanding the scale of the data processed and can be used for validation or further processing steps.

debug_info

The debug_info output is a string that provides additional information about the accumulation process, which can be helpful for debugging and ensuring that the node is functioning as expected. It offers insights into the internal workings of the node, aiding in troubleshooting and optimization.

Loop Aware VLM Accumulator Usage Tips:

  • Use the reset parameter to ensure that each loop iteration starts with a clean state, preventing previous data from affecting current operations.
  • Select the appropriate extract_mode based on your task requirements. For instance, use "responses_only" if you only need the responses, which can improve performance by reducing unnecessary data processing.
  • Utilize the clear_all option when you need to reset all accumulators, especially in complex workflows involving multiple loops or accumulators.

Loop Aware VLM Accumulator Common Errors and Solutions:

Accumulator ID not found

  • Explanation: This error occurs when the specified accumulator_id does not exist in the current session.
  • Solution: Ensure that the accumulator_id is correctly specified and matches an existing accumulator. If necessary, initialize a new accumulator with the desired ID.

Index out of range

  • Explanation: This error happens when attempting to access a response index that exceeds the total number of accumulated responses.
  • Solution: Verify the index used for response retrieval and ensure it is within the bounds of the accumulated responses. Check the total_count output before indexing, and adjust the index accordingly.
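A simple bounds check against total_count avoids this error. The helper below is a hypothetical sketch, not part of the extension:

```python
def get_response(responses, index):
    """Return responses[index], validating against the total count first."""
    total_count = len(responses)
    if not 0 <= index < total_count:
        raise IndexError(
            f"index {index} out of range for {total_count} accumulated responses"
        )
    return responses[index]
```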
