
ComfyUI Node: SeC Video Segmentation

Class Name

SeCVideoSegmentation

Category
SeC
Author
9nate-drake (Account age: 2190 days)
Extension
Comfyui-SecNodes
Last Updated
2025-10-18
GitHub Stars
0.34K

How to Install Comfyui-SecNodes

Install this extension via the ComfyUI Manager by searching for Comfyui-SecNodes
  1. Click the Manager button in the main menu
  2. Select the Custom Nodes Manager button
  3. Enter Comfyui-SecNodes in the search bar
After installation, click the Restart button to restart ComfyUI. Then, manually refresh your browser to clear the cache and access the updated list of nodes.

SeC Video Segmentation Description

A concept-driven video object segmentation node that uses Large Vision-Language Models to track objects efficiently across video frames.

SeC Video Segmentation:

SeCVideoSegmentation is a node for concept-driven video object segmentation that leverages a Large Vision-Language Model to extract visual concepts. It combines visual features with semantic reasoning to segment and track objects across video frames. The node supports multiple prompt types, including points, bounding boxes, and masks, and adapts its computational effort to the complexity of the scene. From a simple visual prompt it automatically builds a concept of the target object, providing robust object tracking and making it a valuable tool for AI artists looking to enhance their video editing and analysis projects.

SeC Video Segmentation Input Parameters:

model

The model parameter is the pre-trained model used for video segmentation. It contains the learned weights and architecture needed to understand and segment objects in video frames, and is typically loaded with a Model Loader node. The model must be compatible with the SeCVideoSegmentation node and be correctly configured and loaded before use.

frames

The frames parameter is a 4D tensor representing the video frames to be processed. It must have the shape [batch, height, width, channels], where each frame is an image in the video sequence. This parameter is essential as it provides the visual data for segmentation. The minimum requirement is at least one frame, and the tensor must not be empty.
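
For reference, here is a minimal sketch of a frames tensor in the [batch, height, width, channels] layout described above. The float 0-1 value range follows ComfyUI's usual IMAGE convention; the 24-frame 512x512 clip is purely illustrative.

    import torch

    # Illustrative only: 24 RGB frames of 512x512, float values in [0, 1]
    frames = torch.rand(24, 512, 512, 3, dtype=torch.float32)

    # The node expects a non-empty 4D tensor: [batch, height, width, channels]
    assert frames.ndim == 4 and frames.shape[0] >= 1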

positive_points

The positive_points parameter allows you to specify points in the video frames that are positively associated with the object of interest. These points help guide the segmentation process by indicating areas that should be included in the object mask. This parameter is optional and can be left empty if not needed.

negative_points

The negative_points parameter is used to specify points in the video frames that are negatively associated with the object of interest. These points help refine the segmentation by indicating areas that should be excluded from the object mask. Like positive_points, this parameter is optional and can be left empty if not needed.
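
The exact way point prompts are wired up depends on the extension (they are typically supplied by an upstream points/coordinates node), but conceptually both parameters are lists of (x, y) pixel coordinates on the annotation frame. The coordinates below are hypothetical and only illustrate the idea:

    # Hypothetical (x, y) pixel coordinates on the annotation frame.
    # Points on the object guide inclusion; points on the background
    # guide exclusion.
    positive_points = [(256, 180), (300, 210)]  # on the object of interest
    negative_points = [(50, 60)]                # on the background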

bbox

The bbox parameter allows you to define a bounding box around the object of interest in the video frames. This provides a more structured prompt for the segmentation process, helping the model focus on a specific region. The bounding box is optional and can be omitted if not required.
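
Bounding boxes are commonly given as (x_min, y_min, x_max, y_max) in pixels; the value below is a hypothetical illustration and may differ from the exact format the node or an upstream bbox node emits:

    # Hypothetical bounding box around the object, in pixels:
    # (x_min, y_min, x_max, y_max) on the annotation frame.
    bbox = (120, 80, 420, 360)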

input_mask

The input_mask parameter is an optional mask that can be provided to guide the segmentation process. It represents areas in the video frames that are already known to belong to the object of interest, helping to refine the segmentation results.

tracking_direction

The tracking_direction parameter specifies the direction in which the object tracking should occur. It can be set to "forward" to track objects from the start to the end of the video or "backward" to track from the end to the start. This parameter helps control the flow of the segmentation process.

annotation_frame_idx

The annotation_frame_idx parameter indicates the index of the frame where the initial annotation or prompt is provided. It must be a non-negative integer, as it determines the starting point for the segmentation process.

object_id

The object_id parameter assigns a unique identifier to the object being segmented. This helps distinguish between different objects in the video and is particularly useful when multiple objects are being tracked simultaneously.

max_frames_to_track

The max_frames_to_track parameter limits the number of frames to be processed for object tracking. A value of -1 indicates that all frames should be tracked. This parameter helps manage computational resources by restricting the scope of the segmentation task.

mllm_memory_size

The mllm_memory_size parameter controls the memory size of the multimodal large language model (MLLM) during segmentation. It affects how much information the model retains across frames; the default value is 12. Adjusting this parameter can impact segmentation quality and performance.

offload_video_to_cpu

The offload_video_to_cpu parameter is a boolean flag that determines whether video processing should be offloaded to the CPU. This can be useful for managing GPU resources, especially when dealing with large video files or limited GPU memory.

auto_unload_model

The auto_unload_model parameter is a boolean flag that specifies whether the model should be automatically unloaded from memory after processing. This helps free up resources and is particularly useful when working with multiple models or limited memory.

SeC Video Segmentation Output Parameters:

masks

The masks output parameter provides the segmented masks for the objects in the video frames. Each mask is a binary image indicating the presence of the object in the corresponding frame. These masks are crucial for visualizing and analyzing the segmented objects, allowing you to see the results of the segmentation process.
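
To show how the masks relate to the input frames, here is a minimal sketch that keeps only the segmented object in each frame. It assumes the masks arrive as a [batch, height, width] float tensor in the 0-1 range (a common ComfyUI MASK layout); adjust if the actual output differs.

    import torch

    def apply_masks(frames: torch.Tensor, masks: torch.Tensor) -> torch.Tensor:
        """Zero out everything outside the segmented object in each frame.

        frames: [batch, height, width, channels], float in [0, 1]
        masks:  [batch, height, width], float in [0, 1] (assumed layout)
        """
        return frames * masks.unsqueeze(-1)  # broadcast the mask over channels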

object_ids

The object_ids output parameter contains the unique identifiers for the objects that have been segmented in the video. These identifiers help distinguish between different objects, especially when multiple objects are being tracked simultaneously. They are essential for understanding which mask corresponds to which object.

SeC Video Segmentation Usage Tips:

  • Ensure that the model is correctly loaded and compatible with the SeCVideoSegmentation node to avoid runtime errors.
  • Use clear and precise visual prompts, such as points or bounding boxes, to improve the accuracy of the segmentation results.
  • Adjust the mllm_memory_size parameter to balance between memory usage and segmentation quality, especially for complex scenes.
  • Consider offloading video processing to the CPU if you encounter GPU memory limitations, but be aware that this may impact processing speed.

SeC Video Segmentation Common Errors and Solutions:

"Model has been unloaded and auto-reload failed."

  • Explanation: This error occurs when the model has been unloaded from memory and the automatic reload process fails.
  • Solution: Reload the model using a fresh Model Loader node to ensure compatibility and proper loading.

"Frames tensor is empty. Please provide at least one frame."

  • Explanation: This error indicates that the frames tensor provided to the node is empty or not properly formatted.
  • Solution: Ensure that the frames tensor is populated with at least one frame and follows the required 4D [batch, height, width, channels] shape; a quick validation sketch is shown below.
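
A pre-check along these lines (a sketch, assuming the frames arrive as a torch tensor) can catch the problem before the node runs:

    import torch

    def check_frames(frames: torch.Tensor) -> None:
        # The node expects a non-empty 4D tensor: [batch, height, width, channels]
        if frames is None or frames.numel() == 0:
            raise ValueError("Frames tensor is empty. Please provide at least one frame.")
        if frames.ndim != 4:
            raise ValueError(f"Expected [batch, height, width, channels], got shape {tuple(frames.shape)}")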

"annotation_frame_idx must be >= 0"

  • Explanation: This error occurs when the annotation_frame_idx parameter is set to a negative value.
  • Solution: Set the annotation_frame_idx to a non-negative integer to specify a valid starting frame for annotation.

SeC Video Segmentation Related Nodes

Go back to the extension to check out more related nodes.
Comfyui-SecNodes