
ComfyUI Node: MatAnyone2

Class Name

MatAnyone2

Category
MatAnyone
Author
Fuou Marinas (Account age: 2339 days)
Extension
ComfyUI-MatAnyone
Last Updated
2026-03-15
GitHub Stars
0.1K

How to Install ComfyUI-MatAnyone

Install this extension via the ComfyUI Manager by searching for ComfyUI-MatAnyone:
  1. Click the Manager button in the main menu.
  2. Select the Custom Nodes Manager button.
  3. Enter ComfyUI-MatAnyone in the search bar.
After installation, click the Restart button to restart ComfyUI, then manually refresh your browser to clear the cache and load the updated list of nodes.


MatAnyone2 Description

Advanced video matting model for precise object isolation and background manipulation in editing.
MatAnyone2 is a sophisticated video matting model designed to perform high-quality video segmentation and matting. It is particularly adept at isolating objects within video frames, enabling precise background removal or replacement. The model leverages advanced neural network components, including pixel encoders, mask encoders, and transformers, to process video frames and generate accurate alpha mattes, which are essential for creating seamless composites in video editing and visual effects.

MatAnyone2 handles both single-object and multi-object scenarios, providing flexibility for a range of video editing needs. Its components work together to make matting efficient and effective, making it a valuable tool for AI artists and video editors who want professional-grade video matting in their creative projects.

MatAnyone2 Input Parameters:

vframes

vframes is a tensor representing the video frames to be processed. It is expected to be in the format (T, C, H, W), where T is the number of frames, C is the number of color channels (typically 3 for RGB), and H and W are the height and width of the frames. This parameter is crucial as it provides the raw video data that the model will analyze to perform matting.

mask

mask is a tensor that serves as an initial mask for the video frames. It can be in the format (H, W) or (1, H, W) and can contain float values between 0 and 1 or integer values. This mask helps guide the model in identifying the regions of interest within the frames, which is essential for accurate matting.
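As a sketch of the two input layouts described above (NumPy arrays are used here purely for illustration; in ComfyUI these would be PyTorch tensors, and the specific dimensions are hypothetical):

```python
import numpy as np

# Hypothetical input preparation in the shapes MatAnyone2 expects.
T, C, H, W = 8, 3, 480, 640            # 8 RGB frames at 480x640

# vframes: (T, C, H, W) -- frames first, then channels, height, width
vframes = np.random.rand(T, C, H, W).astype(np.float32)

# mask: (H, W) with float values in [0, 1] (1 = region of interest)
mask = np.zeros((H, W), dtype=np.float32)
mask[100:300, 200:400] = 1.0           # mark a rectangular foreground region

assert vframes.shape == (T, C, H, W)
assert mask.shape == (H, W)
assert 0.0 <= float(mask.min()) and float(mask.max()) <= 1.0
```

A mask shaped (1, H, W) is also accepted per the description above; squeezing or expanding the leading axis converts between the two forms.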

processor

processor is an instance of the InferenceCore class, which manages the inference process. It coordinates the various components of the model to ensure that the matting is performed efficiently and accurately.

mask_frame

mask_frame is an integer that specifies the index of the frame for which the mask is applied. It helps the model focus on a specific frame when initializing the matting process. The default value is 0, and it must be within the range of available frames.

n_warmup

n_warmup is an integer that determines the number of warmup iterations to perform before the actual inference. This parameter helps stabilize the model's performance by allowing it to adjust to the input data. The default value is 10.

r_erode

r_erode is an integer that specifies the size of the erosion kernel applied to the mask. Erosion helps refine the mask by shrinking the highlighted regions, which can improve the precision of the matting. The default value is 0, indicating no erosion.

r_dilate

r_dilate is an integer that specifies the size of the dilation kernel applied to the mask. Dilation expands the highlighted regions in the mask, which can help capture more of the object of interest. The default value is 0, indicating no dilation.
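To illustrate what the r_erode and r_dilate kernels do to a binary mask, here is a naive morphology sketch (a square kernel of side 2r+1 is an assumption for illustration; the node's actual kernel shape and border handling may differ):

```python
import numpy as np

def morph(mask, r, op):
    """Naive binary morphology with a (2r+1) x (2r+1) square window.

    op='erode' shrinks the foreground (pixel stays 1 only if its whole
    window is 1); op='dilate' expands it (pixel becomes 1 if any pixel
    in its window is 1). r=0 returns the mask unchanged.
    """
    if r <= 0:
        return mask
    H, W = mask.shape
    # Pad so border windows are well-defined: 1s for erosion, 0s for dilation.
    padded = np.pad(mask, r, constant_values=1 if op == 'erode' else 0)
    reduce_fn = np.min if op == 'erode' else np.max
    out = np.empty_like(mask)
    for y in range(H):
        for x in range(W):
            out[y, x] = reduce_fn(padded[y:y + 2 * r + 1, x:x + 2 * r + 1])
    return out

# A 3x3 foreground block: erosion with r=1 leaves only the center pixel,
# dilation with r=1 grows it to a 5x5 block.
demo = np.zeros((7, 7), dtype=np.float32)
demo[2:5, 2:5] = 1.0
```

Erosion trims stray halo pixels around the subject; dilation recovers thin structures (hair, fingers) the initial mask missed.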

MatAnyone2 Output Parameters:

alpha_tensors

alpha_tensors is a list of tensors, each with the shape (1, 1, H, W), representing the alpha matte for each frame. The alpha matte is a crucial output that indicates the transparency level of each pixel in the frame, allowing for precise compositing of the foreground object onto a new background. This output is essential for achieving high-quality video matting results.
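The standard way to use an alpha matte of this shape is linear compositing, composite = alpha * foreground + (1 - alpha) * background. A minimal sketch (NumPy for illustration; the sizes and the flat-colored frames are hypothetical):

```python
import numpy as np

H, W = 4, 4

# One frame's alpha matte in the node's output shape (1, 1, H, W);
# 0.25 everywhere means the pixel is 25% foreground.
alpha = np.full((1, 1, H, W), 0.25, dtype=np.float32)

fg = np.ones((1, 3, H, W), dtype=np.float32)   # white foreground frame
bg = np.zeros((1, 3, H, W), dtype=np.float32)  # black replacement background

# The (1, 1, H, W) alpha broadcasts across the 3 color channels.
composite = alpha * fg + (1.0 - alpha) * bg

assert composite.shape == (1, 3, H, W)
```

Repeating this per frame over the alpha_tensors list yields the final composited video.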

MatAnyone2 Usage Tips:

  • Ensure that the video frames (vframes) are preprocessed correctly and are in the expected format to achieve optimal matting results.
  • Experiment with the r_erode and r_dilate parameters to refine the mask and improve the accuracy of the matting, especially in complex scenes with intricate details.
  • Utilize the n_warmup parameter to stabilize the model's performance, particularly when working with challenging video sequences or when the initial results are not satisfactory.

MatAnyone2 Common Errors and Solutions:

ValueError: Expected 0 <= x < {length}, got {mask_frame}

  • Explanation: This error occurs when the mask_frame parameter is set to a value that is outside the range of available frames in the video.
  • Solution: Ensure that the mask_frame value is within the valid range, i.e., between 0 and the total number of frames minus one.
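The range check behind this error can be sketched as a simple guard (the function name is hypothetical; it mirrors the error message quoted above):

```python
def check_mask_frame(mask_frame, n_frames):
    """Validate mask_frame against the number of available frames.

    Raises the same ValueError described above when the index falls
    outside [0, n_frames - 1].
    """
    if not 0 <= mask_frame < n_frames:
        raise ValueError(f"Expected 0 <= x < {n_frames}, got {mask_frame}")
    return mask_frame

# For an 8-frame clip, valid indices are 0 through 7.
check_mask_frame(0, 8)
check_mask_frame(7, 8)
```

If your mask was drawn on the last frame of a clip, pass `len(vframes) - 1` rather than the frame count itself.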

RuntimeError: CUDA out of memory

  • Explanation: This error indicates that the GPU does not have enough memory to process the video frames.
  • Solution: Reduce the resolution of the video frames or process the video in smaller batches to fit within the available GPU memory. Alternatively, consider upgrading the GPU or using a machine with more memory.
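Processing in smaller batches can be sketched as a simple chunking helper (a generic pattern, not the node's internal batching logic):

```python
def chunked(frames, chunk_size):
    """Yield successive slices of at most chunk_size frames, so only one
    chunk needs to reside in GPU memory at a time."""
    for start in range(0, len(frames), chunk_size):
        yield frames[start:start + chunk_size]

# 10 frames processed in chunks of 4 yields chunk lengths 4, 4, 2.
sizes = [len(chunk) for chunk in chunked(list(range(10)), 4)]
```

Each chunk would be moved to the GPU, matted, and its alpha tensors moved back to CPU memory before the next chunk is loaded.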

MatAnyone2 Related Nodes

Go back to the extension to check out more related nodes.
ComfyUI-MatAnyone