Specialized tool for merging models with different block structures, accommodating varying block numbers and image-to-video models.
The ModelMergeWAN2_1 node is a specialized tool for merging two Wan 2.1 models and supports both block layouts in that family: 30 blocks for the 1.3B model and 40 blocks for the 14B model. The node is part of the advanced model merging category and also accommodates image-to-video models, which include an extra image embedding (img_emb). Its primary purpose is to integrate two models while giving fine-tuned control over the merging process through adjustable per-component ratios. This capability is particularly useful for AI artists and developers who want to experiment with model blending to achieve unique outputs or to combine the strengths of different models.
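Conceptually, each ratio described below blends the corresponding weights of the two input models. The following is a minimal sketch of that idea, not ComfyUI's internal implementation; the function name blend_component and the exact mapping of a ratio value onto model1 versus model2 are illustrative assumptions that you should verify against your ComfyUI version.

```python
import torch

# Minimal sketch (not ComfyUI source): linearly interpolate the weights of one
# component, e.g. every tensor whose key starts with "patch_embedding.".
# The blend direction (which endpoint keeps model1) is an assumption to verify.
def blend_component(sd1: dict, sd2: dict, prefix: str, ratio: float) -> dict:
    merged = {}
    for key, w1 in sd1.items():
        if key.startswith(prefix) and key in sd2:
            merged[key] = ratio * w1 + (1.0 - ratio) * sd2[key]
        else:
            merged[key] = w1  # weights outside this component carry over unchanged
    return merged
```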
model1 is the first model to be merged. It is a required input and serves as one of the two primary models involved in the merging process. The model should be compatible with the node's merging capabilities.
model2 is the second model to be merged. Like model1, it is a required input and is the counterpart to the first model. The merging process blends the features of this model with those of model1.
patch_embedding. controls the blending ratio for the patch embedding component. It is a float value ranging from 0.0 to 1.0, with a default of 1.0, and determines how heavily the patch embedding weights of model1 and model2 are mixed in the merged model.
time_embedding. adjusts the blending ratio for the time embedding component. It functions like patch_embedding., with a float range from 0.0 to 1.0 and a default of 1.0, and determines how the time embedding weights of the two models are combined.
time_projection. manages the blending ratio for the time projection component. It follows the same float range and default as the previous parameters and affects how the time projection weights are blended.
text_embedding. sets the blending ratio for the text embedding component, allowing fine-grained control over how the text embedding weights of the two models are mixed, with a float range from 0.0 to 1.0 and a default of 1.0.
img_emb. is specific to image-to-video models and controls the blending ratio for the image embedding component. It shares the same float range and default as the other parameters.
head. adjusts the blending ratio for the head component of the models. It follows the same float range and default as the other parameters. The node also exposes one ratio per transformer block (blocks.0. through blocks.39.), which are matched against weight names as illustrated in the sketch below.
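Because the ratio parameters are named after weight-key prefixes, each individual weight takes the ratio of the most specific prefix that matches its key, so a setting on blocks.12. applies only to the weights inside block 12. The sketch below illustrates that longest-prefix lookup; it is a simplified stand-in for ComfyUI's block-merge logic, and the example weight keys are hypothetical.

```python
# Illustrative longest-prefix lookup: a weight key inherits the ratio of the
# most specific matching parameter name (not ComfyUI source code).
def resolve_ratio(weight_key: str, ratios: dict, default: float = 1.0) -> float:
    best_prefix, best_ratio = "", default
    for prefix, ratio in ratios.items():
        if weight_key.startswith(prefix) and len(prefix) > len(best_prefix):
            best_prefix, best_ratio = prefix, ratio
    return best_ratio

ratios = {"patch_embedding.": 1.0, "blocks.12.": 0.25, "head.": 0.5}
print(resolve_ratio("blocks.12.self_attn.q.weight", ratios))  # 0.25 (hypothetical key)
print(resolve_ratio("head.weight", ratios))                   # 0.5
print(resolve_ratio("blocks.3.ffn.0.weight", ratios))         # 1.0, falls back to the default
```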
The output of the ModelMergeWAN2_1 node is a merged model, represented as MODEL. This output is the result of blending the two input models (model1 and model2) based on the specified parameters. The merged model combines the strengths and features of both input models, potentially leading to enhanced performance or unique characteristics.
Adjusting text_embedding. can enhance text-related features in the merged model. Use the per-block parameters (blocks.0. to blocks.39.) to fine-tune the integration of individual blocks, allowing for precise control over the model's architecture and behavior.
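For example, rather than typing forty values by hand, a graduated per-block schedule can be generated and then copied into the node's blocks.0. through blocks.39. inputs. The sketch below only builds such a dictionary; the helper name graduated_block_ratios is an assumption, not part of ComfyUI, and how you apply the values (manually or via an API-format workflow) depends on your setup.

```python
# Illustrative helper: evenly spaced per-block ratios for blocks.0. .. blocks.39.
# (40 blocks matches the 14B layout; pass num_blocks=30 for the 1.3B layout).
def graduated_block_ratios(start: float = 1.0, end: float = 0.0, num_blocks: int = 40) -> dict:
    return {
        f"blocks.{i}.": round(start + (end - start) * i / (num_blocks - 1), 3)
        for i in range(num_blocks)
    }

ratios = graduated_block_ratios()
print(ratios["blocks.0."], ratios["blocks.39."])  # 1.0 0.0
```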