
ComfyUI Img2Vid | Morphing Animation

Workflow Name: RunComfy/Morphing-Animation
Workflow ID: 0000...1112
Developed by Titto13 and based on the exceptional work of ipiv, this ComfyUI Img2Vid workflow focuses on dynamic image generation and adjustment. It consists of AnimateDiff LCM, IPAdapter, QRCode ControlNet, and Custom Mask modules, each of which plays a crucial role in the Img2Vid process, producing smooth transitions and high-quality morphing animations that make it an invaluable tool for animators.

Core Components of the Img2Vid workflow for morphing animation

1. AnimateDiff LCM Module:

Integrates the AnimateLCM model into the AnimateDiff setup to accelerate the rendering process. AnimateLCM speeds up video generation by reducing the number of inference steps required and improves result quality through decoupled consistency learning. This allows the use of models that typically do not produce high-quality results, making AnimateLCM an effective tool for creating detailed animations.
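The workflow wires AnimateLCM into ComfyUI nodes, but the underlying idea, loading the AnimateLCM motion adapter plus its LCM LoRA so only a handful of low-CFG steps are needed, can be sketched outside ComfyUI with the Hugging Face diffusers AnimateDiff pipeline. The checkpoint, step count, and guidance scale below are illustrative assumptions rather than this workflow's exact settings:

```python
# Sketch: AnimateLCM-accelerated AnimateDiff via diffusers (a stand-in for the ComfyUI node graph).
# Model repos, step count, and guidance scale are illustrative assumptions.
import torch
from diffusers import AnimateDiffPipeline, LCMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

adapter = MotionAdapter.from_pretrained("wangfuyun/AnimateLCM", torch_dtype=torch.float16)
pipe = AnimateDiffPipeline.from_pretrained(
    "emilianJR/epiCRealism",            # any SD 1.5 checkpoint can stand in here
    motion_adapter=adapter,
    torch_dtype=torch.float16,
)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear")
pipe.load_lora_weights(
    "wangfuyun/AnimateLCM",
    weight_name="AnimateLCM_sd15_t2v_lora.safetensors",
    adapter_name="lcm-lora",
)
pipe.set_adapters(["lcm-lora"], [0.8])
pipe.enable_model_cpu_offload()

frames = pipe(
    prompt="a watercolor landscape morphing through the seasons",
    negative_prompt="bad quality, low resolution",
    num_frames=16,
    num_inference_steps=6,   # LCM needs far fewer steps than a standard sampler
    guidance_scale=2.0,      # LCM works best with low CFG
    generator=torch.Generator("cpu").manual_seed(0),
).frames[0]
export_to_gif(frames, "animatelcm_preview.gif")
```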

2. IPAdapter Module:

Utilizes the attention mask function of IPAdapter to achieve morphing between reference images. Users can generate dedicated attention masks for each image, ensuring smooth transitions in the final video.
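Conceptually, morphing works by giving each reference image an attention mask whose weight ramps across the frame sequence, so the first image's influence fades out while the next image's fades in. The helper below is a hypothetical NumPy-only sketch of that mask schedule for two images; the frame count and ease-in-out ramp are assumptions, and in the actual workflow the IPAdapter nodes consume masks like these:

```python
# Hypothetical sketch: per-image attention-mask schedules for a two-image morph.
# Frame count and ramp shape are assumptions; the ComfyUI IPAdapter nodes would
# receive one mask batch per reference image.
import numpy as np

def morph_attention_masks(num_frames: int, height: int, width: int):
    """Return two (num_frames, height, width) float arrays in [0, 1].

    Mask A starts fully on and ramps down; mask B is its complement,
    so attention smoothly hands off from image A to image B.
    """
    t = np.linspace(0.0, 1.0, num_frames)      # 0 = first frame, 1 = last frame
    ramp = 0.5 - 0.5 * np.cos(np.pi * t)       # ease-in-out ramp between 0 and 1
    mask_b = np.broadcast_to(ramp[:, None, None], (num_frames, height, width)).copy()
    mask_a = 1.0 - mask_b
    return mask_a, mask_b

mask_a, mask_b = morph_attention_masks(num_frames=48, height=64, width=64)
print(mask_a[0].mean(), mask_a[-1].mean())     # ~1.0 at the start, ~0.0 at the end
```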

3. QRCode ControlNet Module:

Uses a black-and-white video as the input for the ControlNet QRCode model, guiding the animation flow and enhancing the visual dynamics of the morphing sequence.
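Any high-contrast black-and-white clip can serve as this guidance input; it is the motion of the white regions that shapes the morphing flow. As an illustrative assumption (not the input bundled with the workflow), the snippet below renders a simple expanding-circle driving sequence as a stack of PNG frames that can then be assembled into a short clip and fed to the ControlNet:

```python
# Illustrative sketch: generate a black-and-white driving sequence for the QRCode ControlNet.
# The pattern (an expanding circle), resolution, and frame count are assumptions;
# any moving high-contrast mask video can serve as the guidance input.
import numpy as np
from PIL import Image

num_frames, size = 48, 512
yy, xx = np.mgrid[0:size, 0:size]
dist = np.sqrt((xx - size / 2) ** 2 + (yy - size / 2) ** 2)

for i in range(num_frames):
    radius = (i + 1) / num_frames * (size / 2)          # white circle grows over the clip
    frame = np.where(dist <= radius, 255, 0).astype(np.uint8)
    Image.fromarray(frame, mode="L").save(f"qr_drive_{i:03d}.png")
```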

4. Mask Module:

Provides three preset masks and allows users to load custom masks. All these masks can be switched with a simple one-click operation to achieve various effects.

How to use the Img2Vid workflow to create morphing animations

1. Image Loading and Mask Application

  • Image Loading: Load images into the "Load White" and "Load Black" nodes. The workflow includes various image masks that users can select based on their needs.
  • Mask Processing: Masks can be selected by clicking "Action!", and custom masks can be uploaded and applied, adding flexibility.

2. Image Adjustment

  • Image Rotation, Cropping, and Flipping: Adjust images using the "Rotate Mask" and "Flip Image" functions to achieve the desired effects. The "Fast Crop" function lets you choose between center cropping and adding black borders so that images fit the frame, while the "Detail Crop" function crops specific details out of an image. (This feature gives you more control over your creations, so enable it if you need to!) A brief cropping sketch follows below.
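The two "Fast Crop" fitting strategies, center cropping versus padding with black borders (letterboxing), can be sketched with Pillow. The target size and function names are assumptions for illustration and are not the workflow's node parameters:

```python
# Hypothetical sketch of the two "Fast Crop" fitting strategies using Pillow.
# Target size and function names are assumptions for illustration only.
from PIL import Image, ImageOps

def fit_center_crop(img: Image.Image, size: tuple[int, int]) -> Image.Image:
    """Scale to cover the target, then crop the overflow from the center."""
    return ImageOps.fit(img, size, method=Image.Resampling.LANCZOS, centering=(0.5, 0.5))

def fit_letterbox(img: Image.Image, size: tuple[int, int]) -> Image.Image:
    """Scale to fit inside the target, then pad the remainder with black borders."""
    return ImageOps.pad(img, size, method=Image.Resampling.LANCZOS, color=(0, 0, 0))

if __name__ == "__main__":
    src = Image.open("reference.png").convert("RGB")
    fit_center_crop(src, (512, 512)).save("reference_center_crop.png")
    fit_letterbox(src, (512, 512)).save("reference_letterbox.png")
```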

3. Parameter Adjustment

  • AnimateDiff - Motion Scale: Adjusting this parameter changes the animation's fluidity. Increasing the value adds more movement but may reduce detail quality. The recommended range is 1.000-1.300, with experimentation encouraged.
  • QRCode ControlNet - Strength and End Percent: These parameters control the animation's intensity and transition effect. Generally, adjust "Strength" between 0.4 and 1.0 and "End Percent" between 0.350 and 0.800.
  • Mask - Force Rate: Set to "0" for initial speed or "12" for accelerated and doubled cycles. Adjust this value based on animation length and effect needs.
  • IPAdapter - Preset: It is recommended to use the "VIT-G" preset for more stable results. For results closer to the original images, switch to the "PLUS" preset and set "weight_type" to "ease in-out." A consolidated settings sketch with these ranges follows this list.
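For convenience, the recommended ranges above can be collected in one place. The key names in this sketch are hypothetical labels for the corresponding node widgets, not actual ComfyUI field names:

```python
# Hypothetical summary of the recommended parameter ranges from this guide.
# Keys are illustrative labels, not the actual ComfyUI widget names.
morphing_settings = {
    "animatediff_motion_scale": 1.15,   # recommended range: 1.000-1.300
    "qrcode_controlnet": {
        "strength": 0.7,                # adjust between 0.4 and 1.0
        "end_percent": 0.5,             # adjust between 0.350 and 0.800
    },
    "mask_force_rate": 0,               # 0 for initial speed, 12 for accelerated and doubled cycles
    "ipadapter": {
        "preset": "VIT-G",              # more stable; use "PLUS" for results closer to the originals
        "weight_type": "ease in-out",   # pair with the "PLUS" preset
    },
}
```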

For more information and to view the original work, please visit the Civitai page of the author Titto13.

