
WAN 2.2 Palingenesis | AI Video Generator

Workflow Name: WAN 2.2 Palingenesis
Workflow ID: 0000...1296
With the next-gen Palingenesis model, you can swiftly turn still images or written prompts into vivid, story-driven motion clips. This workflow gives you tighter control over motion consistency, lighting, and fidelity. Generate expressive and cinematic scenes that retain character detail across frames. Ideal for concept artists, storytellers, and animators seeking seamless visual continuity. Elevate your creative flow with powerful image-to-video and text-to-video tools designed for precision and artistic quality.

WAN 2.2 Palingenesis I2V + T2V Workflow for ComfyUI

This workflow brings WAN 2.2 Palingenesis into a streamlined ComfyUI graph for both Image-to-Video and Text-to-Video generation. It is designed for creators who want cinematic motion, strong prompt adherence, and consistent visual coherence with minimal setup friction. Use WAN 2.2 Palingenesis to turn stills into dynamic sequences or to synthesize videos directly from text, then finish with optional RIFE interpolation for ultra-smooth playback.

Two independent paths are included. The I2V path accepts a single image, encodes it into video latents, and performs a two-stage denoising pass to stabilize motion. The T2V path encodes text with a T5-based encoder and uses a similar high-then-low two-stage sampler for detail and fidelity. Both paths decode to frames and can export straight to MP4, with an optional RIFE stage for extra temporal smoothness. Throughout, WAN 2.2 Palingenesis LoRA support enables fast style and character conditioning without retraining.
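To make the two-stage handoff concrete, here is a minimal sketch of how a high/low split is typically coordinated. The step counts below are illustrative placeholders, not values taken from this workflow:

```python
# Illustrative stage split: the "high" sampler covers the early denoising
# steps and the "low" sampler resumes from the same latent. All numbers
# here are hypothetical, not values read from the workflow graph.
total_steps = 30
split = total_steps // 2

high_stage = {"start_step": 0, "end_step": split}           # structure, motion
low_stage = {"start_step": split, "end_step": total_steps}  # detail, stability

# The two stages must share one timeline: the boundary must match exactly.
assert high_stage["end_step"] == low_stage["start_step"]
```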

Key models in the ComfyUI WAN 2.2 Palingenesis workflow

  • WAN22.XX Palingenesis (I2V/T2V family). The primary video generation weights used in two flavors per task: “high” to establish structure and motion, and “low” to refine and stabilize later steps. Model cards and community builds: eddy1111111/WAN22.XX_Palingenesis and befox/WAN22.XX_Palingenesis-GGUF.
  • Wan 2.1 VAE. The variational autoencoder that encodes frames into video latents and decodes them back to images, preserving detail while keeping the latent space compact. See the weights included alongside the Palingenesis models in the repository above.
  • UMT5-XXL Text Encoder. A multilingual T5-family encoder that converts prompts into conditioning for Text-to-Video. Reference implementation and model family: google/umt5-xxl.
  • RIFE (Real-Time Intermediate Flow Estimation). Optional neural frame interpolation to increase temporal smoothness and perceived frame rate without re-generating content. Official repository: hzwer/Practical-RIFE.
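If you prefer to fetch the weights manually rather than through a model manager, a minimal sketch using huggingface_hub follows. The repo IDs come from the list above; the local directories and the quantization filter are assumptions to adapt to your own ComfyUI models folder:

```python
# Sketch: download the Palingenesis repositories listed above.
# Local paths are placeholders; point them at your ComfyUI models folder.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="eddy1111111/WAN22.XX_Palingenesis",
    local_dir="models/diffusion_models/WAN22.XX_Palingenesis",
)
snapshot_download(
    repo_id="befox/WAN22.XX_Palingenesis-GGUF",
    local_dir="models/diffusion_models/WAN22.XX_Palingenesis-GGUF",
    allow_patterns=["*Q4*"],  # hypothetical filter: fetch one quantization only
)
```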

How to use the ComfyUI WAN 2.2 Palingenesis workflow

The graph contains two main groups that can run independently. Each path follows the same high-level logic: encode inputs, denoise in two stages using WAN 2.2 Palingenesis weights, decode to frames, optionally interpolate with RIFE, and package the result as an MP4.

I2V group

Start by loading an image in LoadImage and set your target Width, Height, and Frames. The image is resized in ImageResizeKJv2 and encoded into video-aware latents by WanVideoImageToVideoEncode so the model can infer motion cues from the still. Enter your descriptive prompt in WanVideoTextEncode to steer content and camera behavior; LoRA styles can be added via WanVideoLoraSelectMulti when you want a consistent look or character. A first pass WanVideoSampler runs with a “high” I2V weight to establish composition and initial motion, and a second WanVideoSampler continues from that latent with a “low” I2V weight to refine details and stability. After decoding, you can export frames directly to MP4 or route them through the RIFE VFI node for smoother motion before final packaging.
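For orientation, the sketch below expresses the I2V chain in ComfyUI's API (JSON) format as a Python dict. The class_type names match the nodes described above, but the input keys and values are illustrative assumptions rather than the real WanVideoWrapper schema; treat this as a map of the wiring, not a runnable prompt:

```python
# Hedged sketch of the I2V wiring. class_type names are the nodes named
# above; the input keys and values are assumptions, not the real schema.
i2v_graph = {
    "1": {"class_type": "LoadImage",
          "inputs": {"image": "source.png"}},
    "2": {"class_type": "ImageResizeKJv2",
          "inputs": {"image": ["1", 0], "width": 832, "height": 480}},
    "3": {"class_type": "WanVideoTextEncode",
          "inputs": {"positive_prompt": "slow dolly-in, warm window light",
                     "negative_prompt": "flicker, artifacts"}},
    "4": {"class_type": "WanVideoImageToVideoEncode",
          "inputs": {"image": ["2", 0], "width": 832, "height": 480,
                     "num_frames": 81}},
    # Stage 1 ("high" I2V weights) then stage 2 ("low" I2V weights),
    # sharing one step boundary as described above.
    "5": {"class_type": "WanVideoSampler",
          "inputs": {"image_embeds": ["4", 0], "text_embeds": ["3", 0],
                     "start_step": 0, "end_step": 15}},
    "6": {"class_type": "WanVideoSampler",
          "inputs": {"samples": ["5", 0], "text_embeds": ["3", 0],
                     "start_step": 15}},
}
```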

T2V group

Provide your prompt in WanVideoTextEncode (T2V) and set your target video size and frame count. The workflow constructs empty image conditioning for pure text control and sends text embeddings into a two-stage sampling stack. The first WanVideoSampler with a “high” T2V weight locks in scene layout, subject, and motion trajectory; a second WanVideoSampler with a “low” T2V weight polishes textures, edges, and temporal consistency. The decoded frames are optionally passed through RIFE VFI for additional smoothness, then VHS_VideoCombine writes the final MP4. Use WanVideoLoraSelectMulti to mix in WAN 2.2 Palingenesis LoRAs when you need style transfer or character fidelity.
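The cfg schedule mentioned for the first T2V stage can be pictured as a curve over the denoising steps. A minimal sketch follows; the linear shape and the 7.0 to 4.0 range are hypothetical values, not the workflow's actual schedule:

```python
# Hypothetical cfg schedule: strong prompt adherence early (layout, motion),
# relaxing later so the "low" stage can refine more freely. The linear decay
# and the value range are illustrative only.
def cfg_schedule(total_steps: int, cfg_start: float = 7.0, cfg_end: float = 4.0):
    span = max(total_steps - 1, 1)
    return [cfg_start + (cfg_end - cfg_start) * i / span for i in range(total_steps)]

print([round(c, 2) for c in cfg_schedule(8)])
# [7.0, 6.57, 6.14, 5.71, 5.29, 4.86, 4.43, 4.0]
```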

Performance and output

Torch compile and block-swap settings are pre-wired to reduce VRAM pressure and speed up inference on long sequences. Two helper nodes are included to purge VRAM between stages when you batch runs back-to-back. Final videos are created with Video Helper Suite’s combine node, and a convenience saver also writes a representative preview frame. The entire layout is tuned so that you can iterate prompts quickly while keeping the heavier decoding and interpolation steps optional.
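Conceptually, these two optimizations reduce to standard PyTorch calls. The sketch below is not the wrapper's actual node code, just an illustration of what "compile" and "purge VRAM" helpers typically do:

```python
import gc
import torch
import torch.nn as nn

# Stand-in module; in the workflow this role is played by the Palingenesis
# video model loaded by the wrapper nodes.
model = nn.Sequential(nn.Linear(64, 64), nn.GELU(), nn.Linear(64, 64))
model = torch.compile(model)  # one-time compile cost, faster repeated steps

def purge_vram() -> None:
    """Roughly what a purge helper does between back-to-back runs."""
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
```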

Key nodes in the ComfyUI WAN 2.2 Palingenesis workflow

  • WanVideoTextEncode (#149). This node turns your prompt into text embeddings for WAN 2.2 Palingenesis. Strong, specific language helps the model resolve subject, setting, and camera motion; use brief negatives to suppress unwanted artifacts. If you apply LoRAs, align your prompt with the adapter’s intent for best results.

  • WanVideoImageToVideoEncode (#89). Converts the resized still into video-aware latents by considering the target width, height, and num_frames. For I2V, this is where content from the source image is injected. You can fine-tune how strongly the image constrains motion by adjusting the strength controls; enable tiled VAE when working at larger resolutions.

  • WanVideoSampler (#27). First-stage denoiser for I2V using a “high” Palingenesis I2V weight. It establishes motion, structure, and coarse details. Tune steps and cfg to trade off sharpness vs creativity; coordinate the stage boundary with the second sampler so end_step here lines up with start_step in the next stage.

  • WanVideoSampler (#140). First-stage denoiser for T2V using a “high” Palingenesis T2V weight. It lays down scene composition and motion while following your prompt. Use the schedule node that feeds cfg to modulate prompt adherence over time, then pass control to the second-stage sampler to refine.

  • WanVideoLoraSelectMulti (#129). Adds one or more WAN 2.2 Palingenesis LoRAs for style, subject, or motion priors. Start with a single adapter and increase its strength until the effect is visible but not overpowering. When stacking LoRAs, keep individual strengths moderate to avoid conflicting signals.

  • RIFE VFI (#117). Optional interpolation to boost smoothness and perceived frame rate. Increase the interpolation multiplier to create extra in-between frames; use the fast option for previews and the quality path for final renders. Interpolation works best when motion is already coherent, so fix flicker at the generation stage before relying on RIFE.
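As a quick sanity check on what the interpolation multiplier buys you, here is the usual VFI frame-count arithmetic. The convention of (multiplier − 1) in-between frames per source pair is an assumption to verify against the node's actual output:

```python
# Usual VFI arithmetic: each adjacent frame pair gains (multiplier - 1)
# in-betweens. Verify against the RIFE VFI node's actual output count.
def interpolated_frames(src_frames: int, multiplier: int) -> int:
    return (src_frames - 1) * multiplier + 1

src_frames, src_fps, mult = 81, 16, 2         # hypothetical clip settings
print(interpolated_frames(src_frames, mult))  # 161 frames
print(src_fps * mult)                         # 32 fps for the same duration
```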

Optional extras

  • Keep video dimensions divisible by common tile sizes to avoid artifacting and to maximize throughput (a small helper for this follows the list).
  • For long clips, raise frames first and only then increase resolution if VRAM allows.
  • If motion drifts from the source image in I2V, increase the image conditioning strength or reduce prompt aggressiveness.
  • Use seeds to reproduce takes you like, then vary only one control at a time (cfg, steps, or LoRA strength) to iterate with intention.
  • Prefer the “high” weights to establish structure and the “low” weights to refine; the split-step control ensures both stages share a single timeline.
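For the divisibility rule in the first bullet, a tiny helper like the one below is enough. The multiple of 16 is a common latent/tile granularity but an assumption here, so check it against the model you run:

```python
# Snap a requested dimension down to the nearest multiple of `m`. The default
# of 16 is a common video-VAE granularity, assumed rather than confirmed.
def snap(value: int, m: int = 16) -> int:
    return max(m, (value // m) * m)

width, height = snap(833), snap(481)
print(width, height)  # 832 480
```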

This ComfyUI graph gives you a practical, production-ready way to harness WAN 2.2 Palingenesis for both I2V and T2V, from first prompt to final MP4. For model references and updates, see the WAN 2.2 Palingenesis repositories on Hugging Face: eddy1111111/WAN22.XX_Palingenesis and befox/WAN22.XX_Palingenesis-GGUF, and the supporting components kijai/ComfyUI-WanVideoWrapper and kosinkadink/ComfyUI-VideoHelperSuite.

Acknowledgements

This workflow implements and builds upon the following works and resources. We gratefully acknowledge the WAN team and @AiVerse, creators of WAN 2.2 Palingenesis, for their contributions and maintenance. For authoritative details, please refer to the original documentation and repositories linked below.

Resources

  • WAN 2.2 Palingenesis
    • Docs / Release Notes: YouTube @Ai Verse

Note: Use of the referenced models, datasets, and code is subject to the respective licenses and terms provided by their authors and maintainers.
