This workflow brings WAN 2.2 Palingenesis into a streamlined ComfyUI graph for both Image-to-Video and Text-to-Video generation. It is designed for creators who want cinematic motion, strong prompt adherence, and consistent visual coherence with minimal setup friction. Use WAN 2.2 Palingenesis to turn stills into dynamic sequences or to synthesize videos directly from text, then finish with optional RIFE interpolation for ultra-smooth playback.
Two independent paths are included. The I2V path accepts a single image, encodes it into video latents, and performs a two-stage denoising pass to stabilize motion. The T2V path encodes text with a T5-based encoder and uses a similar high-then-low two-stage sampler for detail and fidelity. Both paths decode to frames and can export straight to MP4, with an optional RIFE stage for extra temporal smoothness. Throughout, WAN 2.2 Palingenesis LoRA support enables fast style and character conditioning without retraining.
The graph contains two main groups that can run independently. Each path follows the same high-level logic: encode inputs, denoise in two stages using WAN 2.2 Palingenesis weights, decode to frames, optionally interpolate with RIFE, and package the result as an MP4.
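If you prefer to drive the graph from a script rather than the UI, a minimal sketch along these lines can queue the exported workflow through ComfyUI's standard /prompt endpoint. The file name and server address here are assumptions, not part of the workflow.

```python
# Minimal sketch: queue this graph programmatically through ComfyUI's HTTP API.
# Assumes the workflow was exported with "Save (API Format)" as
# wan22_palingenesis_api.json and that ComfyUI is listening on 127.0.0.1:8188.
import json
import urllib.request

def queue_workflow(path, host="127.0.0.1", port=8188):
    with open(path, "r", encoding="utf-8") as f:
        workflow = json.load(f)  # node graph in ComfyUI API format

    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        f"http://{host}:{port}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())  # response includes the queued prompt_id

if __name__ == "__main__":
    print(queue_workflow("wan22_palingenesis_api.json"))
```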
For the I2V path, start by loading an image in LoadImage and set your target Width, Height, and Frames. The image is resized in ImageResizeKJv2 and encoded into video-aware latents by WanVideoImageToVideoEncode so the model can infer motion cues from the still. Enter your descriptive prompt in WanVideoTextEncode to steer content and camera behavior; LoRA styles can be added via WanVideoLoraSelectMulti when you want a consistent look or character. A first WanVideoSampler pass runs with a “high” I2V weight to establish composition and initial motion, and a second WanVideoSampler continues from that latent with a “low” I2V weight to refine details and stability. After decoding, you can export frames directly to MP4 or route them through the RIFE VFI node for smoother motion before final packaging.
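As a rough guide to picking Width, Height, and Frames, a small helper like the sketch below snaps requested values to model-friendly sizes. The divisibility rules it assumes (spatial multiples of 16, frame counts of the form 4n + 1) are common for WAN-style video models but are assumptions here; check the WanVideoWrapper documentation for the authoritative constraints.

```python
# Sketch of snapping target Width/Height/Frames to model-friendly values
# before encoding. The rules below (spatial multiple of 16, frame count of
# the form 4n + 1) are assumptions, not confirmed constraints of this graph.
def snap_dims(width: int, height: int, frames: int,
              spatial_multiple: int = 16, temporal_stride: int = 4):
    w = max(spatial_multiple, (width // spatial_multiple) * spatial_multiple)
    h = max(spatial_multiple, (height // spatial_multiple) * spatial_multiple)
    f = ((frames - 1) // temporal_stride) * temporal_stride + 1  # 4n + 1 pattern
    return w, h, max(1, f)

print(snap_dims(1280, 720, 81))   # -> (1280, 720, 81)
print(snap_dims(1000, 550, 60))   # -> (992, 544, 57)
```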
For the T2V path, provide your prompt in WanVideoTextEncode (T2V) and set your target video size and frame count. The workflow constructs empty image conditioning for pure text control and sends the text embeddings into a two-stage sampling stack. The first WanVideoSampler with a “high” T2V weight locks in scene layout, subject, and motion trajectory; a second WanVideoSampler with a “low” T2V weight polishes textures, edges, and temporal consistency. The decoded frames are optionally passed through RIFE VFI for additional smoothness, then VHS_VideoCombine writes the final MP4. Use WanVideoLoraSelectMulti to mix in WAN 2.2 Palingenesis LoRAs when you need style transfer or character fidelity.
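To make the "empty image conditioning" idea concrete, the sketch below builds a zeroed latent of the shape a WAN-style video VAE would produce, so the sampler is driven purely by the text embeddings. The channel count and compression factors are assumptions for illustration, not values read from this graph's nodes.

```python
import torch

# Illustrative sketch of "empty image conditioning" for the T2V path: the
# sampler starts from zeroed latents and is steered only by text embeddings.
# The latent layout assumed below (16 channels, 8x spatial and 4x temporal
# compression) is an assumption about the WAN VAE, not taken from this graph.
def empty_video_latent(width: int, height: int, frames: int,
                       channels: int = 16, spatial: int = 8, temporal: int = 4):
    t = (frames - 1) // temporal + 1
    return torch.zeros(1, channels, t, height // spatial, width // spatial)

lat = empty_video_latent(832, 480, 81)
print(lat.shape)  # torch.Size([1, 16, 21, 60, 104])
```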
Torch compile and block-swap settings are pre-wired to reduce VRAM pressure and speed up inference on long sequences. Two helper nodes are included to purge VRAM between stages when you batch runs back-to-back. Final videos are created with Video Helper Suite’s combine node, and a convenience saver also writes a representative preview frame. The entire layout is tuned so that you can iterate prompts quickly while keeping the heavier decoding and interpolation steps optional.
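For intuition, the helpers below approximate what a purge-VRAM step and the pre-wired compile settings do under the hood; this is a sketch, the compile mode shown is an assumption, and the workflow's own nodes expose the real options.

```python
import gc
import torch

# Rough equivalent of a "purge VRAM" helper between batched runs: drop Python
# references, collect garbage, and release cached CUDA blocks.
def purge_vram():
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
        torch.cuda.ipc_collect()

# torch.compile is the mechanism behind the pre-wired compile settings; the
# mode used here is an assumption, not the workflow's configured value.
def maybe_compile(model):
    if hasattr(torch, "compile"):
        return torch.compile(model, mode="max-autotune")
    return model
```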
WanVideoTextEncode (#149)
This node turns your prompt into text embeddings for WAN 2.2 Palingenesis. Strong, specific language helps the model resolve subject, setting, and camera motion; use brief negatives to suppress unwanted artifacts. If you apply LoRAs, align your prompt with the adapter’s intent for best results.
WanVideoImageToVideoEncode (#89)
Converts the resized still into video-aware latents based on the target width, height, and num_frames. For I2V, this is where content from the source image is injected. You can fine-tune how strongly the image constrains motion by adjusting the strength controls; enable tiled VAE when working at larger resolutions.
WanVideoSampler (#27)
First-stage denoiser for I2V using a “high” Palingenesis I2V weight. It establishes motion, structure, and coarse details. Tune steps and cfg to trade off sharpness against creativity; coordinate the stage boundary with the second sampler so that end_step here lines up with start_step in the next stage.
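A tiny helper like the following keeps that handoff consistent when you change the total step count; the 50/50 split used as the default is an assumption you can tune.

```python
# The high-noise sampler runs steps [0, boundary) and the low-noise sampler
# continues from [boundary, total). The default 0.5 split is an assumption;
# adjust it alongside steps and cfg in the graph.
def stage_split(total_steps: int, boundary: float = 0.5):
    end_step = round(total_steps * boundary)   # end_step for the first sampler
    start_step = end_step                      # start_step for the second sampler
    return end_step, start_step

print(stage_split(30))        # (15, 15)
print(stage_split(30, 0.4))   # (12, 12)
```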
WanVideoSampler (#140)
First-stage denoiser for T2V using a “high” Palingenesis T2V weight. It lays down scene composition and motion while following your prompt. Use the schedule node that feeds cfg to modulate prompt adherence over time, then pass control to the second-stage sampler for refinement.
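As an example of such a schedule, a simple linear ramp from strong to mild guidance looks like the sketch below; the start and end values are placeholders for illustration, not the workflow's shipped settings.

```python
# Per-step cfg schedule: strong prompt adherence while composition forms,
# easing off so the low-noise stage can refine texture. The 6.0 -> 3.5 range
# is an assumed example, not a value taken from this workflow.
def linear_cfg_schedule(steps: int, cfg_start: float = 6.0, cfg_end: float = 3.5):
    if steps <= 1:
        return [cfg_start]
    return [cfg_start + (cfg_end - cfg_start) * i / (steps - 1) for i in range(steps)]

print([round(v, 2) for v in linear_cfg_schedule(6)])
# [6.0, 5.5, 5.0, 4.5, 4.0, 3.5]
```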
WanVideoLoraSelectMulti (#129)
Adds one or more WAN 2.2 Palingenesis LoRAs for style, subject, or motion priors. Start with a single adapter and increase its strength until the effect is visible but not overpowering. When stacking LoRAs, keep individual strengths moderate to avoid conflicting signals.
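One way to keep stacked strengths moderate is to cap their sum and scale entries proportionally, as in this sketch; the budget of 1.0 is an assumption, not a recommendation from the model authors.

```python
# Simple guardrail for stacking LoRAs: if the combined strength exceeds a
# budget, scale each entry down proportionally. The 1.0 budget is an assumed
# starting point; pick whatever total suits your adapters.
def cap_lora_strengths(loras: dict[str, float], budget: float = 1.0) -> dict[str, float]:
    total = sum(loras.values())
    if total <= budget:
        return dict(loras)
    scale = budget / total
    return {name: round(s * scale, 3) for name, s in loras.items()}

print(cap_lora_strengths({"style.safetensors": 0.8, "character.safetensors": 0.6}))
# {'style.safetensors': 0.571, 'character.safetensors': 0.429}
```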
RIFE VFI (#117)
Optional interpolation to boost smoothness and perceived frame rate. Increase the interpolation multiplier to create extra in-between frames; use the fast option for previews and the quality path for final renders. Interpolation works best when motion is already coherent, so fix flicker at the generation stage before relying on RIFE.
Increase frames first, and only then increase resolution if VRAM allows. Change one setting at a time (cfg, steps, or LoRA strength) to iterate with intention.

This ComfyUI graph gives you a practical, production-ready way to harness WAN 2.2 Palingenesis for both I2V and T2V, from first prompt to final MP4. For model references and updates, see the WAN 2.2 Palingenesis repositories on Hugging Face: eddy1111111/WAN22.XX_Palingenesis and befox/WAN22.XX_Palingenesis-GGUF, and the supporting components kijai/ComfyUI-WanVideoWrapper and kosinkadink/ComfyUI-VideoHelperSuite.
This workflow implements and builds upon the following works and resources. We gratefully acknowledge WAN and @AiVerse, the creators behind WAN 2.2 Palingenesis, for their contributions and maintenance. For authoritative details, please refer to the original documentation and repositories linked below.
Note: Use of the referenced models, datasets, and code is subject to the respective licenses and terms provided by their authors and maintainers.
RunComfy is the premier ComfyUI platform, offering ComfyUI online environment and services, along with ComfyUI workflows featuring stunning visuals. RunComfy also provides AI Playground, enabling artists to harness the latest AI tools to create incredible art.