
Wan 2.2 Prompt Relay | Scene-Controlled Video Maker

Workflow Name: RunComfy/Wan-2.2-Prompt-Relay
Workflow ID: 0000...1411
This workflow helps you build seamless, multi-section AI videos by routing different scene directions through a single generation timeline. It lets you manage temporal transitions, ensuring each part of the video follows its own creative prompt. Ideal for video creators seeking detailed scene control and consistent flow across segments. You can control prompts dynamically at inference time without changing the base model setup. Perfect for testing and refining prompt-based scene transitions with the Wan 2.2 system.


Wan 2.2 Prompt Relay: timeline‑controlled image to video in ComfyUI#

This workflow brings segment‑level scene direction to Wan 2.2 image‑to‑video. It uses Wan 2.2 for generation and the Prompt Relay method to route different prompts across a single timeline, so you can hand off control from one scene to the next without cutting the render. The result is a smooth multi‑event video where each segment follows its own prompt while object identity and style stay consistent.

Wan 2.2 Prompt Relay is an inference‑time routing technique, not a standalone model or LoRA. The graph is designed for RunComfy cloud and includes a two‑stage sampler chain plus optional RIFE frame interpolation. Use it when you want tight temporal scene control with minimal setup: provide a start image, define a global prompt and per‑segment prompts, set video length, and render.

Key models in ComfyUI Wan 2.2 Prompt Relay workflow#

  • Wan 2.2 high‑noise model: drives the first sampling pass, establishing motion and scene transitions.
  • Wan 2.2 low‑noise model: drives the second pass, refining detail and texture while preserving the motion path.
  • Text encoder and VAE: encode the prompts into conditioning and decode the sampled latents back into frames.
  • Optional LoRA: applied to the base model before sampling when present.
  • RIFE VFI (optional): interpolates frames for smoother motion or a higher output frame rate.

How to use ComfyUI Wan 2.2 Prompt Relay workflow#

The workflow routes text prompts over time, generates a latent video from a start image, then refines and decodes frames before optional interpolation and encoding. It is organized into a few clear groups that cooperate to produce the final MP4.

  • Step 1 - Load models: This section initializes Wan 2.2, the text encoder, and the VAE. The high‑noise and low‑noise Wan models are both prepared so the pipeline can first establish motion, then enhance detail. If a LoRA is present, it is applied to the base model before sampling. You do not need to change anything here unless you want to swap checkpoints.
  • Step 2 - Upload start_image: Import a single reference image that defines composition, subject identity, and lighting for the first frame using LoadImage (#85). The start image anchors the look of the video and helps maintain continuity across segments. Use a clean, on‑model reference for best results. Replace it whenever you want a different subject or layout.
  • Step 3 - Video size & length: Set the target resolution and total frame count in the latent video initializer (EmptyHunyuanLatentVideo (#121)) and keep it consistent with your segment plan. The sum of your segment lengths should equal the total frames. Match the frame rate you intend to export with the Prompt Relay settings and the video writer later in the graph.
  • Lightx2v + i2v: The core render path uses a two‑stage sampler chain. Stage one, with the high‑noise model, establishes motion and scene transitions. Stage two, with the low‑noise model, refines detail and texture while preserving the motion path from stage one. This combination is what makes Wan 2.2 Prompt Relay both controllable and stable for scene‑to‑scene handoffs.
  • Prompt routing: Enter a strong global_prompt that applies to the whole clip in PromptRelayEncodeTimeline (#117). Then define segment prompts either as JSON timeline data or as a pipe‑separated list. Prompt Relay encodes per‑frame conditioning that changes only at segment boundaries, optionally easing transitions for natural handoffs. The node feeds Wan’s conditioning and ensures each segment follows its intended direction.
  • Sampling and decoding: The pipeline passes through WanImageToVideo (#79), then a coarse KSamplerAdvanced (#73) followed by a fine KSamplerAdvanced (#83). Frames are decoded with VAEDecode (#74) and written to video with VHS_VideoCombine (#108). Optionally, use RIFE VFI (#131) before a second VHS_VideoCombine (#132) if you want smoother motion or a higher output frame rate.
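To make the prompt-routing step concrete, here is a minimal sketch of how a pipe‑separated segment list maps onto a per‑frame prompt timeline. This helper is illustrative only; it is not the actual PromptRelayEncodeTimeline implementation, and its name and signature are assumptions.

```python
# Hypothetical sketch: expand a pipe-separated prompt list (as accepted by
# PromptRelayEncodeTimeline's local_prompts field) into one prompt per frame.
# The real node produces per-frame conditioning tensors; this just shows how
# prompts change only at segment boundaries.

def frame_prompts(local_prompts: str, segment_lengths: list[int]) -> list[str]:
    """Assign each frame the prompt of the segment it belongs to."""
    prompts = [p.strip() for p in local_prompts.split("|")]
    assert len(prompts) == len(segment_lengths), "need one prompt per segment"
    timeline = []
    for prompt, length in zip(prompts, segment_lengths):
        timeline.extend([prompt] * length)
    return timeline

frames = frame_prompts("a cat walks | the cat jumps | the cat sleeps", [16, 16, 17])
print(len(frames))                    # 49 frames total
print(frames[15], "->", frames[16])   # prompt hands off at the segment boundary
```

In the real node, epsilon controls how softly this handoff happens; the sketch shows the hard-boundary case.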

Key nodes in ComfyUI Wan 2.2 Prompt Relay workflow#

  • PromptRelayEncodeTimeline (#117): Central to Wan 2.2 Prompt Relay, this node transforms your global_prompt and per‑segment prompts into a time‑aware positive conditioning stream. You can author segments in the timeline_data JSON or in local_prompts using a pipe syntax. Use max_frames to match the video length, choose time_units that align with your plan, and adjust epsilon to soften or harden prompt handoffs between segments. Keep fps consistent with your final export.
  • WanImageToVideo (#79): Converts the start image plus conditioning into an initial latent timeline for Wan 2.2. Connect your start reference to start_image and keep width, height, and length aligned with the latent initializer. Negative conditioning in this graph is intentionally zeroed to reduce over‑constraint and maintain stable identity; introduce an explicit negative prompt only if you see recurring artifacts you want to suppress.
  • KSamplerAdvanced (#73): First‑pass sampler that emphasizes motion and layout. It works with the high‑noise Wan model configured via ModelSamplingSD3 to explore trajectory while respecting Prompt Relay conditioning. Tune steps and cfg for the strength of guidance, and keep a fixed noise_seed when you want reproducible motion across editing iterations.
  • KSamplerAdvanced (#83): Second‑pass sampler that enhances detail and temporal consistency using the low‑noise Wan model. It refines texture, edges, and micro‑motion without fighting the coarse trajectory established by the first pass. If you increase fidelity here, consider balancing guidance to avoid over‑sharpening that can destabilize motion.
  • EmptyHunyuanLatentVideo (#121): Creates the blank latent video that defines spatial resolution, frame budget, and batch size. Set total frames to the sum of all segment lengths so Prompt Relay can map prompts cleanly. Changing resolution affects memory and the look of motion cadence, so scale thoughtfully.
  • VHS_VideoCombine (#108, #132): Encodes frames to MP4. Match frame_rate to the Prompt Relay fps when you are not using interpolation. If you do use RIFE VFI, set the writer’s frame rate to the new effective fps. Adjust crf for the tradeoff between size and quality.
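The frame‑rate bookkeeping above reduces to one multiplication. The sketch below assumes RIFE VFI multiplies the frame count by an integer factor (a common configuration; the exact parameter name in the node is not taken from this document).

```python
# Sketch: choose the frame_rate for VHS_VideoCombine so it stays consistent
# with the Prompt Relay fps, before and after optional RIFE interpolation.

def effective_fps(base_fps: float, rife_multiplier: int = 1) -> float:
    """Frame rate to set on the video writer after optional RIFE VFI."""
    return base_fps * rife_multiplier

print(effective_fps(16))     # no interpolation: writer matches Prompt Relay fps (16)
print(effective_fps(16, 2))  # 2x interpolation: set the second writer to 32
```

Mismatching these values does not break encoding, but it changes playback speed, which is why the text stresses keeping them aligned.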

Optional extras#

  • Write the global_prompt to lock tone, camera language, and quality tags, then keep segment prompts short and action‑focused.
  • Ensure the total of your segment lengths equals the video length to avoid prompt misalignment.
  • Keep seeds fixed while iterating on prompts, then randomize seeds only when you want a fresh take.
  • Use taller or wider start images to suggest aspect preference, but always set explicit width and height for predictability.
  • If you see identity drift across segments, strengthen the global_prompt with salient object descriptors and simplify local prompts.
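The second tip above is worth automating. Here is a small pre‑flight check, as a sketch, that compares a segment plan against the total frame count set in EmptyHunyuanLatentVideo; the function name is hypothetical.

```python
# Sketch: verify that segment lengths sum to the latent video's frame count.
# A mismatch silently shifts prompt boundaries relative to the frames.

def check_segment_plan(segment_lengths: list[int], total_frames: int) -> bool:
    total = sum(segment_lengths)
    if total != total_frames:
        print(f"mismatch: segments cover {total} frames, latent has {total_frames}")
        return False
    return True

print(check_segment_plan([16, 16, 17], 49))  # aligned plan -> True
print(check_segment_plan([16, 16, 16], 49))  # prints the mismatch, returns False
```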


Acknowledgements#

This workflow implements and builds upon the following works and resources. We gratefully acknowledge kijai (the ComfyUI-PromptRelay node), gordonchen19 (the Prompt-Relay project), and Comfy-Org (the Wan_2.2_ComfyUI_Repackaged models) for their contributions and maintenance. For authoritative details, please refer to the original documentation and repositories linked below.

Resources#

Note: Use of the referenced models, datasets, and code is subject to the respective licenses and terms provided by their authors and maintainers.
