Wan 2.7 Reference to Video is built for controlled video generation from image, video, or audio-guided references, with emphasis on subject fidelity, motion continuity, and scene consistency. This reference-to-video task converts reference assets into new video outputs that preserve identity and composition while following explicit motion and scene instructions. Wan 2.7 Reference to Video is suited to character-led clips, branded localization, and instruction-based sequence creation where stable visual carryover matters.
Start Wan 2.7 Reference to Video by supplying a clear prompt plus either reference images, reference videos, or both, depending on whether you need appearance transfer, motion transfer, or multi-subject consistency. Describe the subject, action, camera behavior, environment, and what must remain unchanged. For Wan 2.7 Reference to Video, keep instructions concrete: define motion pacing, framing, shot continuity, and visual constraints. Use negative_prompt to suppress unwanted traits, choose the aspect ratio based on delivery format, and enable multi_shots only when the sequence should break into coordinated cuts instead of one continuous take.
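The setup above can be sketched as a small helper that assembles a request payload. This is illustrative only: the parameter names beyond those mentioned in the text (prompt, negative_prompt, aspect ratio, multi_shots, and the reference list) are assumptions for demonstration, not a documented RunComfy or Wan API.

```python
def build_reference_to_video_request(
    prompt: str,
    references: list[str],
    negative_prompt: str = "",
    aspect_ratio: str = "16:9",
    multi_shots: bool = False,
) -> dict:
    """Assemble a hypothetical reference-to-video request payload."""
    if not prompt.strip():
        raise ValueError("A concrete prompt describing subject, action, and camera is required")
    if not 1 <= len(references) <= 5:
        raise ValueError("Provide 1-5 reference assets (images, videos, or audio)")
    return {
        "model": "wan-2.7-reference-to-video",  # hypothetical model identifier
        "prompt": prompt,
        "negative_prompt": negative_prompt,      # traits to suppress
        "aspect_ratio": aspect_ratio,            # match your delivery format
        "multi_shots": multi_shots,              # True only for coordinated multi-cut sequences
        "references": references,
    }

# Example: appearance reference plus motion reference, vertical delivery
request = build_reference_to_video_request(
    prompt=("The woman from the reference image walks through a rainy street, "
            "slow dolly-in, neon reflections; keep her outfit and hairstyle unchanged"),
    references=["subject.png", "motion_clip.mp4"],
    negative_prompt="blurry, extra limbs, flickering",
    aspect_ratio="9:16",
)
```

The point of the sketch is the prompt discipline the text describes: the positive prompt names the subject, action, camera move, and what must stay fixed, while negative_prompt handles suppression separately.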
Note: If you need to modify an existing image, such as changing the background, lighting, or specific objects within a picture, use the Seedream 4.5 Edit model, which is optimized for instruction-based image manipulation.
What is Wan 2.7 Reference to Video?
Wan 2.7 Reference to Video is an AI video generation mode that transforms reference media such as images, clips, or audio into new, coherent videos. The reference-to-video process lets the model maintain subject identity, motion, and audio characteristics from the original reference, helping creators produce consistent and realistic results.
How does Wan 2.7 differ from earlier versions?
Compared to older versions such as Wan 2.6, Wan 2.7 Reference to Video offers boundary frame control, extended durations, native audio referencing, and enhanced identity consistency. These improvements make the reference-to-video process more controllable and better suited to production-quality projects.
Who is Wan 2.7 Reference to Video for?
Wan 2.7 Reference to Video is ideal for content creators, studios, marketers, and developers who need consistent identity control in short clips. The reference-to-video mode supports talking heads, localized marketing videos, reenactments, and character-based storytelling where fidelity and expressive motion control matter.
How is usage billed?
Wan 2.7 Reference to Video operates via RunComfy's AI playground on a credit-based model. New users receive complimentary credits for testing reference-to-video generation, while ongoing use requires purchasing additional credits as specified in the Generation section of RunComfy's site.
What inputs does Wan 2.7 Reference to Video accept?
Wan 2.7 Reference to Video supports a range of inputs, including still images, short video clips, and audio tracks. In its reference-to-video mode, you can combine up to five references at once to control voice, motion, and visual style within the output video.
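As a sketch of how mixed references divide up the job, the snippet below groups assets by the aspect they would guide (appearance, motion, or audio) and enforces the five-reference cap mentioned above. The suffix-to-role mapping is an assumption for illustration, not documented RunComfy behavior.

```python
from pathlib import Path

# Assumed mapping from file type to the aspect a reference can guide.
ROLE_BY_SUFFIX = {
    ".png": "appearance", ".jpg": "appearance", ".jpeg": "appearance",
    ".mp4": "motion", ".mov": "motion",
    ".wav": "audio", ".mp3": "audio",
}

def classify_references(paths: list[str]) -> dict[str, list[str]]:
    """Group up to five reference assets by the aspect they would guide."""
    if len(paths) > 5:
        raise ValueError("Wan 2.7 Reference to Video accepts at most five references")
    grouped: dict[str, list[str]] = {"appearance": [], "motion": [], "audio": []}
    for p in paths:
        role = ROLE_BY_SUFFIX.get(Path(p).suffix.lower())
        if role is None:
            raise ValueError(f"Unsupported reference type: {p}")
        grouped[role].append(p)
    return grouped

classify_references(["hero.png", "walk.mp4", "voice.wav"])
# groups: appearance=['hero.png'], motion=['walk.mp4'], audio=['voice.wav']
```

Thinking in these three roles when assembling references makes it easier to keep each reference doing one job: one asset for identity, one for motion, one for voice.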
Does it work on mobile devices?
Yes. Wan 2.7 Reference to Video is fully accessible through the RunComfy web playground, which runs smoothly in both mobile and desktop browsers. The reference-to-video features are optimized for responsive performance across platforms.
What resolution and duration are supported?
Videos generated through Wan 2.7 Reference to Video are produced in 1080p full HD. The reference-to-video mode typically supports durations between 2 and 10 seconds, making it suitable for short films, promotional clips, and expressive content prototypes.
Do reference videos need to meet quality requirements?
Yes. Wan 2.7 Reference to Video performs best when reference videos are clear, stable, and consistent. For smoother reference-to-video results, avoid inconsistent lighting, highly dynamic cuts, and blurry footage in the source material. Clear, well-structured prompts also improve accuracy.





