Seedance 2.0 Fast: Multimodal Video Generation | RunComfy

bytedance/seedance-v2/fast

Generate cinematic videos from text and media inputs with native audio, precise lip-sync, and smooth storytelling control—faster iteration for ads, film previz, and branded visual content.

Input schema:

  • `prompt`: Text prompt for the video (recommended limits: ~500 Chinese characters or ~1000 English words).
  • `image_url`: Reference images for multimodal reference mode (0–9). Supported formats: jpeg, png, webp, bmp, tiff, gif.
  • `video_url`: Reference videos for multimodal reference mode (0–3). Supported formats: mp4, mov. Each video must be 2–15 seconds long.
  • `audio_url`: Reference audio for multimodal reference mode (0–3). Supported formats: wav, mp3. Each file must be 2–15 seconds long and under 15 MB.
  • `aspect_ratio`: Defaults to adaptive (the model picks the closest ratio; the actual ratio is returned when you query the task).
  • `duration`: Integer seconds in [4, 15].
  • `resolution`: 480p, 720p (default), or 1080p.
  • `generate_audio`: When true, the model outputs video with synchronized audio (speech, SFX, music).
  • `seed`: Random seed for the video generation.
  • `tools`: When `web_search` is included, the model may run an online search depending on the prompt (e.g. specific products, current weather), which can improve factual freshness but increases latency.

Pricing: $0.16 per output second.
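At the quoted rate, total cost scales linearly with clip length. A quick sketch (the rate is taken from this page; the helper name is my own):

```python
# Cost estimate at the quoted rate of $0.16 per output second.
# The rate comes from this page; the helper itself is illustrative,
# not part of any official RunComfy SDK.
RATE_PER_SECOND = 0.16

def estimated_cost(duration_s: int) -> float:
    """Estimated price in USD for a clip of duration_s seconds (4-15)."""
    return round(RATE_PER_SECOND * duration_s, 2)

print(estimated_cost(5))   # default 5 s clip  -> 0.8
print(estimated_cost(15))  # maximum 15 s clip -> 2.4
```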

Introduction To Seedance 2.0 Fast Video Creation

ByteDance's Seedance 2.0 Fast turns text and references into cinematic videos with native audio and precise lip-sync, prioritizing quicker generation than the Pro tier on the same multimodal workflow.
Ideal for: Rapid Creative Iteration | High-Conversion Video Ads | Shot-Accurate Film Previsualization | Brand-Consistent Multi-language Lip-Synced Narratives

ByteDance Seed / Seedance 2.0 Fast


Seedance 2.0 Fast is a speed-oriented multimodal text-to-video model from ByteDance Seed that turns scene descriptions and optional references into short cinematic clips. On RunComfy you drive generation with a prompt plus optional images (up to 9), videos (up to 3), and audio (up to 3) for multimodal reference mode, and you can set aspect ratio, duration, resolution, generate audio, seed, and optional tools (e.g. [{ "type": "web_search" }] to allow online search when the model chooses).
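Putting those inputs together, a request body might look like the sketch below. The field names and limits are the ones listed on this page; the exact wrapper shape and endpoint are defined by the RunComfy API Docs, so treat this as an illustration rather than a canonical request:

```python
# Hypothetical request payload for bytedance/seedance-v2/fast, mirroring the
# playground Input schema described on this page. Field names come from the
# FAQ; the endpoint wrapper around this body is up to the RunComfy API Docs.
payload = {
    "prompt": "A slow dolly-in on a rain-soaked neon street, cinematic lighting",
    "image_url": [],             # up to 9 reference images (jpeg/png/webp/bmp/tiff/gif)
    "video_url": [],             # up to 3 reference videos (mp4/mov, 2-15 s each)
    "audio_url": [],             # up to 3 reference audio files (wav/mp3, 2-15 s, <15 MB)
    "aspect_ratio": "adaptive",  # or a fixed ratio such as "16:9"
    "duration": 5,               # integer seconds in [4, 15]
    "resolution": "720p",        # 480p | 720p | 1080p
    "generate_audio": True,      # synchronized speech, SFX, and music
    "seed": 42,                  # set for reproducible runs
    "tools": [{"type": "web_search"}],  # optional: allow online search
}

assert 4 <= payload["duration"] <= 15
assert payload["resolution"] in {"480p", "720p", "1080p"}
```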


Highlights

  • Multimodal references: Up to 9 images, 3 reference videos (2–15 s each, mp4/mov), and 3 audio references (2–15 s, wav/mp3, under 15 MB each), together with a strong text prompt.
  • Flexible framing: Default adaptive ratio lets the model pick the closest format; fixed ratios are available for platform-specific delivery.
  • Audio-aware video: Toggle Generate audio for synchronized speech, SFX, and music with the clip.
  • Controllable length: Duration is an integer from 4 to 15 seconds (default 5).
  • Reproducibility: Set Seed when you need repeatable results while refining prompts or references.
  • Optional web search: Pass tools with type: web_search so the model can search the web when needed; check usage.tool_usage.web_search on the task query response for how many searches ran.
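The last point can be checked programmatically. Assuming a task-query response shaped like the page describes (the `usage.tool_usage.web_search` path is quoted from the highlight above; the rest of the response fragment is illustrative):

```python
# Hypothetical task-query response fragment. The page states that
# usage.tool_usage.web_search reports how many online searches ran and
# that the actual aspect ratio is returned on task query; the surrounding
# keys here are assumptions for illustration.
task_result = {
    "status": "succeeded",
    "aspect_ratio": "16:9",  # actual ratio when "adaptive" was requested
    "usage": {"tool_usage": {"web_search": 2}},
}

# Defensive lookup: tool usage may be absent when no search ran.
searches = task_result.get("usage", {}).get("tool_usage", {}).get("web_search", 0)
print(f"web searches used: {searches}")  # -> web searches used: 2
```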

Related Playgrounds

video-background-removal/video-to-video

AI-powered tool for fast video-to-video backdrop swaps with pro-level precision.

one-to-all-animation/1.3b

Create identity-stable motions from photos using fast, alignment-free motion retargeting for designers and animators.

creatify/lipsync

Transform scripts or voices into dynamic, brand-tailored avatar videos fast.

pikascenes

Build a scene from 1–6 images and animate it into a video.

pika-2-2/image-to-video

AI effects for engaging social & entertainment clips.

seedvr2/upscale/video

Enhance blurry visuals instantly with fast, unified AI upscaling.

Frequently Asked Questions

What is Seedance 2.0 Fast on RunComfy?

Seedance 2.0 Fast is ByteDance Seed’s speed-oriented multimodal video model: you describe the scene in text and can add images, reference video, and reference audio so the model aligns motion, look, and sound. On RunComfy it is tuned for faster turnaround while still supporting cinematic short clips, optional native audio, lip-sync-friendly results, and camera control when you steer the shot in your prompt.

What resolution and aspect ratio can I choose for Seedance 2.0 Fast?

Resolution is 480p, 720p (default), or 1080p. Aspect ratio defaults to adaptive (the model picks the closest match; the task result shows the actual output ratio). You can also fix it to 16:9, 9:16, 4:3, 3:4, 1:1, or 21:9.

How does multimodal input work—what are the limits for prompts and reference media?

Only the prompt is required; references are optional but define multimodal reference mode when you add them. Prompt: about ≤500 Chinese characters or ≤1000 English words is recommended. Images: up to 9 (jpeg, png, webp, bmp, tiff, gif). Reference videos: up to 3 (mp4, mov), each 2–15 seconds. Reference audio: up to 3 (wav, mp3), each 2–15 seconds and under 15 MB. Clear prompts plus aligned references usually give the steadiest identity, style, and sync.
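Since the API rejects out-of-range inputs, it can help to enforce these counts client-side before submitting a job. A minimal validation sketch, using only the limits quoted above (the function itself is illustrative, not part of any official RunComfy SDK):

```python
# Client-side validation of the reference-media limits documented above:
# up to 9 images, 3 videos, and 3 audio files per request. Duration and
# file-size checks would need the actual media files, so only counts are
# validated here.
def check_references(images, videos, audios):
    """Raise ValueError if reference counts exceed the documented limits."""
    if len(images) > 9:
        raise ValueError("at most 9 reference images are allowed")
    if len(videos) > 3:
        raise ValueError("at most 3 reference videos are allowed")
    if len(audios) > 3:
        raise ValueError("at most 3 reference audio files are allowed")
    return True

check_references(["hero.png", "style.jpeg"], [], ["vo.mp3"])  # OK
```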

How long can outputs be, and how do I set duration?

Duration is an integer 4–15 seconds (default 5). Choose any whole-second value in that range per job.

Does Seedance 2.0 Fast generate audio and support lip-sync?

With Generate audio (generate_audio) set to true (the default), the model can output video with synchronized audio (dialogue, SFX, or music). Set it to false for silent video. Lip-sync quality depends on how explicitly you describe speech, framing, and timing in the prompt; reference audio can also guide rhythm or tone when you use multimodal references.

When should I pick Seedance 2.0 Fast instead of Seedance 2.0 Pro?

Choose Fast when you care most about shorter wait times and rapid iteration: it takes the same playground inputs (prompt, image_url, video_url, audio_url, aspect_ratio, duration, resolution, generate_audio, seed), still goes up to 1080p, and keeps the full multimodal reference limits. Choose Pro when you want the highest cinematic fidelity the provider offers for a shot. A/B test both on the same prompt and references to judge quality vs. speed for your pipeline.

What improved in the Seedance 2.x line versus Seedance 1.5 Pro?

Seedance 2.0 Fast focuses on short cinematic clips with rich multimodal references (many images plus optional video and audio), 4–15 s duration, adaptive or fixed aspect ratios, and built-in audio when generate_audio is on. Gains are use-case dependent; compare on your own prompts, characters, and reference packs rather than relying on a single benchmark.

How does Seedance 2.0 Fast compare to models like Wan 2.5 or Kling Video 2.6?

It depends on budget, latency, and the kind of motion you need. On RunComfy, Seedance 2.0 Fast emphasizes fast multimodal text-to-video with up to nine images, three reference videos, three audio references, 480p–1080p presets, and a generate audio toggle. Wan 2.5 and Kling 2.6 differ in pricing, limits, and strengths—run parallel tests on your typical briefs and reference sets.

How do I go from the Playground to the RunComfy API with Seedance 2.0 Fast?

Mirror the playground Input schema: prompt, image_url, video_url, audio_url, aspect_ratio, duration, resolution, generate_audio, and seed. Enforce the same prompt and media limits in your app, then authenticate with your API key and credits for batch or automated jobs.
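That mapping can be sketched as a small request builder. The parameter names and ranges are the ones in this FAQ; the endpoint URL and auth header are placeholders, so check the RunComfy API Docs for the real submission and task-polling flow:

```python
# Hedged sketch of mirroring the playground Input schema in code.
# build_request only shapes the body; the commented-out POST below uses a
# placeholder endpoint and must be replaced per the RunComfy API Docs.
API_KEY = "rc_..."  # your RunComfy API key (placeholder value)

def build_request(prompt, duration=5, resolution="720p", generate_audio=True,
                  seed=None, image_url=(), video_url=(), audio_url=()):
    """Build a request body enforcing the documented limits."""
    if not 4 <= duration <= 15:
        raise ValueError("duration must be an integer in [4, 15]")
    body = {
        "prompt": prompt,
        "duration": duration,
        "resolution": resolution,
        "generate_audio": generate_audio,
        "image_url": list(image_url),
        "video_url": list(video_url),
        "audio_url": list(audio_url),
    }
    if seed is not None:
        body["seed"] = seed  # omit for a random seed
    return body

# Example submission (commented out; requires the real endpoint):
# import requests
# resp = requests.post(
#     "https://...",  # endpoint from the RunComfy API Docs
#     headers={"Authorization": f"Bearer {API_KEY}"},
#     json=build_request("A golden-hour drone shot over a coastline"),
# )
```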

Can I use Seedance 2.0 Fast outputs commercially?

Commercial use depends on ByteDance’s licensing for the model and RunComfy’s terms of service. Read the official model license and RunComfy docs, or email hi@runcomfy.com before using generated footage in paid campaigns, client work, or wide distribution.

Who gets the most value from Seedance 2.0 Fast on RunComfy?

Creators and teams who need quick multimodal video generation: social and ad concepts, storyboards and previs, branded shorts, and iterate-heavy workflows where speed matters as much as a single “hero” frame. Pair text with references when you need repeatable look, character, or audio-aware motion—then upgrade to Pro for final polish when the shot warrants it.



Seedance 2.0 Fast Video Examples Showcase
