Creatify Aurora: Realistic Image-to-Video & Lip-Sync Avatar Creation on playground and API | RunComfy

creatify/aurora

Transform a single image and audio clip into studio-quality talking avatar videos with precise lip-sync, expressive motion, and seamless browser or API generation for ads, learning, and localization.

Pricing: $0.10 per second for 480p and $0.14 per second for 720p.

Introduction to Creatify Aurora AI Video Generator

Creatify.ai's Creatify Aurora turns a single image and an audio clip into studio-grade speaking or singing avatar video, delivering state-of-the-art audio-to-video avatar generation at $0.10 per video second for 480p and $0.14 per video second for 720p. It replaces studio shoots, multi-angle capture, and manual keyframing with zero-shot, audio-driven performance rendering that provides precise lip-sync, expressive gestures, and minute-long consistency, eliminating casting and reshoots for marketing teams, localization studios, and e-learning providers. For developers, Creatify Aurora on RunComfy can be used both in the browser and via an HTTP API, so you don't need to host or scale the model yourself.
Ideal for: High-Conversion Avatar Video Ads | Multilingual Video Localization | Virtual Presenter Production
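As a quick illustration of the per-second pricing quoted above, the small helper below estimates the cost of a clip. The rates are the ones listed on this page; the helper itself is just a sketch, not part of any official SDK.

```python
# Sketch: estimate generation cost from the per-second rates quoted on this page.
RATES_PER_SECOND = {"480p": 0.10, "720p": 0.14}  # USD per second of output video

def estimate_cost(duration_seconds: float, resolution: str = "720p") -> float:
    """Return the estimated cost in USD for a clip of the given length and resolution."""
    return duration_seconds * RATES_PER_SECOND[resolution]

# A 30-second 720p avatar ad: 30 * $0.14 = $4.20
print(f"${estimate_cost(30, '720p'):.2f}")
```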

Examples Created with Creatify Aurora


Model Overview


  • Provider: Creatify
  • Task: image-to-video, audio-to-video
  • Max Resolution/Duration: Up to 720p; duration typically up to 60s per request (API dependent)
  • Summary: Creatify Aurora transforms a single image and an audio clip into studio-quality talking-avatar video with precise lip-sync and expressive motion. It preserves identity and visual detail while following long-form speech or singing with natural gestures, blinking, and breathing. For technical artists and developers, Creatify Aurora emphasizes motion consistency, high realism, and robust API-driven generation.

Key Capabilities


Audio-driven realism with precise lip-sync and gestures

  • Generates avatar videos that tightly follow the input audio (speech or singing), including mouth shapes, timing, and prosody.
  • Produces nuanced facial expressions, eye blinks, breathing, and full-body gestures that enhance realism and viewer engagement.

Long-form consistency over multi-minute audio

  • Maintains character identity and appearance across extended durations.
  • Reduces drift and artifacting so multi-paragraph narration, podcasts, or songs remain coherent end-to-end.

Single-image, zero-shot setup with studio-quality output

  • Requires only one reference image to create a speaking or singing avatar.
  • Preserves the subject’s likeness across different angles and scenarios without multi-view inputs.

Input Parameters


Core Inputs


  • image_url (string, URL): Publicly accessible URL to the reference image used to create the avatar.
  • audio_url (string, URL): Publicly accessible URL to the audio (speech or song) that drives lip-sync and motion.
  • prompt (string, default ""): Optional text to guide the high-level style or mood of the video.

Guidance Controls


  • guidance_scale (float, default 1): Controls adherence to the text prompt; higher values increase prompt influence.
  • audio_guidance_scale (float, default 2): Controls adherence to the audio; higher values increase sync strength and how closely motion follows the audio.

Output Settings


  • resolution (string, 480p or 720p, default 720p): Output video resolution. Choose 480p for speed/size or 720p for higher detail.
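To show how the parameters above fit together, here is a rough example of a single request payload. The field names mirror the tables; the URLs are placeholders, and any wrapper structure around this dictionary is an assumption that may differ from RunComfy's actual API.

```python
# Hypothetical request payload assembling the documented creatify/aurora parameters.
# Field names mirror the parameter tables above; asset URLs are placeholders.
payload = {
    "image_url": "https://example.com/assets/presenter.png",  # reference image (publicly accessible)
    "audio_url": "https://example.com/assets/narration.mp3",  # speech or song driving lip-sync
    "prompt": "warm studio lighting, confident delivery",      # optional style/mood guidance
    "guidance_scale": 1,        # text-prompt adherence (default 1)
    "audio_guidance_scale": 2,  # audio adherence / sync strength (default 2)
    "resolution": "720p",       # "480p" for speed/size, "720p" for detail
}
```

This dictionary is what you would send as the JSON body when submitting a generation job; the API Integration section below sketches the surrounding submit-and-poll flow.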

How Creatify Aurora compares to other models


  • Vs aurora_v1_fast: Compared to aurora_v1_fast, Creatify Aurora delivers higher visual fidelity and more nuanced expressions with stronger identity preservation. Key improvements include more stable long-form motion and finer facial details. Ideal Use Case: choose this when output quality and realism matter most.
  • Vs general video-generation models: Compared to generic text-to-video or image-sequence methods, Creatify Aurora delivers audio-synchronized lip movements and gesture timing that track speech and music directly. Key improvements include precise audio alignment and reduced temporal artifacts. Ideal Use Case: audio-driven presenter videos, podcasts, and singing avatars.
  • Vs image-generation models (Flux 2, Seedream, etc.): Compared to still-image models, Creatify Aurora delivers motion, temporal coherence, and expressive performance rather than static visuals. Key improvements include consistent identity across frames and lifelike movement. Ideal Use Case: any scenario requiring a talking or performing avatar from a single image.

API Integration


Developers can integrate Creatify Aurora via the RunComfy API using standard HTTP requests for submitting image and audio URLs, polling job status, and retrieving outputs. The workflow is designed for quick adoption into pipelines or web apps, with straightforward parameters for guidance and resolution.
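The sketch below illustrates that submit-and-poll flow in Python. The endpoint paths, header names, and response fields are illustrative assumptions, not RunComfy's documented API; consult the official API reference for the real contract.

```python
"""Minimal sketch of a submit-and-poll flow against a hosted Creatify Aurora endpoint.

The base URL, auth header, and response fields below are assumptions made for
illustration; check RunComfy's API reference for the actual endpoint contract.
"""
import time
import requests

API_BASE = "https://api.runcomfy.example/v1"        # placeholder base URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}   # placeholder auth scheme


def generate_avatar_video(payload: dict, poll_interval: float = 5.0) -> str:
    # 1. Submit the job with the image/audio URLs and guidance settings.
    submit = requests.post(f"{API_BASE}/creatify/aurora", json=payload, headers=HEADERS)
    submit.raise_for_status()
    job_id = submit.json()["job_id"]                 # assumed response field

    # 2. Poll until the job finishes, then return the output video URL.
    while True:
        status = requests.get(f"{API_BASE}/jobs/{job_id}", headers=HEADERS)
        status.raise_for_status()
        body = status.json()
        if body.get("status") == "completed":
            return body["video_url"]                 # assumed response field
        if body.get("status") == "failed":
            raise RuntimeError(f"Generation failed: {body.get('error')}")
        time.sleep(poll_interval)
```

In a production pipeline you would typically wrap the polling loop with a timeout and backoff rather than polling at a fixed interval indefinitely.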




Official resources and licensing


  • Official Website: https://creatify.ai/introducing-aurora
  • Documentation: https://docs.creatify.ai/api-reference/aurora/post-aurora
  • License: Proprietary. Commercial use may require a separate agreement with Creatify.


Frequently Asked Questions

What are the maximum resolution and duration limits supported by Creatify Aurora for image-to-video or audio-to-video generation?

On RunComfy, Creatify Aurora outputs at 480p or 720p for both image-to-video and audio-to-video generation. The duration limit is tied to the input audio length, generally capped at around 60 seconds per generation request when using the API. These limits balance generation speed, quality, and credit consumption.

How many reference images or inputs can I use with Creatify Aurora when creating an image-to-video avatar?

Creatify Aurora operates in a zero-shot mode, requiring only one reference image and one audio clip. Unlike diffusion-based ControlNet approaches, it does not accept multi-view or multi-frame references; this single-input design keeps image-to-video and audio-to-video generation efficient.

How can I move my project from the RunComfy Playground testing phase to full production with Creatify Aurora?

You can prototype directly in the RunComfy Playground with Creatify Aurora and its image-to-video or audio-to-video options. For production, you’ll need to integrate through RunComfy’s REST API using your account key. The same model IDs and parameters available in the playground (like model_version: 'aurora_v1' or 'aurora_v1_fast') are supported for scalable automation and CI/CD workflows.

What makes Creatify Aurora’s image-to-video generation stand out from other avatar or video synthesis models?

Creatify Aurora produces full-body, emotionally expressive avatars directly from a single image and an audio clip. Its multimodal architecture provides superior temporal coherence and body gesture realism compared to earlier image-to-video and audio-to-video systems, which often exhibit flicker or motion inconsistencies.

How does Creatify Aurora ensure accurate lip-sync and emotional expression for audio-to-video outputs?

Creatify Aurora employs a diffusion transformer backbone with audio-driven temporal alignment, enabling precise lip-sync, breathing, blinking, and nuanced gestures. This makes its audio-to-video generation notably consistent across long-form inputs like podcast narrations or songs.

What typical use cases best demonstrate Creatify Aurora’s image-to-video and audio-to-video capabilities?

Creatify Aurora excels in avatar-based video storytelling, brand spokesperson videos, and singing performer animations. Its image-to-video and audio-to-video processing handles tasks such as marketing videos, e-learning avatars, and multilingual dubbing where timing and character consistency are crucial.

Does Creatify Aurora maintain character consistency across longer audio-to-video segments?

Yes. One of Creatify Aurora’s key advancements is temporal coherence across extended durations. In audio-to-video workflows, even multi-minute audio inputs yield stable facial identity, gaze direction, and emotional continuity, outperforming many competing models in sustained performance.

How does Creatify Aurora differ from previous Creatify Lab versions and competitors?

Compared to early Creatify models, Aurora v1 integrates enhanced cross-modal fusion and has improved lighting and gesture realism for both image-to-video and audio-to-video outputs. Unlike many other systems that rely on static 2D talking heads, Aurora delivers expressive full-body movement with industry-level video realism.

Can I use Creatify Aurora for commercial projects?

Commercial use of Creatify Aurora outputs is generally permitted under Creatify.ai’s service terms, covering both image-to-video and audio-to-video results. However, developers should review the official licensing details at Creatify.ai to confirm usage rights, especially for branded avatars or redistribution.

Is there a fast vs standard version of Creatify Aurora, and how do they differ?

Yes. The 'aurora_v1' variant delivers higher visual fidelity in image-to-video and audio-to-video creation, while 'aurora_v1_fast' trades off some fine detail for faster render times and lower credit costs. Both models maintain temporal consistency and realistic motion but vary in generation latency and credit pricing.
