
HappyHorse 1.0 I2V: #1 Arena-Ranked Image-to-Video AI Model | RunComfy

happyhorse/happyhorse-1-0/image-to-video

HappyHorse 1.0 I2V is live on RunComfy via Alibaba. Animate any uploaded photo into 3–15s 720P or 1080P video — the model keeps faces, products, and composition stable while adding cinematic motion.

Source image to animate (first frame). Formats: JPEG, JPG, PNG, or WEBP. Min 300px per side; aspect between 1:2.5 and 2.5:1; max 10MB.
Describe the motion, camera, lighting, and atmosphere. Up to 5000 non-Chinese characters or 2500 Chinese characters (longer input is truncated).
Output video resolution. HappyHorse 1.0 I2V supports 720P or 1080P.
Output video duration in seconds. Allowed values: 3–15.
Optional seed for reproducible generations. Use 0 to let the provider randomize.
$0.15 per second for 720P and $0.28 per second for 1080P.
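The per-second pricing above maps directly to clip cost. A minimal sketch of the arithmetic (the function name and integer-cent bookkeeping are our own; the rates are the ones quoted on this page):

```python
# Estimate the cost of a HappyHorse 1.0 I2V clip from the quoted
# per-second rates: $0.15/s at 720P, $0.28/s at 1080P.
RATE_CENTS_PER_SECOND = {"720P": 15, "1080P": 28}

def estimate_cost(duration_s: int, resolution: str) -> float:
    """Return the clip cost in dollars for a 3-15 second render."""
    if not 3 <= duration_s <= 15:
        raise ValueError("duration must be 3-15 seconds")
    rate = RATE_CENTS_PER_SECOND[resolution]  # KeyError on an unknown tier
    return duration_s * rate / 100  # integer cents avoid float drift
```

For example, a default-length 5 s clip at 1080P comes to $1.40, and a maximum-length 15 s clip at 720P comes to $2.25.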

Introduction To HappyHorse 1.0 Image-to-Video

HappyHorse 1.0 I2V is now available on RunComfy through Alibaba. Built on the #1 Arena-ranked HappyHorse 1.0 unified Transformer (Elo 1392 on the Artificial Analysis Image-to-Video leaderboard), HappyHorse 1.0 I2V animates a single source image into native 1080p video with physics-accurate motion, identity-preserving faces, and stable composition. Users can drive scene action with a natural-language prompt while HappyHorse 1.0 I2V keeps subject color, lighting, and product details true to the original frame.
Ideal for: product reveal clips | portrait animation | character motion shots | cinematic ad teasers | social media content


HappyHorse 1.0 I2V Image-to-Video


HappyHorse 1.0 I2V on RunComfy uses Alibaba's async video-synthesis API with the happyhorse-1.0-i2v model. You upload a source image, write a motion-focused prompt, and the model renders a coherent short clip while preserving subject identity, color, and composition from the original frame.
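The async flow described above is submit-then-poll: you create a job, then query its status until the render finishes. A minimal sketch of the polling half (the status names are hypothetical, and the status check is injected as a callable rather than hard-coding an endpoint — consult the RunComfy API docs for the real endpoints and response schema):

```python
import time

# Poll a video job until the provider reports a terminal status.
# Status strings here are illustrative, not the provider's actual values.
TERMINAL = {"SUCCEEDED", "FAILED", "CANCELED"}

def poll_until_done(fetch_status, interval_s=2.0, timeout_s=600.0):
    """Call fetch_status() until it returns a terminal status string."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in TERMINAL:
            return status
        time.sleep(interval_s)
    raise TimeoutError("video job did not finish in time")
```

In practice `fetch_status` would issue a GET against the job's status endpoint; passing it in as a callable keeps the loop independent of any one HTTP client and easy to test.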


Why it matters: HappyHorse 1.0 I2V tops the Artificial Analysis Image-to-Video Arena with an Elo of 1392, ahead of Seedance 2.0 and other commercial systems in blind human-preference voting. Powered by a 15B-parameter unified Transformer with DMD-2 distillation, the model delivers 1080p output at competitive speed without sacrificing facial fidelity, product geometry, or scene continuity.


Output format: video / resolution tier: 720P or 1080P / duration: 3–15 seconds / source: a single still image / aspect ratio: follows the first-frame image (there is no separate ratio parameter, unlike text-to-video)


Parameters


| Parameter | Required | Type | Default | Range / Options | Description |
| --- | --- | --- | --- | --- | --- |
| image_url | Yes | string | — | JPEG, JPG, PNG, WEBP; min 300px per side; aspect 1:2.5–2.5:1; max 10MB | First-frame image the model animates. |
| prompt | Yes | string | — | max 5000 non-CJK or 2500 CJK characters | Motion, camera, lighting, and mood; longer input is truncated per provider rules. |
| resolution | No | string | 1080P | 720P, 1080P | Output video resolution tier. |
| duration | No | integer | 5 | 3–15 | Output video duration in seconds. |
| seed | No | integer | 0 | 0–2147483647 | Random seed; 0 lets the provider choose one automatically. |
| watermark | No | boolean | true | true, false | Adds a "Happy Horse" mark at bottom-right when true (provider default). |
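The ranges in the parameters table can be enforced client-side before submitting a job, which fails fast instead of waiting on a provider-side rejection. A sketch under that assumption (field names mirror the table; the helper functions themselves are our own, not part of the API):

```python
def build_payload(image_url: str, prompt: str, resolution: str = "1080P",
                  duration: int = 5, seed: int = 0,
                  watermark: bool = True) -> dict:
    """Validate the documented ranges and assemble a request body."""
    if not image_url.lower().endswith((".jpeg", ".jpg", ".png", ".webp")):
        raise ValueError("image must be JPEG, JPG, PNG, or WEBP")
    if len(prompt) > 5000:  # the provider truncates; fail fast instead
        raise ValueError("prompt exceeds 5000 characters")
    if resolution not in ("720P", "1080P"):
        raise ValueError("resolution must be 720P or 1080P")
    if not 3 <= duration <= 15:
        raise ValueError("duration must be 3-15 seconds")
    if not 0 <= seed <= 2147483647:
        raise ValueError("seed must be 0-2147483647")
    return {"image_url": image_url, "prompt": prompt,
            "resolution": resolution, "duration": duration,
            "seed": seed, "watermark": watermark}

def image_dims_ok(width: int, height: int) -> bool:
    """Check the first-frame constraints: min 300px per side,
    aspect ratio between 1:2.5 and 2.5:1."""
    return min(width, height) >= 300 and 1 / 2.5 <= width / height <= 2.5
```

Checking the source image's pixel dimensions requires opening the file locally (e.g. with Pillow), which is why the dimension check is a separate helper from the URL-level payload validation.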

Prompt Tips


  • Lead with motion verbs: drift, dolly in, orbit, tilt up, reveal, push, blink, breathe.
  • Tell the model what must stay fixed — identity, packaging, layout, background geometry.
  • Add lighting evolution (soft sun moving across the face, rim light intensifying) for a more cinematic result.
  • Keep the action to one clear visual beat per clip; single-intent shots render most cleanly.
  • Reuse the same seed when comparing prompt variants.

Notes


  • This template is image-to-video only; for prompt-only generation use the HappyHorse 1.0 text-to-video template.
  • Output aspect ratio follows the source image proportions.
  • Duration outside 3–15 seconds is not exposed in this template.

Related Models

hailuo-02/pro/text-to-video

Generate sharp HD videos from text with Minimax Hailuo 02 Pro.

ltx-2-19b/video-to-video/lora

Efficient video transformation with cinematic motion and design precision.

wan-2-2/fun-control

First-frame restyle locks cinematic look across full AI video.

pikaframes

Animate between two images with smooth keyframe transitions using Pikaframes.

infinite-talk/image-to-video

Create photo-based, speech-aligned videos with natural motion.

pixverse/v5.5/effects

Transform stills into narrative clips with synced audio and fluid camera motion.

Frequently Asked Questions

What is HappyHorse 1.0 I2V?

HappyHorse 1.0 I2V is the image-to-video version of HappyHorse 1.0 — the #1 model on the Artificial Analysis Image-to-Video Arena with an Elo of 1392. HappyHorse 1.0 I2V animates a single source image into native 1080p video using a 15B-parameter unified Transformer, preserving subject identity, color, lighting, and composition while adding physics-accurate motion.

How is HappyHorse 1.0 I2V ranked among image-to-video models?

On the Artificial Analysis Video Arena (a blind A/B human-preference Elo system), HappyHorse 1.0 I2V holds the #1 position in the no-audio image-to-video category at Elo 1392 — roughly 30–50 Elo points ahead of Seedance 2.0 and well ahead of Kling 3.0 Pro, Veo 3.1, and Runway Gen-4.5 as of early 2026.
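Elo ratings translate into head-to-head win probabilities via the standard logistic formula, which is what makes a 30–50 point gap meaningful. A quick sketch (the 40-point gap below is illustrative, not a quoted pairing):

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Standard Elo expected score: P(A beats B) under the
    logistic model with a 400-point scale."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

# A 40-point Elo lead corresponds to winning roughly 56% of
# blind A/B matchups against that opponent.
```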

What resolution and duration does HappyHorse 1.0 I2V output?

HappyHorse 1.0 I2V outputs native 720P or 1080P HD clips with selectable durations from 3 to 15 seconds. Output aspect ratio follows the source image proportions, and detail levels are suitable for ad delivery and social publishing without re-grading.

Does HappyHorse 1.0 I2V preserve the subject in the source image?

Yes. HappyHorse 1.0 I2V is designed to preserve facial features, product geometry, packaging details, and overall composition from the input frame. It applies motion, camera moves, and lighting evolution while keeping identity and layout stable across the clip.

What kind of prompts work best for HappyHorse 1.0 I2V?

Prompts should describe motion and camera language, not restate what the image shows. Use verbs like drift, dolly in, orbit, tilt, reveal, blink, and breathe; specify what must stay fixed (identity, packaging, background); add lighting evolution and atmosphere for cinematic results.

What is the architecture behind HappyHorse 1.0 I2V?

HappyHorse 1.0 I2V is powered by a 15-billion-parameter single-stream self-attention Transformer with 40 layers (a sandwich design — modality-specific embedding/decoding at the ends, 32 shared parameter layers in the middle). DMD-2 distillation reduces inference to 8 denoising steps without classifier-free guidance, enabling 1080p clips in roughly 38 seconds on an H100.

What are the typical use cases for HappyHorse 1.0 I2V?

HappyHorse 1.0 I2V is ideal for product reveal clips, portrait animation, character motion shots, cinematic ad teasers, packaging-to-presentation transitions, and short-form social content where you already have a strong still image and need it to move with stable identity.


Examples Of HappyHorse 1.0 I2V Creations
