HappyHorse 1.0: Cinematic 1080p AI Video Generation Model | RunComfy

happyhorse/happyhorse-1.0/text-to-video

HappyHorse 1.0 is live on RunComfy for text-to-video generation. Use natural-language prompts to generate 3–15s videos in 720P or 1080P across five aspect ratios.

  • prompt — Describe the scene, subject motion, camera, lighting, and any audio you want implied in the generated video. Max 2500 characters.
  • aspect_ratio — Output video aspect ratio.
  • resolution — Output video resolution. HappyHorse supports 720P or 1080P.
  • duration — Output video duration in seconds. Allowed values: 3–15.
  • seed — Optional seed for reproducible generations. Use 0 to let the provider randomize.

Pricing: $0.15 per second for 720P and $0.28 per second for 1080P.
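The per-second pricing above makes cost estimates straightforward. A minimal sketch (the rates come from this page; the function name and validation are ours, not an official RunComfy client):

```python
# Per-second rates from this page's pricing section.
RATES_PER_SECOND = {"720P": 0.15, "1080P": 0.28}

def estimate_cost(resolution: str, duration_s: int) -> float:
    """Estimate the cost in USD for one generated clip."""
    if resolution not in RATES_PER_SECOND:
        raise ValueError(f"unsupported resolution: {resolution}")
    if not 3 <= duration_s <= 15:
        raise ValueError("duration must be 3-15 seconds")
    return round(RATES_PER_SECOND[resolution] * duration_s, 2)

print(estimate_cost("1080P", 10))  # a 10 s 1080P clip costs 2.8 (USD)
```

At the maximum 15-second duration, a 1080P clip costs $4.20 and a 720P clip $2.25.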

Introduction To HappyHorse 1.0 Text-to-Video

HappyHorse 1.0 is now available on RunComfy for text-to-video generation through Alibaba. It delivers native 1080p output, strong motion quality, and multi-shot scene coherence with support for 16:9, 9:16, 1:1, 4:3, and 3:4 aspect ratios.
Ideal for: cinematic short-form video | marketing campaigns | social content | moodboards | storyboards


HappyHorse 1.0 Text-to-Video


HappyHorse 1.0 on RunComfy uses Alibaba's async video-synthesis API with the happyhorse-1.0-t2v model. You provide a text prompt and choose a supported resolution/aspect-ratio combination, duration, optional seed, and whether the provider watermark should be included.


  • Output format: video
  • Resolution tier: 720P or 1080P
  • Duration: 3–15 seconds
  • Aspect ratio: 16:9, 9:16, 1:1, 4:3, 3:4
  • Audio: not exposed in this template


Parameters


| Parameter | Required | Type | Default | Range / Options | Description |
|-----------|----------|------|---------|-----------------|-------------|
| prompt | Yes | string | — | max 2500 chars | Describe the scene, subject, motion, camera, lighting, and style for the video. |
| aspect_ratio | No | string | 16:9 | 16:9, 9:16, 1:1, 4:3, 3:4 | Aspect ratio of the generated video. |
| resolution | No | string | 1080P | 720P, 1080P | Output video resolution tier. |
| duration | No | integer | 5 | 3–15 | Output video duration in seconds. |
| seed | No | integer | 0 | 0 to 2147483647 | Optional random seed. Use 0 to let the provider choose one automatically. |
| watermark | No | boolean | true | true, false | Whether to include the provider watermark in the generated video. |
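The table's defaults and ranges can be enforced client-side before submitting a task. A minimal sketch that builds a request payload; the field names mirror the table, but the payload shape is illustrative, not the official API schema:

```python
# Allowed values from the parameter table above.
ASPECT_RATIOS = {"16:9", "9:16", "1:1", "4:3", "3:4"}
RESOLUTIONS = {"720P", "1080P"}

def build_payload(prompt, aspect_ratio="16:9", resolution="1080P",
                  duration=5, seed=0, watermark=True):
    """Validate parameters against the documented ranges and return a payload dict."""
    if not prompt or len(prompt) > 2500:
        raise ValueError("prompt is required and capped at 2500 characters")
    if aspect_ratio not in ASPECT_RATIOS:
        raise ValueError(f"aspect_ratio must be one of {sorted(ASPECT_RATIOS)}")
    if resolution not in RESOLUTIONS:
        raise ValueError("resolution must be 720P or 1080P")
    if not 3 <= duration <= 15:
        raise ValueError("duration must be 3-15 seconds")
    if not 0 <= seed <= 2147483647:
        raise ValueError("seed must be in [0, 2147483647]")
    return {"prompt": prompt, "aspect_ratio": aspect_ratio,
            "resolution": resolution, "duration": duration,
            "seed": seed, "watermark": watermark}
```

Calling `build_payload("A horse galloping on a beach at sunset")` returns a payload with the documented defaults (16:9, 1080P, 5 seconds, seed 0, watermark on).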

How to Use


  1. Write a prompt that clearly describes the subject, action, camera movement, and visual style.
  2. Choose an aspect ratio based on where the clip will be used.
  3. Pick 720P or 1080P output.
  4. Set the duration between 3 and 15 seconds.
  5. Optionally set a seed if you want more repeatable generations.
  6. Keep watermark enabled if you want the provider default behavior, or disable it when your use case allows.
  7. Submit the task and download the finished video.
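Because the provider API is asynchronous, step 7 means submitting a task and polling until it finishes. A minimal sketch of the polling half; `get_status`, the status strings, and the `video_url` key are illustrative assumptions, not the official API, so adapt them to whatever client you use:

```python
import time

def wait_for_video(get_status, task_id, poll_s=5.0, timeout_s=600.0):
    """Poll an async video task until it succeeds, fails, or times out.

    `get_status(task_id)` should return a dict with a "status" key
    ("pending", "succeeded", or "failed") and, on success, a "video_url".
    These names are hypothetical placeholders for your client's API.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        task = get_status(task_id)
        if task["status"] == "succeeded":
            return task["video_url"]
        if task["status"] == "failed":
            raise RuntimeError(task.get("error", "generation failed"))
        time.sleep(poll_s)  # avoid hammering the API between checks
    raise TimeoutError(f"task {task_id} did not finish in {timeout_s}s")
```

Injecting `get_status` keeps the loop independent of any particular HTTP client and makes it easy to test without network access.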

Prompt Tips


  • Describe motion over time, not just a static frame.
  • Include camera language such as close-up, tracking shot, crane move, handheld, or locked tripod.
  • Keep the scene focused on one clear visual beat when testing prompts.
  • Add material, lighting, and atmosphere details for more controllable output.
  • If you need more consistent comparisons between prompt variants, reuse the same seed.
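The last tip, pinning the seed while varying only the prompt, can be sketched as a small A/B setup. The payload shape is illustrative (see the parameters table for the documented field names):

```python
# Hold every parameter constant except the prompt so that output
# differences come from the wording, not from a new random seed.
base = {"aspect_ratio": "16:9", "resolution": "720P",
        "duration": 5, "seed": 1234, "watermark": True}

variants = [
    "Close-up of rain on a window, slow rack focus to city lights at dusk.",
    "Handheld close-up of rain on a window, neon reflections, night, moody.",
]

# One request per prompt variant, all sharing the same seed.
requests_to_run = [{**base, "prompt": p} for p in variants]
```

Each entry in `requests_to_run` differs only in its `prompt`, so the resulting clips are directly comparable.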

Notes


  • This template is text-to-video only.
  • RunComfy maps the selected resolution and aspect ratio into the provider's supported output size.
  • Duration outside 3–15 seconds is not exposed in this template.

Related Models

luma-ray-2/text-to-video

Generate high quality videos from text prompts using Luma Ray 2.

wan-2-2/speech-to-video

Turn photos into expressive videos with synced voice motion.

hunyuan/video-to-video

Transform one video into another style with Tencent Hunyuan Video.

pixverse/v5.5/image-to-video

Create dynamic, sound-synced motion clips from visuals for rich storytelling.

kling-video-o1/standard/text-to-video

Create lifelike cinematic video clips from prompts with motion control.

pikaframes

Animate between two images with smooth keyframe transitions using Pikaframes.

Frequently Asked Questions

What is HappyHorse 1.0?

HappyHorse 1.0 is a next-generation AI video model ranked #1 on the Artificial Analysis Video Arena for both text-to-video (Elo 1333) and image-to-video (Elo 1392). It generates native 1080p video with advanced motion synthesis, multi-shot character consistency, and multilingual support across six languages.

How is HappyHorse 1.0 ranked on the Artificial Analysis Video Arena?

The Artificial Analysis Video Arena ranks models through blind user voting: participants compare two videos generated from the same prompt without knowing which model made which, then pick the better result. Votes feed into an Elo rating system. As of April 2026, HappyHorse 1.0 holds the highest Elo in both the text-to-video and image-to-video (no audio) categories.

What video resolution does HappyHorse 1.0 produce?

The model outputs native 1080p HD resolution. Video includes rich color grading, accurate lighting, and film-grade detail suitable for broadcast and professional production without additional post-processing.

Does HappyHorse 1.0 support audio generation?

Yes. The model generates synchronized audio alongside video in one pass, including dialogue, ambient sounds, and Foley effects, and it ranks #2 in the with-audio categories on the Artificial Analysis leaderboard. Note, however, that audio output is not exposed in this RunComfy text-to-video template.

What languages does HappyHorse 1.0 support?

Six languages are natively supported: Chinese, English, Japanese, Korean, German, and French. Prompts in any supported language produce high-quality video with full linguistic nuance.

What is multi-shot storytelling in HappyHorse 1.0?

Multi-shot storytelling allows the model to generate video sequences with multiple shots while maintaining consistency in characters, wardrobe, visual style, and atmosphere across scene transitions — eliminating the need for manual editing between clips.

Can this model generate video from images?

Yes. The model supports both text-to-video and image-to-video through a unified pipeline. Upload a static image to animate it with intelligent motion synthesis, or describe a scene entirely through text.


Examples Of HappyHorse 1.0 Creations

[Example video gallery: six sample clips generated with HappyHorse 1.0]