Edit existing videos with natural-language instructions while preserving motion, layout, and identity.


HappyHorse 1.0 Video Edit on RunComfy uses Alibaba's async video-synthesis API with the happyhorse-1.0-video-edit model. You upload a source video, write a natural-language instruction describing the edit, and optionally attach up to five reference images. HappyHorse 1.0 Video Edit then returns an edited clip that follows the prompt while keeping the underlying motion, layout, and identity of the source footage stable.
Why HappyHorse 1.0 Video Edit matters: most video models only generate, but HappyHorse 1.0 Video Edit is purpose-built for instruction-driven editing. By conditioning on the input video, the prompt, and reference imagery together, HappyHorse 1.0 Video Edit can perform style change, local replacement, outfit swap, and pattern transfer without requiring you to mask or rotoscope by hand. The result is a fast, prompt-first way to iterate on existing footage.
- Output format: video
- Resolution tier: 720P or 1080P
- Output duration: 3–15 seconds
- Source video: 3–60 seconds (clips longer than 15s are truncated to the first 15s)
- Reference images: 0–5
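Because the service runs as an async task API, a client typically submits the edit job and polls until it finishes. The sketch below shows that pattern in outline; `submit` and `fetch_status` stand in for the provider's real HTTP calls, and the response field names (`state`, `video_url`) are assumptions for illustration, not the actual wire schema.

```python
import time

def run_edit_task(submit, fetch_status, payload, poll_interval=5, timeout=600):
    """Submit an edit job, then poll until it succeeds, fails, or times out.

    `submit` and `fetch_status` are injected callables standing in for real
    HTTP requests; field names in the status dict are illustrative only.
    """
    task_id = submit(payload)              # enqueue the async edit job
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status(task_id)     # e.g. {"state": "SUCCEEDED", "video_url": ...}
        if status["state"] == "SUCCEEDED":
            return status["video_url"]
        if status["state"] == "FAILED":
            raise RuntimeError(status.get("message", "edit task failed"))
        time.sleep(poll_interval)          # back off between polls
    raise TimeoutError("edit task did not finish in time")
```

Injecting the transport functions keeps the control flow visible without tying the example to any particular HTTP client or endpoint path.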
| Parameter | Required | Type | Default | Range / Options | Description |
|---|---|---|---|---|---|
| video_url | Yes | string | — | MP4 or MOV, 3–60s, ≤100MB | Source video to edit. |
| prompt | Yes | string | — | ≤5000 chars (≤2500 Chinese) | Instruction describing the edit to apply. |
| reference_image_url_1…5 | No | string | — | JPEG, JPG, PNG, WEBP | Up to five reference images that guide the edit. |
| resolution | No | string | 1080P | 720P, 1080P | Output resolution tier. |
| audio_setting | No | string | auto | auto, origin | auto lets the model decide; origin preserves the source audio track. |
| seed | No | integer | 0 | 0 to 2147483647 | Optional random seed. |
| watermark | No | boolean | true | true, false | Keep the provider "Happy Horse" watermark. |
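Assuming a JSON request body whose field names mirror the table above, a minimal payload builder with client-side range checks might look like this. The exact wire format is not confirmed by this page, so treat the structure as a sketch:

```python
def build_edit_payload(video_url, prompt, reference_image_urls=(), resolution="1080P",
                       audio_setting="auto", seed=0, watermark=True):
    """Assemble a request body for happyhorse-1.0-video-edit.

    Field names mirror the parameter table; the actual schema expected by the
    provider may differ (assumption, for illustration).
    """
    if not video_url or not prompt:
        raise ValueError("video_url and prompt are required")
    if len(reference_image_urls) > 5:
        raise ValueError("at most 5 reference images are allowed")
    if resolution not in ("720P", "1080P"):
        raise ValueError("resolution must be 720P or 1080P")
    if audio_setting not in ("auto", "origin"):
        raise ValueError("audio_setting must be auto or origin")
    if not (0 <= seed <= 2147483647):
        raise ValueError("seed must be in [0, 2147483647]")
    payload = {
        "model": "happyhorse-1.0-video-edit",
        "video_url": video_url,
        "prompt": prompt,
        "resolution": resolution,
        "audio_setting": audio_setting,
        "seed": seed,
        "watermark": watermark,
    }
    # Expand references into reference_image_url_1 ... reference_image_url_5
    for i, url in enumerate(reference_image_urls, start=1):
        payload[f"reference_image_url_{i}"] = url
    return payload
```

Validating ranges before submission surfaces mistakes immediately instead of after an async round-trip.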
HappyHorse 1.0 Video Edit is the instruction-based video editing model in the HappyHorse 1.0 family, available on RunComfy through Alibaba. HappyHorse 1.0 Video Edit takes a source video, a text prompt, and up to five reference images, then returns an edited clip that follows the instruction while preserving motion and structure from the original footage.
HappyHorse 1.0 Video Edit handles style transfer, local replacement, outfit swap, pattern transfer, color regrade, and other instruction-driven modifications. Because HappyHorse 1.0 Video Edit conditions on the input video together with the prompt and reference imagery, it can target specific subjects without manual masking or rotoscoping.
HappyHorse 1.0 Video Edit accepts MP4 or MOV videos (H.264 recommended), 3–60 seconds long, with the long side ≤2160px, short side ≥320px, aspect ratio between 1:2.5 and 2.5:1, file size ≤100MB, and frame rate >8fps. Reference images can be JPEG, JPG, PNG, or WEBP, at least 300px on each side, with the same aspect ratio range and ≤10MB each — up to five references per HappyHorse 1.0 Video Edit call.
HappyHorse 1.0 Video Edit outputs 720P or 1080P clips. Output duration matches the input video up to 15 seconds; if the input is longer, HappyHorse 1.0 Video Edit uses only the first 15 seconds as the working segment.
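The input limits and the 15-second working-segment rule above can be captured in a small pre-flight check. This is a convenience sketch; the service performs its own validation server-side:

```python
def check_source_video(duration_s, width, height, size_mb, fps):
    """Validate a source clip against the documented limits.

    Returns the effective working-segment length in seconds (inputs over
    15 s are truncated to their first 15 s).
    """
    long_side, short_side = max(width, height), min(width, height)
    ratio = width / height
    if not (3 <= duration_s <= 60):
        raise ValueError("duration must be 3-60 s")
    if long_side > 2160 or short_side < 320:
        raise ValueError("long side must be <=2160 px and short side >=320 px")
    if not (1 / 2.5 <= ratio <= 2.5):
        raise ValueError("aspect ratio must be between 1:2.5 and 2.5:1")
    if size_mb > 100:
        raise ValueError("file size must be <=100 MB")
    if fps <= 8:
        raise ValueError("frame rate must be above 8 fps")
    return min(duration_s, 15.0)
```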
Lead with the change verb (replace, restyle, swap, recolor) and name both the target subject and what should remain unchanged. When attaching reference images, tell HappyHorse 1.0 Video Edit explicitly how to use them — for example, "apply the striped pattern from the reference image to the character's sweater". One clear edit per call is more reliable than chaining several edits inside a single HappyHorse 1.0 Video Edit prompt.
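As an illustration only, the prompting guidance above (verb first, name the target, state what stays unchanged, explain reference usage) can be turned into a small template helper. The template is an assumption for demonstration, not a required prompt format:

```python
def build_edit_prompt(change_verb, target, new_value, preserve, reference_hint=None):
    """Compose a single-edit instruction: lead with the change verb, name the
    target, and spell out what must remain unchanged. One edit per call."""
    prompt = f"{change_verb} the {target} with {new_value}, keeping {preserve} unchanged."
    if reference_hint:
        prompt += " " + reference_hint  # e.g. how to apply an attached reference image
    return prompt
```

For example, `build_edit_prompt("Replace", "character's sweater", "a striped sweater", "the face, pose, and background", "Apply the striped pattern from the reference image.")` yields a single, unambiguous instruction.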
Yes. Set audio_setting to origin and HappyHorse 1.0 Video Edit will preserve the input video's original audio track on the output. The default auto lets the model decide based on the edit and prompt.
HappyHorse 1.0 Video Edit is ideal for outfit and wardrobe swaps, character restyling, brand-pattern application, localized object replacement, scene-level style transfer (anime, pixel art, oil painting, etc.), and rapid creative remixes of existing footage where you want to keep motion and identity stable.





