Kling O1: Cinematic Image-to-Video Playground & API | RunComfy
Kuaishou's Kling O1 transforms images into dynamic videos with support for start and end frames, offering industry-leading motion consistency and realism.
Introduction to Kling Video O1 Image-to-Video
Kling O1 is a cutting-edge generative model by Kuaishou that specializes in transforming static images into high-fidelity, cinema-grade videos with precise motion control. Designed for creators and professional teams, this Kling O1 Image-to-Video variant enables seamless start-to-end frame animation. For developers, Kling Video O1 Image-to-Video on RunComfy can be used both in the browser and via an HTTP API, so you don’t need to host or scale the model yourself.
Examples of Kling O1 Media Transformations
Related Playgrounds
AI-driven tool for seamless object separation and smooth video compositing.
Generate sharp HD videos from text with Minimax Hailuo 02 Pro.
Build a scene from 1–6 images and animate it into a video.
Create rapid, high-quality video drafts with precise style and speed.
Cinematic portrait video maker with prompt control and emotion-rich motion.
Frequently Asked Questions
What specific tasks can the Kling O1 model handle on RunComfy?
The Kling O1 model on RunComfy is specialized for high-fidelity Image-to-Video generation. Unlike standard video models, this Kling O1 workflow allows you to define both a starting frame and an ending frame (using start_image_url and end_image_url), enabling precise control over the video's narrative and motion trajectory.
Can I use Kling O1 output for commercial projects?
Commercial usage of Kling O1 depends on Kuaishou's specific licensing terms, as they are the original model creators. While RunComfy provides the infrastructure and API to run the model, you must ensure your project complies with the official Kling O1 usage policy regarding commercial rights and attribution.
What are the strict image requirements for inputs in Kling O1?
To ensure successful generation with Kling O1, your input images must be under 10 MB in file size. Additionally, both width and height must exceed 300 pixels, and the aspect ratio must fall between 0.40 and 2.50. Images outside these Kling O1 specifications may result in validation errors.
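The constraints above lend themselves to a simple pre-flight check before upload. The sketch below encodes the stated limits (under 10 MB, both sides over 300 px, aspect ratio between 0.40 and 2.50); the function and constant names are our own illustration, not part of any official RunComfy or Kling SDK:

```python
# Illustrative pre-flight check for Kling O1 input images, based on the
# limits stated above. Names here are our own, not an official validator.

MAX_BYTES = 10 * 1024 * 1024        # file size must be under 10 MB
MIN_SIDE = 300                      # width and height must exceed 300 px
MIN_RATIO, MAX_RATIO = 0.40, 2.50   # allowed width/height aspect ratio

def validate_input_image(size_bytes: int, width: int, height: int) -> list[str]:
    """Return a list of constraint violations (empty list means valid)."""
    errors = []
    if size_bytes >= MAX_BYTES:
        errors.append(f"file is {size_bytes} bytes; must be under {MAX_BYTES}")
    if width <= MIN_SIDE or height <= MIN_SIDE:
        errors.append(f"{width}x{height}: both sides must exceed {MIN_SIDE} px")
    ratio = width / height
    if not (MIN_RATIO <= ratio <= MAX_RATIO):
        errors.append(f"aspect ratio {ratio:.2f} outside {MIN_RATIO}-{MAX_RATIO}")
    return errors
```

For example, a 1920x1080 image under 10 MB passes all three checks, while a 4000x1000 image fails on aspect ratio (4.00).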
How does the API handle concurrency and latency for Kling O1?
Video generation with Kling O1 is computationally intensive. While generation time varies based on the selected duration (5s or 10s), RunComfy’s infrastructure manages concurrency automatically. This ensures that even during high traffic, your Kling O1 API requests are queued and processed efficiently without you needing to manage GPU scaling.
How do I transition from the Playground to a production API integration?
Transitioning is seamless. Once you have tested your prompts and parameters in the Kling O1 Playground, you can use the exact same inputs (like prompt, start_image_url, and duration) in your code. Simply call the RunComfy HTTP endpoint to integrate Kling O1 directly into your application.
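In code, that transition amounts to packaging the same Playground fields as a JSON POST body. The sketch below is illustrative only: the endpoint URL and auth header are placeholders to be replaced with the values from your RunComfy dashboard, while the parameter names (prompt, start_image_url, duration) mirror the Playground inputs described above.

```python
# Sketch of moving Playground parameters into a production call.
# The endpoint URL and API key below are placeholders, not real values;
# the payload fields mirror the Playground inputs described above.
import json
import urllib.request

def build_request(api_key: str, payload: dict) -> urllib.request.Request:
    """Package the inputs you tuned in the Playground as an HTTP POST."""
    return urllib.request.Request(
        "https://api.example.com/kling-o1",  # placeholder endpoint
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

payload = {
    "prompt": "Slow dolly-in on the subject, cinematic lighting",
    "start_image_url": "https://example.com/frame-start.jpg",
    "duration": 5,  # 5s or 10s, matching the Playground options
}
req = build_request("YOUR_API_KEY", payload)
# urllib.request.urlopen(req)  # uncomment to actually send the request
```

Keeping the payload identical to your tested Playground inputs means the only production-specific pieces are the endpoint and credentials.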
Does Kling O1 support frame interpolation between two specific images?
Yes. The Kling O1 Image-to-Video mode is designed for this. You should provide both a start image and an end image in the API parameters. In your prompt, use the references @Image1 and @Image2 to guide Kling O1 on how to transition between these two visual states.
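A minimal sketch of such a request body follows; the @Image1 and @Image2 prompt references map to start_image_url and end_image_url as described above, and the URLs are placeholders:

```python
# Sketch of a start/end frame interpolation payload for Kling O1.
# @Image1 and @Image2 in the prompt refer to the start and end images,
# as described above; the URLs here are placeholders.
payload = {
    "prompt": "Transition smoothly from @Image1 to @Image2 as daylight fades",
    "start_image_url": "https://example.com/day.jpg",   # @Image1
    "end_image_url": "https://example.com/dusk.jpg",    # @Image2
    "duration": 10,
}
```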
