Integrate the powerful Kling O1 Image-to-Video capabilities into your applications with ease using the RunComfy API. Below is a quick guide to getting started.
To use the Kling O1 API, you need an API key from your RunComfy dashboard. Ensure your account is funded to handle requests.
The API returns a JSON object containing the output URL (the generated video file) upon completion. The specific input parameters for Kling O1 are defined as follows:
The prompt (string) is required to describe the desired motion or scene changes. You should use @Image1 to reference the start frame and @Image2 to reference the end frame within your text description.
The start_image_url (string) is the mandatory first frame of the video. The API accepts standard image formats such as jpg, png, and webp. Ensure the file size is under 10 MB, both dimensions exceed 300 pixels, and the aspect ratio falls between 0.40 and 2.50.
The end_image_url (string) is an optional parameter that defines the last frame of the video, allowing for precise transition control. It follows the same file-size and resolution validation rules as the start image.
Finally, the duration (integer) parameter determines the length of the generated video. You can choose either a 5-second or a 10-second generation.
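The parameters above can be assembled into a single JSON request body. A minimal sketch in Python follows; the endpoint path and header names are assumptions (check your RunComfy dashboard for the exact values), and the image URLs are placeholders:

```python
import json

# Build the request payload from the documented Kling O1 parameters.
API_KEY = "YOUR_RUNCOMFY_API_KEY"  # from your RunComfy dashboard

payload = {
    # @Image1 / @Image2 reference the start and end frames in the prompt.
    "prompt": "Slow dolly-in from @Image1, dissolving into @Image2 at dusk",
    "start_image_url": "https://example.com/start.jpg",
    "end_image_url": "https://example.com/end.jpg",  # end frame for transition control
    "duration": 5,  # 5 or 10 seconds
}

body = json.dumps(payload)

# To submit, POST `body` with your API key, e.g. (sketch, endpoint assumed):
# resp = requests.post("https://api.runcomfy.net/...", data=body,
#                      headers={"Authorization": f"Bearer {API_KEY}"})
print(body)
```

The same payload shape works whether you call the endpoint from a script or a backend service, since the API is plain HTTP with a JSON body.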
The Kling O1 model represents a significant leap in AI video generation. Unlike standard text-to-video models, this specific Kling O1 workflow focuses on Image-to-Video synthesis, allowing users to define the exact starting and ending visual states of a clip. This makes it ideal for storyboarding, advertising, and content creation where specific visual continuity is required.
Kling O1 excels at understanding complex physics and lighting.
To ensure optimal performance when using Kling O1, adhere to the input specifications: images under 10 MB, with both dimensions above 300 pixels and an aspect ratio between 0.40 and 2.50.
Our browser-based playground allows you to test Kling O1 immediately. Simply upload your start image, optionally add an end image, and watch the model generate professional-grade video in seconds. It is the fastest way to iterate on prompts before integrating the API.
You can seamlessly switch from prototyping to production. The exact parameters you tweak in the playground are available via our scalable <a href="https://www.runcomfy.com/models/kling/kling-video-o1/image-to-video">API endpoint</a>.
Create lifelike speech-synced visuals from scripts or clips with Kling Lipsync for precise facial animation and realistic results.
Animate an image into a high-quality video with OpenAI Sora 2 Pro.
Transform reference clips with cinematic fidelity, refined motion, and seamless style control for creative professionals.
Consistent characters, objects, and scenes in any setting or angle.
Precise prompts, lifelike motion, vivid video quality.
AI-driven tool for seamless object separation and smooth video compositing.
The Kling O1 model on RunComfy is specialized for high-fidelity Image-to-Video generation. Unlike standard video models, this Kling O1 workflow allows you to define both a starting frame and an ending frame (using start_image_url and end_image_url), enabling precise control over the video's narrative and motion trajectory.
Commercial usage of Kling O1 depends on Kuaishou's specific licensing terms, as they are the original model creators. While RunComfy provides the infrastructure and API to run the model, you must ensure your project complies with the official Kling O1 usage policy regarding commercial rights and attribution.
To ensure successful generation with Kling O1, your input images must be under 10 MB in file size. Additionally, both width and height must exceed 300 pixels, and the aspect ratio must fall between 0.40 and 2.50. Images outside these Kling O1 specifications may result in validation errors.
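These validation rules can be checked client-side before uploading, which avoids a round trip that would only return a validation error. A minimal sketch (the helper name and error messages are my own):

```python
def validate_kling_input(size_bytes: int, width: int, height: int) -> list:
    """Check an image against the stated Kling O1 input rules and
    return a list of human-readable violations (empty means valid)."""
    errors = []
    if size_bytes >= 10 * 1024 * 1024:  # must be under 10 MB
        errors.append("file size must be under 10 MB")
    if width <= 300 or height <= 300:   # both dimensions must exceed 300 px
        errors.append("both width and height must exceed 300 pixels")
    ratio = width / height
    if not (0.40 <= ratio <= 2.50):     # aspect ratio bounds
        errors.append("aspect ratio must fall between 0.40 and 2.50")
    return errors

# A 1920x1080 image of about 2 MB passes every check:
print(validate_kling_input(2_000_000, 1920, 1080))  # []
```

Running the checks locally also lets you surface all violations at once instead of discovering them one failed request at a time.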
Video generation with Kling O1 is computationally intensive. While generation time varies based on the selected duration (5s or 10s), RunComfy’s infrastructure manages concurrency automatically. This ensures that even during high traffic, your Kling O1 API requests are queued and processed efficiently without you needing to manage GPU scaling.
Transitioning is seamless. Once you have tested your prompts and parameters in the Kling O1 Playground, you can use the exact same inputs (like prompt, start_image_url, and duration) in your code. Simply call the RunComfy HTTP endpoint to integrate Kling O1 directly into your application.
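As noted earlier, the API returns a JSON object containing the output URL on completion, so the last step of an integration is pulling that URL out of the response. A sketch under the assumption that the field is named output_url (the actual RunComfy response schema may differ):

```python
import json

def extract_output_url(response_text: str):
    """Pull the generated video URL out of a completion response.
    The field name 'output_url' is an assumption; adjust it to the
    actual RunComfy response schema."""
    data = json.loads(response_text)
    return data.get("output_url")

# Simulated completion payload (shape assumed for illustration):
done = json.dumps({"status": "completed",
                   "output_url": "https://cdn.example/video.mp4"})
print(extract_output_url(done))  # https://cdn.example/video.mp4
```

Using .get() rather than indexing means an in-progress or failed response simply yields None instead of raising, which keeps a polling loop simple.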
Yes. The Kling O1 Image-to-Video mode is designed for this. You should provide both a start image and an end image in the API parameters. In your prompt, use the references @Image1 and @Image2 to guide Kling O1 on how to transition between these two visual states.
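The @Image1/@Image2 referencing convention can be wrapped in a small helper so every prompt names both frames consistently. A sketch (the function name and phrasing template are my own):

```python
def transition_prompt(start_desc: str, end_desc: str, motion: str) -> str:
    """Compose a Kling O1 prompt that references the start frame as
    @Image1 and the end frame as @Image2, per the dual-frame convention."""
    return (f"Starting from @Image1 ({start_desc}), {motion}, "
            f"ending on @Image2 ({end_desc}).")

prompt = transition_prompt(
    "a city street at dawn",
    "the same street at night",
    "the camera slowly pans right as daylight fades",
)
print(prompt)
```

The resulting string goes straight into the prompt parameter alongside start_image_url and end_image_url.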