AI model for dynamic dubbing and expressive video creation from voice or footage.
| Parameter | Type | Default/Range | Description |
|---|---|---|---|
| prompt | string | "" | Required. Describe subject, action, camera, lighting, and optional dialogue/ambience for audio. |
| image_url | string (image URI) | "" | Required. Publicly accessible URL to the reference image used for image-to-video generation. |
| negative_prompt | string | "blur, distort, and low quality" | Optional. List attributes to avoid (e.g., artifacts, unwanted styles, lighting). |
| duration | integer | 5 or 10 | Clip length in seconds. Choose 5s or 10s. |
| generate_audio | boolean | true | Toggle native audio generation. Supports English/Chinese speech; other languages auto-translate to English. For English, use lowercase; use uppercase for acronyms/proper nouns. |
Developers can integrate Kling 2.6 Pro via the RunComfy API using standard HTTP requests with simple JSON payloads. Authentication, job submission, and result polling follow familiar REST patterns, enabling quick pipeline adoption in production or toolchains.
Note: API Endpoint for Kling 2.6 Pro
If you do not have a reference image and want to generate directly from text, use Kling 2.6 Pro Text-to-Video, which is optimized for prompt-driven scene creation and native audio.
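The REST flow described above can be sketched with the standard library alone. Note that the endpoint URL, the `Bearer` auth scheme, and the commented-out response handling are assumptions for illustration — check your RunComfy dashboard and the official API reference for the real values. The payload fields match the parameter tables above.

```python
import json
import urllib.request

# Hypothetical endpoint and key -- replace with the actual Kling 2.6 Pro
# image-to-video URL and the API key from your RunComfy dashboard.
API_URL = "https://api.runcomfy.net/v1/kling-2.6-pro/image-to-video"
API_KEY = "YOUR_API_KEY"


def build_payload(prompt, image_url, duration=5, generate_audio=True,
                  negative_prompt="blur, distort, and low quality"):
    """Assemble the JSON body from the parameters in the tables above."""
    return {
        "prompt": prompt,
        "image_url": image_url,
        "negative_prompt": negative_prompt,
        "duration": duration,          # 5 or 10 seconds
        "generate_audio": generate_audio,
    }


def submit_job(payload):
    """POST the job and return the parsed JSON response (e.g. a job id)."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    payload = build_payload(
        prompt="A woman turns toward the camera and smiles, soft window light",
        image_url="https://example.com/reference.jpg",
        duration=5,
    )
    # job = submit_job(payload)  # requires a valid API key and network access
```

The same payload works from any HTTP client; only the auth header and endpoint change between environments.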
Related models:
- **Kling 2.6 Pro**: Generate cinematic motion clips with precise control and audio sync.
- **WAN 2.2 LoRA**: Render fluid, stylized scenes with fast, frame-consistent output; the latest AI tool for realistic video creation from text.
- **Pika 2.2**: Create high-quality videos from text prompts; a next-gen tool turning prompts into cinematic 4K video clips with audio.
Kling 2.6 Pro follows a Non-Commercial or restricted Open RAIL-style license depending on your access channel. Using Kling 2.6 Pro image-to-video outputs through RunComfy does not change the original licensing terms — you must comply with Kuaishou Technology’s official policies when using generated content for commercial distribution.
Kling 2.6 Pro currently supports up to 1080p resolution across common aspect ratios (16:9, 9:16, 1:1). Prompt inputs are limited to around 1,000 tokens, and image-to-video sessions allow 1–2 reference images per render. Exceeding these constraints can cause warnings or degraded fidelity.
You can start with the Kling 2.6 Pro Web Playground to test your image-to-video prompts, then move to RunComfy’s API using your API key. The API mirrors Playground behavior but supports automated scaling, enabling you to integrate Kling 2.6 Pro directly into commercial or enterprise workflows.
Kling 2.6 Pro introduces better facial motion, smoother transitions, built-in audio generation, and more accurate prompt interpretation than Kling 2.5. Its image-to-video results show stronger character consistency and lighting realism, bringing it closer to cinematic-grade output quality.
Yes, Kling 2.6 Pro provides a toggle to disable audio, allowing silent image-to-video clips when native sound is unnecessary. This feature is useful for projects where you plan to add voiceover or sound design later in post-production.
The average latency for Kling 2.6 Pro image-to-video generation is approximately 10–20 seconds per 5-second clip, depending on scene complexity and system load. RunComfy API requests queue intelligently to maintain stable concurrency across high-traffic periods.
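Given those latency figures, a client can poll at a short fixed interval with a timeout. A minimal sketch, assuming a `fetch_status` callable that stands in for a GET on the job-status endpoint (the real route and the `state` field names are assumptions — consult the RunComfy API reference):

```python
import time


def poll_until_done(fetch_status, job_id, interval=2.0, timeout=120.0):
    """Poll a job until it finishes or the timeout expires.

    Generation typically takes on the order of 10-20 s per 5-second clip,
    so a short fixed interval is usually sufficient.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status(job_id)
        # Hypothetical terminal states; check the API docs for real values.
        if status.get("state") in ("succeeded", "failed"):
            return status
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} did not finish within {timeout}s")
```

Since RunComfy queues requests during high-traffic periods, a generous timeout is safer than aggressive retries.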
No. RunComfy provides access to Kling 2.6 Pro under Kuaishou’s defined license. Even when generating image-to-video content through RunComfy, users must comply with the original model’s licensing terms, including any limitations around redistribution or commercial monetization.
Kling 2.6 Pro supports 16:9, 9:16, and 1:1 aspect ratios during image-to-video generation. Selecting the ratio before rendering ensures optimal composition and framing, particularly for platforms like YouTube (16:9) or TikTok (9:16).
Yes. Kling 2.6 Pro is optimized for scalability via RunComfy’s cloud API, with GPU resource pooling for large teams. Its image-to-video capabilities enable automated marketing content or storytelling applications, but commercial use still requires adherence to the model’s licensing conditions.
RunComfy is the premier ComfyUI platform, offering ComfyUI online environment and services, along with ComfyUI workflows featuring stunning visuals. RunComfy also provides AI Models, enabling artists to harness the latest AI tools to create incredible art.