Use RunComfy's API to run ltx/ltx-2-19b/video-to-video/lora. For accepted inputs and outputs, see the model's schema.
curl --request POST \
--url https://model-api.runcomfy.net/v1/models/ltx/ltx-2-19b/video-to-video/lora \
--header "Content-Type: application/json" \
--header "Authorization: Bearer <token>" \
--data '{
"prompt": "black-and-white video, a girl is playing a guitar, film grain",
"video_url": "https://playgrounds-storage-public.runcomfy.net/tools/7312/media-files/input-1-1.mp4",
"loras": [
{
"path": "Lightricks/LTX-2-19b-IC-LoRA-Detailer",
"scale": 1
}
]
}'

Set the YOUR_API_TOKEN environment variable with your API key (manage keys in your Profile) and include it on every request as a Bearer token via the Authorization header: Authorization: Bearer $YOUR_API_TOKEN.
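For example, a minimal sketch (the key below is a placeholder; use a real key from your Profile):

export YOUR_API_TOKEN="<your_api_key>"
curl --request POST \
--url https://model-api.runcomfy.net/v1/models/ltx/ltx-2-19b/video-to-video/lora \
--header "Content-Type: application/json" \
--header "Authorization: Bearer $YOUR_API_TOKEN" \
--data '{"prompt": "black-and-white video, a girl is playing a guitar, film grain", "video_url": "https://playgrounds-storage-public.runcomfy.net/tools/7312/media-files/input-1-1.mp4", "loras": [{"path": "Lightricks/LTX-2-19b-IC-LoRA-Detailer", "scale": 1}]}'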
Submit an asynchronous generation job and immediately receive a request_id plus URLs to check status, fetch results, and cancel.
curl --request POST \
--url https://model-api.runcomfy.net/v1/models/ltx/ltx-2-19b/video-to-video/lora \
--header "Content-Type: application/json" \
--header "Authorization: Bearer <token>" \
--data '{
"prompt": "black-and-white video, a girl is playing a guitar, film grain",
"video_url": "https://playgrounds-storage-public.runcomfy.net/tools/7312/media-files/input-1-1.mp4",
"loras": [
{
"path": "Lightricks/LTX-2-19b-IC-LoRA-Detailer",
"scale": 1
}
]
}'
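The exact shape of the submit response is not covered by the schema below; as an illustration only (the field names here are assumptions), it may look like:

{
"request_id": "<request_id>",
"status_url": "https://model-api.runcomfy.net/v1/requests/<request_id>/status",
"result_url": "https://model-api.runcomfy.net/v1/requests/<request_id>/result",
"cancel_url": "https://model-api.runcomfy.net/v1/requests/<request_id>/cancel"
}

The three URLs correspond to the status, result, and cancel endpoints documented next.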
Fetch the current state for a request_id ("in_queue", "in_progress", "completed", or "cancelled").

curl --request GET \
--url https://model-api.runcomfy.net/v1/requests/{request_id}/status \
--header "Authorization: Bearer <token>"Retrieve the final outputs and metadata for the given request_id; if the job is not complete, the response returns the current state so you can continue polling.
Retrieve the final outputs and metadata for the given request_id; if the job is not complete, the response returns the current state so you can continue polling.

curl --request GET \
--url https://model-api.runcomfy.net/v1/requests/{request_id}/result \
--header "Authorization: Bearer <token>"Cancel a queued job by request_id, in-progress jobs cannot be cancelled.
Cancel a queued job by request_id; in-progress jobs cannot be cancelled.

curl --request POST \
--url https://model-api.runcomfy.net/v1/requests/{request_id}/cancel \
--header "Authorization: Bearer <token>"Provide a publicly reachable HTTPS URL. Ensure the host allows server‑side fetches (no login/cookies required) and isn't rate‑limited or blocking bots. Recommended limits: images ≤ 50 MB (~4K), videos ≤ 100 MB (~2–5 min @ 720p). Prefer stable or pre‑signed URLs for private assets.
{
"type": "object",
"title": "Input",
"required": [
"prompt",
"video_url",
"loras"
],
"properties": {
"prompt": {
"title": "Prompt",
"description": "",
"type": "string",
"default": "black-and-white video, a girl is playing a guitar, film grain"
},
"video_url": {
"title": "Video URL",
"description": "The URL of the video to generate the video from.",
"type": "string",
"default": "https://playgrounds-storage-public.runcomfy.net/tools/7312/media-files/input-1-1.mp4"
},
"image_url": {
"title": "Image",
"description": "An optional URL of an image to use as the first frame of the video.",
"type": "string"
},
"loras": {
"title": "LoRAs",
"description": "List of LoRAs to apply (maximum 10).",
"type": "array",
"default": [
{
"path": "Lightricks/LTX-2-19b-IC-LoRA-Detailer",
"scale": 1
}
],
"items": {
"path": {
"title": "LoRA Path",
"description": "URL, HuggingFace repo ID (owner/repo), or local path to LoRA weights.",
"type": "string",
"format": "str",
"default": ""
},
"scale": {
"title": "LoRA Scale",
"description": "Scale of the LoRA model.",
"type": "float",
"format": "float_slider_with_range",
"minimum": 0,
"maximum": 4,
"default": 1
}
},
"maxItems": 10,
"minItems": 0
},
"match_video_length": {
"title": "Match Video Length",
"description": "When enabled, the number of frames will be calculated based on the video duration and FPS. When disabled, use the specified num_frames.",
"type": "boolean",
"default": true
},
"num_frames": {
"title": "Number of Frames",
"description": "The number of frames to generate.",
"type": "integer",
"minimum": 9,
"maximum": 481,
"default": 121
},
"video_size": {
"title": "Video Size",
"description": "The size of the generated video.",
"type": "string",
"enum": [
"auto",
"square_hd",
"square",
"portrait_4_3",
"portrait_16_9",
"landscape_4_3",
"landscape_16_9"
],
"default": "auto"
},
"generate_audio": {
"title": "Generate Audio",
"description": "Whether to generate audio for the video.",
"type": "boolean",
"default": true
},
"use_multiscale": {
"title": "Use Multi-scale Generation",
"description": "Whether to use multi-scale generation. If true, the model generates a smaller-scale version first, then refines details at the target scale.",
"type": "boolean",
"default": true
},
"match_input_fps": {
"title": "Match Input FPS",
"description": "When true, match the output FPS to the input video's FPS instead of using the default target FPS.",
"type": "boolean",
"default": true
},
"fps": {
"title": "Frames Per Second",
"description": "The frames per second of the generated video.",
"type": "float",
"minimum": 1,
"maximum": 60,
"default": 25
},
"guidance_scale": {
"title": "Guidance Scale",
"description": "The guidance scale to use.",
"type": "float",
"minimum": 1,
"maximum": 10,
"default": 3
},
"num_inference_steps": {
"title": "Number of Inference Steps",
"description": "The number of inference steps to use.",
"type": "integer",
"minimum": 8,
"maximum": 50,
"default": 40
},
"camera_lora": {
"title": "Camera LoRA",
"description": "The camera LoRA to use for controlling camera movement.",
"type": "string",
"enum": [
"dolly_in",
"dolly_out",
"dolly_left",
"dolly_right",
"jib_up",
"jib_down",
"static",
"none"
],
"default": "none"
},
"camera_lora_scale": {
"title": "Camera LoRA Scale",
"description": "The scale of the camera LoRA to use for camera motion control.",
"type": "float",
"minimum": 0,
"maximum": 1,
"default": 1
},
"negative_prompt": {
"title": "Negative Prompt",
"description": "The negative prompt to guide the generation away from undesired qualities.",
"type": "string",
"default": "blurry, out of focus, overexposed, underexposed, low contrast, washed out colors, excessive noise, grainy texture, poor lighting, flickering, motion blur, distorted proportions, unnatural skin tones, deformed facial features, asymmetrical face, missing facial features, extra limbs, disfigured hands, wrong hand count, artifacts around text, inconsistent perspective, camera shake, incorrect depth of field, background too sharp, background clutter, distracting reflections, harsh shadows, inconsistent lighting direction, color banding, cartoonish rendering, 3D CGI look, unrealistic materials, uncanny valley effect, incorrect ethnicity, wrong gender, exaggerated expressions, wrong gaze direction, mismatched lip sync, silent or muted audio, distorted voice, robotic voice, echo, background noise, off-sync audio,incorrect dialogue, added dialogue, repetitive speech, jittery movement, awkward pauses, incorrect timing, unnatural transitions, inconsistent framing, tilted camera, flat lighting, inconsistent tone, cinematic oversaturation, stylized filters, or AI artifacts."
},
"seed": {
"title": "Seed",
"description": "",
"type": "integer",
"default": 0
},
"enable_prompt_expansion": {
"title": "Enable Prompt Expansion",
"description": "Whether to enable prompt expansion.",
"type": "boolean",
"default": false
},
"video_output_type": {
"title": "Video Output Type",
"description": "The output type of the generated video.",
"type": "string",
"enum": [
"X264 (.mp4)",
"VP9 (.webm)",
"PRORES4444 (.mov)",
"GIF (.gif)"
],
"default": "X264 (.mp4)"
},
"video_quality": {
"title": "Video Quality",
"description": "The quality of the generated video.",
"type": "string",
"enum": [
"low",
"medium",
"high",
"maximum"
],
"default": "high"
},
"preprocessor": {
"title": "Preprocessor",
"description": "The preprocessor to use for the video.",
"type": "string",
"enum": [
"depth",
"canny",
"pose",
"none"
],
"default": "none"
},
"ic_lora": {
"title": "IC-LoRA",
"description": "The type of IC-LoRA to load.",
"type": "string",
"enum": [
"match_preprocessor",
"canny",
"depth",
"pose",
"detailer",
"none"
],
"default": "match_preprocessor"
},
"ic_lora_scale": {
"title": "IC-LoRA Scale",
"description": "The scale of the IC-LoRA to use.",
"type": "float",
"minimum": 0,
"maximum": 1,
"default": 1
},
"video_strength": {
"title": "Video Strength",
"description": "Video conditioning strength. Lower values represent more freedom given to the model to change the video content.",
"type": "float",
"minimum": 0,
"maximum": 1,
"default": 1
}
}
}

The output schema below describes the payload returned for a completed request:

{
"output": {
"type": "object",
"properties": {
"image": {
"type": "string",
"format": "uri",
"description": "single image URL"
},
"video": {
"type": "string",
"format": "uri",
"description": "single video URL"
},
"images": {
"type": "array",
"description": "multiple image URLs",
"items": { "type": "string", "format": "uri" }
},
"videos": {
"type": "array",
"description": "multiple video URLs",
"items": { "type": "string", "format": "uri" }
}
}
}
}
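A fuller request that exercises several of the optional inputs; every field name and enum value below comes from the input schema above, while the specific values are illustrative choices:

curl --request POST \
--url https://model-api.runcomfy.net/v1/models/ltx/ltx-2-19b/video-to-video/lora \
--header "Content-Type: application/json" \
--header "Authorization: Bearer $YOUR_API_TOKEN" \
--data '{
"prompt": "black-and-white video, a girl is playing a guitar, film grain",
"video_url": "https://playgrounds-storage-public.runcomfy.net/tools/7312/media-files/input-1-1.mp4",
"loras": [{"path": "Lightricks/LTX-2-19b-IC-LoRA-Detailer", "scale": 1}],
"preprocessor": "depth",
"ic_lora": "match_preprocessor",
"ic_lora_scale": 1,
"video_size": "landscape_16_9",
"video_strength": 0.8,
"guidance_scale": 3,
"num_inference_steps": 40,
"seed": 42,
"video_output_type": "X264 (.mp4)",
"video_quality": "high"
}'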