wan-ai/wan-2-2/lora/image-to-video
Animate images into cinematic videos with LoRA style control, adjustable frames, frame rate, resolution, aspect ratio, and seed.
Table of contents
1. Get started
Use RunComfy's API to run wan-ai/wan-2-2/lora/image-to-video. For accepted inputs and outputs, see the model's schema.
curl --request POST \
--url https://model-api.runcomfy.net/v1/models/wan-ai/wan-2-2/lora/image-to-video \
--header "Content-Type: application/json" \
--header "Authorization: Bearer <token>" \
--data '{
"prompt": "The woman is walking slowly",
"image_url": "https://playgrounds-storage-public.runcomfy.net/tools/7002/media-files/usecase5-1-2-input.jpg",
"lora_path": "https://huggingface.co/neph1/hard_cut_wan_lora/blob/main/hard_cut_200_wan_i2v_high.safetensors"
}'
2. Authentication
Set the YOUR_API_TOKEN environment variable with your API key (manage keys in your Profile) and include it on every request as a Bearer token via the Authorization header: Authorization: Bearer $YOUR_API_TOKEN.
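In Python, the header can be assembled from that environment variable with a small helper (a sketch; the function name is illustrative):

```python
import os

def auth_headers(token=None):
    """Build the request headers; the token falls back to the
    YOUR_API_TOKEN environment variable described above."""
    token = token or os.environ.get("YOUR_API_TOKEN", "")
    return {
        "Content-Type": "application/json",
        "Authorization": "Bearer " + token,
    }
```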
3. API reference
Submit a request
Submit an asynchronous generation job and immediately receive a request_id plus URLs to check status, fetch results, and cancel.
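The same submission can be issued from Python's standard library. This sketch mirrors the curl example; the shape of the response beyond `request_id` is an assumption:

```python
import json
import urllib.request

API_URL = "https://model-api.runcomfy.net/v1/models/wan-ai/wan-2-2/lora/image-to-video"

payload = {
    "prompt": "The woman is walking slowly",
    "image_url": "https://playgrounds-storage-public.runcomfy.net/tools/7002/media-files/usecase5-1-2-input.jpg",
    "lora_path": "https://huggingface.co/neph1/hard_cut_wan_lora/blob/main/hard_cut_200_wan_i2v_high.safetensors",
}

def submit(payload, token):
    """POST the job; the JSON response carries a request_id plus URLs
    for status, results, and cancellation."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + token,
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```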
curl --request POST \
--url https://model-api.runcomfy.net/v1/models/wan-ai/wan-2-2/lora/image-to-video \
--header "Content-Type: application/json" \
--header "Authorization: Bearer <token>" \
--data '{
"prompt": "The woman is walking slowly",
"image_url": "https://playgrounds-storage-public.runcomfy.net/tools/7002/media-files/usecase5-1-2-input.jpg",
"lora_path": "https://huggingface.co/neph1/hard_cut_wan_lora/blob/main/hard_cut_200_wan_i2v_high.safetensors"
}'
Monitor request status
Fetch the current state for a request_id ("in_queue", "in_progress", "completed", or "cancelled").
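A polling loop over this endpoint might look like the sketch below. The `"status"` field name is an assumption based on the states listed above, and the fetcher is injected so the loop itself stays network-free:

```python
import time

TERMINAL_STATES = {"completed", "cancelled"}

def poll_status(fetch_status, interval=2.0, max_tries=30):
    """Call `fetch_status()` (which should GET the status endpoint and
    return the parsed JSON body) until a terminal state is reached or
    the retry budget runs out."""
    state = None
    for _ in range(max_tries):
        state = fetch_status().get("status")
        if state in TERMINAL_STATES:
            break
        time.sleep(interval)
    return state
```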
curl --request GET \
--url https://model-api.runcomfy.net/v1/requests/{request_id}/status \
--header "Authorization: Bearer <token>"
Retrieve request results
Retrieve the final outputs and metadata for the given request_id; if the job is not complete, the response returns the current state so you can continue polling.
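Two small helpers cover the client side of this: building the result URL for a `request_id`, and deciding from a response body whether to keep polling (the `"status"` field name is an assumption):

```python
def result_url(request_id):
    """Result endpoint for a given request_id."""
    return f"https://model-api.runcomfy.net/v1/requests/{request_id}/result"

def is_finished(body):
    """True once the response no longer reports a pending state, so the
    caller knows whether to stop polling and read the outputs."""
    return body.get("status") not in ("in_queue", "in_progress")
```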
curl --request GET \
--url https://model-api.runcomfy.net/v1/requests/{request_id}/result \
--header "Authorization: Bearer <token>"
Cancel a request
Cancel a queued job by request_id; in-progress jobs cannot be cancelled.
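Because only queued jobs are cancellable, a client can guard the call before issuing it (a sketch; the helper names are illustrative):

```python
def cancel_url(request_id):
    """Cancel endpoint for a given request_id."""
    return f"https://model-api.runcomfy.net/v1/requests/{request_id}/cancel"

def can_cancel(status):
    """Only jobs still in the queue may be cancelled; anything already
    running, finished, or cancelled cannot be."""
    return status == "in_queue"
```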
curl --request POST \
--url https://model-api.runcomfy.net/v1/requests/{request_id}/cancel \
--header "Authorization: Bearer <token>"
4. File inputs
Hosted file (URL)
Provide a publicly reachable HTTPS URL. Ensure the host allows server‑side fetches (no login/cookies required) and isn't rate‑limited or blocking bots. Recommended limits: images ≤ 50 MB (~4K), videos ≤ 100 MB (~2–5 min @ 720p). Prefer stable or pre‑signed URLs for private assets.
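A pre-flight check against these recommendations might look like the following sketch (the function name is illustrative; `content_length` would come from a HEAD request you issue yourself):

```python
from urllib.parse import urlparse

MAX_IMAGE_BYTES = 50 * 1024 * 1024   # recommended ~50 MB image limit
MAX_VIDEO_BYTES = 100 * 1024 * 1024  # recommended ~100 MB video limit

def acceptable_source(url, content_length, kind="image"):
    """True if the URL uses HTTPS and the reported size is within the
    recommended limit for its kind ('image' or 'video')."""
    limit = MAX_IMAGE_BYTES if kind == "image" else MAX_VIDEO_BYTES
    return urlparse(url).scheme == "https" and 0 < content_length <= limit
```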
5. Schema
Input schema
{
"type": "object",
"title": "Input",
"required": [
"prompt",
"image_url",
"lora_path"
],
"properties": {
"image_url": {
"title": "Image",
"description": "",
"type": "string",
"default": "https://playgrounds-storage-public.runcomfy.net/tools/7002/media-files/usecase5-1-2-input.jpg"
},
"prompt": {
"title": "Prompt",
"description": "",
"type": "string",
"default": "The woman is walking slowly"
},
"negative_prompt": {
"title": "Negative Prompt",
"description": "",
"type": "string",
"default": ""
},
"lora_path": {
"title": "Loras Path",
"description": "URL or the path to the LoRA weights.",
"type": "string",
"default": "https://huggingface.co/neph1/hard_cut_wan_lora/blob/main/hard_cut_200_wan_i2v_high.safetensors"
},
"lora_scale": {
"title": "Scale for Lora Path",
"description": "The scale of the LoRA weight. This is used to scale the LoRA weight before merging it with the base model.",
"type": "float",
"default": 1,
"minimum": 1,
"maximum": 4
},
"lora_transformer": {
"title": "Transformer for Lora Path",
"description": "Specifies the transformer to load the lora weight into. 'high' loads into the high-noise transformer, 'low' loads it into the low-noise transformer, while 'both' loads the LoRA into both transformers.",
"type": "string",
"enum": [
"high",
"low",
"both"
],
"default": "both"
},
"num_frames": {
"title": "Number of Frames",
"description": "",
"type": "integer",
"default": 81,
"minimum": 17,
"maximum": 161
},
"frames_per_second": {
"title": "Frames Per Second",
"description": "",
"type": "integer",
"default": 16,
"minimum": 4,
"maximum": 60
},
"resolution": {
"title": "Resolution",
"description": "",
"type": "string",
"enum": [
"480p",
"580p",
"720p"
],
"default": "480p"
},
"aspect_ratio": {
"title": "Aspect Ratio (W:H)",
"description": "",
"type": "string",
"enum": [
"16:9",
"9:16",
"1:1",
"auto"
],
"default": "auto"
},
"num_inference_steps": {
"title": "Number of Inference Steps",
"description": "",
"type": "integer",
"default": 27,
"minimum": 2,
"maximum": 40
},
"seed": {
"title": "Seed",
"description": "",
"type": "integer",
"maximum": 99999,
"minimum": 10000,
"default": 15775
}
}
}
Output schema
{
"output": {
"type": "object",
"properties": {
"image": {
"type": "string",
"format": "uri",
"description": "single image URL"
},
"video": {
"type": "string",
"format": "uri",
"description": "single video URL"
},
"images": {
"type": "array",
"description": "multiple image URLs",
"items": { "type": "string", "format": "uri" }
},
"videos": {
"type": "array",
"description": "multiple video URLs",
"items": { "type": "string", "format": "uri" }
}
}
}
}
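Given that output schema, a consumer can pull out the first media URL regardless of which field the response uses (a sketch; which fields are populated for this model is an assumption):

```python
def first_media_url(output):
    """Return the first URL found in an output object, checking the
    single-URL fields before the list-valued ones from the schema."""
    for key in ("video", "image"):
        if output.get(key):
            return output[key]
    for key in ("videos", "images"):
        if output.get(key):
            return output[key][0]
    return None
```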