wan-ai/wan-2-2/lora/text-to-image
Create cinematic images with LoRA-based style control, adjustable aspect ratios, inference steps, formats, and reproducible seeds.
Table of contents
1. Get started
2. Authentication
3. API reference
4. File inputs
5. Schema
1. Get started
Use RunComfy's API to run wan-ai/wan-2-2/lora/text-to-image. For accepted inputs and outputs, see the model's schema.
curl --request POST \
--url https://model-api.runcomfy.net/v1/models/wan-ai/wan-2-2/lora/text-to-image \
--header "Content-Type: application/json" \
--header "Authorization: Bearer <token>" \
--data '{
"prompt": "Cinematic portrait of a woman in a rain soaked city at night, practical lighting, anamorphic bokeh, realistic skin texture",
"lora_path": "https://huggingface.co/Instara/instagirlmix-wan-2.2/blob/main/WAN2.2_HighNoise_InstagirlMix_V1.safetensors"
}'
2. Authentication
Set the YOUR_API_TOKEN environment variable with your API key (manage keys in your Profile) and include it on every request as a Bearer token via the Authorization header: Authorization: Bearer $YOUR_API_TOKEN.
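As a minimal sketch in Python, the token can be read from the environment once and reused on every call; the header dict below is the same one attached to each request in the next section.

import os

# Read the key exported as YOUR_API_TOKEN (manage keys in your Profile).
API_TOKEN = os.environ["YOUR_API_TOKEN"]

# Reuse this header on every request to the model API.
HEADERS = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_TOKEN}",
}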
3. API reference
Submit a request
Submit an asynchronous generation job and immediately receive a request_id plus URLs to check status, fetch results, and cancel.
curl --request POST \
--url https://model-api.runcomfy.net/v1/models/wan-ai/wan-2-2/lora/text-to-image \
--header "Content-Type: application/json" \
--header "Authorization: Bearer <token>" \
--data '{
"prompt": "Cinematic portrait of a woman in a rain soaked city at night, practical lighting, anamorphic bokeh, realistic skin texture",
"lora_path": "https://huggingface.co/Instara/instagirlmix-wan-2.2/blob/main/WAN2.2_HighNoise_InstagirlMix_V1.safetensors"
}'
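The same submission from Python might look like the sketch below; it assumes the requests library and that the response JSON exposes the request ID under a request_id key, which should be verified against the live response.

import os
import requests

API_URL = "https://model-api.runcomfy.net/v1/models/wan-ai/wan-2-2/lora/text-to-image"
HEADERS = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ['YOUR_API_TOKEN']}",
}

payload = {
    "prompt": "Cinematic portrait of a woman in a rain soaked city at night, "
              "practical lighting, anamorphic bokeh, realistic skin texture",
    "lora_path": "https://huggingface.co/Instara/instagirlmix-wan-2.2/blob/main/"
                 "WAN2.2_HighNoise_InstagirlMix_V1.safetensors",
}

resp = requests.post(API_URL, headers=HEADERS, json=payload, timeout=60)
resp.raise_for_status()
request_id = resp.json()["request_id"]  # field name assumed from the description above
print("Submitted request:", request_id)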
Monitor request status
Fetch the current state for a request_id ("in_queue", "in_progress", "completed", or "cancelled").
curl --request GET \
--url https://model-api.runcomfy.net/v1/requests/{request_id}/status \
--header "Authorization: Bearer <token>"Retrieve request results
Retrieve request results
Retrieve the final outputs and metadata for the given request_id; if the job is not complete, the response returns the current state so you can continue polling.
curl --request GET \
--url https://model-api.runcomfy.net/v1/requests/{request_id}/result \
--header "Authorization: Bearer <token>"Cancel a request
Cancel a request
Cancel a queued job by its request_id; in-progress jobs cannot be cancelled.
curl --request POST \
--url https://model-api.runcomfy.net/v1/requests/{request_id}/cancel \
--header "Authorization: Bearer <token>"4. File inputs
4. File inputs
Hosted file (URL)
Provide a publicly reachable HTTPS URL. Ensure the host allows server‑side fetches (no login/cookies required) and isn't rate‑limited or blocking bots. Recommended limits: images ≤ 50 MB (~4K), videos ≤ 100 MB (~2–5 min @ 720p). Prefer stable or pre‑signed URLs for private assets.
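For private assets, one common option is an S3 pre-signed URL. The sketch below uses boto3 with a hypothetical bucket and key; any equivalent mechanism that yields a publicly fetchable, time-limited HTTPS URL works the same way.

import boto3

s3 = boto3.client("s3")

# Hypothetical bucket/key; the URL stays valid for one hour.
lora_url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-private-bucket", "Key": "loras/my_style.safetensors"},
    ExpiresIn=3600,
)
# Pass lora_url as the lora_path value in the request body.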
5. Schema
Input schema
{
"type": "object",
"title": "Input",
"required": [
"prompt",
"lora_path"
],
"properties": {
"prompt": {
"title": "Prompt",
"description": "",
"type": "string",
"default": "Cinematic portrait of a woman in a rain soaked city at night, practical lighting, anamorphic bokeh, realistic skin texture"
},
"negative_prompt": {
"title": "Negative Prompt",
"description": "",
"type": "string",
"default": ""
},
"lora_path": {
"title": "Loras Path",
"description": "URL or the path to the LoRA weights.",
"type": "string",
"default": "https://huggingface.co/Instara/instagirlmix-wan-2.2/blob/main/WAN2.2_HighNoise_InstagirlMix_V1.safetensors"
},
"lora_scale": {
"title": "Scale for Lora Path",
"description": "The scale of the LoRA weight. This is used to scale the LoRA weight before merging it with the base model.",
"type": "float",
"default": 1,
"minimum": 1,
"maximum": 4
},
"lora_transformer": {
"title": "Transformer for Lora Path",
"description": "Specifies the transformer to load the lora weight into. 'high' loads into the high-noise transformer, 'low' loads it into the low-noise transformer, while 'both' loads the LoRA into both transformers.",
"type": "string",
"enum": [
"high",
"low",
"both"
],
"default": "both"
},
"image_size": {
"title": "Aspect Ratio (W:H)",
"description": "",
"type": "string",
"enum": [
"square_hd",
"square",
"portrait_4_3",
"portrait_16_9",
"landscape_4_3",
"landscape_16_9",
"Custom"
],
"default": "square_hd"
},
"image-format": {
"title": "Image Format",
"description": "",
"type": "string",
"enum": [
"png",
"jpeg"
],
"default": "jpeg"
},
"num_inference_steps": {
"title": "Number of Inference Steps",
"description": "",
"type": "integer",
"default": 27,
"minimum": 2,
"maximum": 40
},
"seed": {
"title": "Seed",
"description": "",
"type": "integer",
"maximum": 99999,
"minimum": 10000,
"default": 74965
}
}
}
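For illustration, a request body that exercises every optional field alongside the two required ones might look like this; the values are arbitrary but stay within the documented ranges.

payload = {
    "prompt": "Cinematic portrait of a woman in a rain soaked city at night, "
              "practical lighting, anamorphic bokeh, realistic skin texture",
    "negative_prompt": "blurry, low quality, watermark",
    "lora_path": "https://huggingface.co/Instara/instagirlmix-wan-2.2/blob/main/"
                 "WAN2.2_HighNoise_InstagirlMix_V1.safetensors",
    "lora_scale": 1.0,               # 1-4 per the schema
    "lora_transformer": "both",      # "high", "low", or "both"
    "image_size": "landscape_16_9",  # one of the enum values above
    "image-format": "png",           # note the hyphenated key
    "num_inference_steps": 27,       # 2-40
    "seed": 12345,                   # 10000-99999; fix it for reproducible outputs
}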
Output schema
{
"output": {
"type": "object",
"properties": {
"image": {
"type": "string",
"format": "uri",
"description": "single image URL"
},
"video": {
"type": "string",
"format": "uri",
"description": "single video URL"
},
"images": {
"type": "array",
"description": "multiple image URLs",
"items": { "type": "string", "format": "uri" }
},
"videos": {
"type": "array",
"description": "multiple video URLs",
"items": { "type": "string", "format": "uri" }
}
}
}
}
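Since a result may expose a single URL or an array, a small helper that normalizes both shapes is convenient; this sketch assumes the fields sit under "output" as shown above.

def extract_image_urls(result: dict) -> list[str]:
    """Return all image URLs, whether the result used 'image' or 'images'."""
    output = result.get("output", {})
    urls = []
    if output.get("image"):
        urls.append(output["image"])
    urls.extend(output.get("images") or [])
    return urls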