Flux 2 LoRA Edit: Precision Image-to-Image Editing AI on playground and API | RunComfy
Transform and edit your images using the Black Forest Labs Flux 2 LoRA model to apply custom styles, characters, or details via LoRA adapters on RunComfy.
1. Get started
Use RunComfy's API to run blackforestlabs/flux-2/lora/edit. For accepted inputs and outputs, see the model's schema.
curl --request POST \
--url https://model-api.runcomfy.net/v1/models/blackforestlabs/flux-2/lora/edit \
--header "Content-Type: application/json" \
--header "Authorization: Bearer <token>" \
--data '{
"prompt": "<sks> back view shot",
"image_urls": [
"https://playgrounds-storage-public.runcomfy.net/tools/7242/media-files/input-1-1.png"
]
}'
2. Authentication
Set the YOUR_API_TOKEN environment variable to your API key (you can manage keys in your Profile) and include it on every request as a Bearer token in the Authorization header: Authorization: Bearer $YOUR_API_TOKEN.
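For example, in a POSIX shell (the key value below is a placeholder):
# Store the key once per session, then reference it in every request.
export YOUR_API_TOKEN="<your-api-key>"
curl --request GET \
--url https://model-api.runcomfy.net/v1/requests/{request_id}/status \
--header "Authorization: Bearer $YOUR_API_TOKEN"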
3. API reference
Submit a request
Submit an asynchronous generation job and immediately receive a request_id plus URLs to check status, fetch results, and cancel.
curl --request POST \
--url https://model-api.runcomfy.net/v1/models/blackforestlabs/flux-2/lora/edit \
--header "Content-Type: application/json" \
--header "Authorization: Bearer <token>" \
--data '{
"prompt": "<sks> back view shot",
"image_urls": [
"https://playgrounds-storage-public.runcomfy.net/tools/7242/media-files/input-1-1.png"
]
}'
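A successful submission returns immediately with a JSON body along the lines of the sketch below. The exact field names are an assumption to verify against a real response; the endpoint paths match the status, result, and cancel endpoints documented in this section.
{
"request_id": "<request_id>",
"status_url": "https://model-api.runcomfy.net/v1/requests/<request_id>/status",
"result_url": "https://model-api.runcomfy.net/v1/requests/<request_id>/result",
"cancel_url": "https://model-api.runcomfy.net/v1/requests/<request_id>/cancel"
}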
Monitor request status
Fetch the current state for a request_id ("in_queue", "in_progress", "completed", or "cancelled").
curl --request GET \
--url https://model-api.runcomfy.net/v1/requests/{request_id}/status \
--header "Authorization: Bearer <token>"
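A minimal polling loop in the shell could look like the sketch below. It assumes jq is installed, that REQUEST_ID holds the ID returned at submission, and that the status response exposes the state in a status field; verify that field name against a real response.
# Poll every 5 seconds until the job reaches a terminal state.
while true; do
  STATUS=$(curl -s \
    --url "https://model-api.runcomfy.net/v1/requests/$REQUEST_ID/status" \
    --header "Authorization: Bearer $YOUR_API_TOKEN" | jq -r '.status')
  echo "state: $STATUS"
  # "completed" and "cancelled" are terminal; "in_queue" and "in_progress" are not.
  [ "$STATUS" = "completed" ] || [ "$STATUS" = "cancelled" ] && break
  sleep 5
done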
Retrieve request results
Retrieve the final outputs and metadata for the given request_id; if the job is not complete, the response returns the current state so you can continue polling.
curl --request GET \
--url https://model-api.runcomfy.net/v1/requests/{request_id}/result \
--header "Authorization: Bearer <token>"
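Once the job is completed, fetch the result and download the output. The sketch below assumes the single-image response shape from the output schema in section 5 (output.image) and that jq is available.
# Extract the generated image URL from the completed result.
IMAGE_URL=$(curl -s \
  --url "https://model-api.runcomfy.net/v1/requests/$REQUEST_ID/result" \
  --header "Authorization: Bearer $YOUR_API_TOKEN" | jq -r '.output.image')
# Save it locally; match the extension to your output_format.
curl -s -o edited.png "$IMAGE_URL"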
Cancel a request
Cancel a queued job by request_id; in-progress jobs cannot be cancelled.
curl --request POST \
--url https://model-api.runcomfy.net/v1/requests/{request_id}/cancel \
--header "Authorization: Bearer <token>"
4. File inputs
Hosted file (URL)
Provide a publicly reachable HTTPS URL. Ensure the host allows server‑side fetches (no login/cookies required) and isn't rate‑limited or blocking bots. Recommended limits: images ≤ 50 MB (~4K), videos ≤ 100 MB (~2–5 min @ 720p). Prefer stable or pre‑signed URLs for private assets.
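To sanity-check that a hosted file is fetchable server-side, issue a HEAD request and confirm a 200 status with a sensible Content-Type and no login redirect:
# HEAD request: verifies reachability without downloading the file.
curl -sI "https://playgrounds-storage-public.runcomfy.net/tools/7242/media-files/input-1-1.png" | head -n 5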
5. Schema
Input schema
{
"type": "object",
"title": "Input",
"required": [
"prompt",
"image_urls"
],
"properties": {
"prompt": {
"title": "Prompt",
"description": "",
"type": "string",
"default": "<sks> back view shot"
},
"image_urls": {
"title": "Image URLs",
"description": "URLs of up to 3 images for editing. If more are provided, only the first 3 will be used.",
"type": "array",
"items": {
"type": "string",
"format": "image_uri"
},
"maxItems": 3,
"minItems": 0,
"default": [
"https://playgrounds-storage-public.runcomfy.net/tools/7242/media-files/input-1-1.png"
]
},
"loras": {
"title": "LoRAs",
"description": "List of LoRA weights to apply (maximum 3). Each LoRA can be a URL, HuggingFace repo ID, or local path.",
"type": "array",
"default": [
{
"path": "lovis93/Flux-2-Multi-Angles-LoRA-v2",
"scale": 1
}
],
"items": {
"path": {
"title": "LoRA Path",
"description": "Path to the LoRA model.",
"type": "string",
"format": "str",
"default": "URL, HuggingFace repo ID (owner/repo), or local path to LoRA weights."
},
"scale": {
"title": "LoRA Scale",
"description": "Scale factor for LoRA application (0.0 to 4.0). ",
"type": "float",
"format": "float_slider_with_range",
"minimum": 0,
"maximum": 4,
"default": 1
}
},
"maxItems": 3,
"minItems": 0
},
"guidance_scale": {
"title": "Guidance Scale",
"description": "How closely the model should follow the prompt.",
"type": "float",
"default": 2.5,
"minimum": 0,
"maximum": 20
},
"seed": {
"title": "Seed",
"description": "",
"type": "integer",
"default": null
},
"num_inference_steps": {
"title": "Number of Inference Steps",
"description": "The number of inference steps to perform.",
"type": "integer",
"default": 28,
"minimum": 4,
"maximum": 50
},
"image_size": {
"title": "Image Size",
"description": "Choose a preset size or select Custom to specify width and height between 512 and 2048 pixels.",
"type": "string",
"enum": [
"square_hd",
"square",
"portrait_4_3",
"portrait_16_9",
"landscape_4_3",
"landscape_16_9"
],
"default": "square_hd"
},
"output_format": {
"title": "Output Format",
"description": "The format of the generated image.",
"type": "string",
"enum": [
"jpeg",
"png",
"webp"
],
"default": "png"
}
}
}
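Putting the input schema together, a fuller request body might look like the sketch below; every field is defined above and the values are illustrative.
{
"prompt": "<sks> back view shot",
"image_urls": [
"https://playgrounds-storage-public.runcomfy.net/tools/7242/media-files/input-1-1.png"
],
"loras": [
{ "path": "lovis93/Flux-2-Multi-Angles-LoRA-v2", "scale": 1.0 }
],
"guidance_scale": 2.5,
"num_inference_steps": 28,
"image_size": "square_hd",
"output_format": "png",
"seed": 42
}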
Output schema
{
"output": {
"type": "object",
"properties": {
"image": {
"type": "string",
"format": "uri",
"description": "single image URL"
},
"video": {
"type": "string",
"format": "uri",
"description": "single video URL"
},
"images": {
"type": "array",
"description": "multiple image URLs",
"items": { "type": "string", "format": "uri" }
},
"videos": {
"type": "array",
"description": "multiple video URLs",
"items": { "type": "string", "format": "uri" }
}
}
}
}
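Because a completed result may carry a single image or a list, a defensive extraction handles both shapes. The jq expression below is a sketch: it prefers output.image and falls back to the first entry of output.images.
# Print whichever image URL the response provides.
curl -s \
  --url "https://model-api.runcomfy.net/v1/requests/$REQUEST_ID/result" \
  --header "Authorization: Bearer $YOUR_API_TOKEN" \
  | jq -r '.output.image // .output.images[0]'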