
Z-Image LoRA Inference | AI Toolkit ComfyUI

Workflow Name: RunComfy/Z-Image-Base-LoRA-ComfyUI-Inference
Workflow ID: 0000...1359
Deploy AI Toolkit-trained Z-Image LoRAs inside ComfyUI with pipeline-level accuracy. The RCZimage node encapsulates the Tongyi-MAI/Z-Image inference pipeline—including FlowMatchEulerDiscrete scheduling and internal LoRA injection—so generation stays consistent with AI Toolkit preview behavior rather than drifting through a generic sampling setup. Load your adapter from a local file in models/loras, a direct .safetensors URL, or a Hugging Face path, and set lora_scale to control adapter strength. For the closest match to your training previews, mirror the resolution, step count, guidance scale, and seed from your sample config. The workflow outputs standard images through SaveImage for straightforward comparison.

Z-Image Base LoRA ComfyUI Inference: training-aligned generation with AI Toolkit LoRAs

This production-ready RunComfy workflow lets you run AI Toolkit–trained Z-Image LoRA adapters in ComfyUI with training-matched results. Built around RC Z-Image (RCZimage)—a pipeline-level custom node open-sourced by RunComfy (source)—the workflow wraps the Tongyi-MAI/Z-Image inference pipeline rather than relying on a generic sampler graph. Your adapter is injected via lora_path and lora_scale inside that pipeline, keeping LoRA application consistent with how AI Toolkit produces training previews.

Why Z-Image Base LoRA inference often looks different in ComfyUI

AI Toolkit training previews are rendered by a model-specific inference pipeline—scheduler configuration, conditioning flow, and LoRA injection all happen inside that pipeline. A standard ComfyUI sampler graph assembles these pieces differently, so even identical prompts, seeds, and step counts can yield noticeably different output. The gap is not caused by a single wrong parameter; it is a pipeline-level mismatch. RCZimage recovers training-aligned behavior by wrapping the Z-Image pipeline directly and applying your LoRA within it. Implementation reference: `src/pipelines/z_image.py`.
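
For intuition, here is a rough sketch of what "pipeline-level" inference with internal LoRA injection looks like, written against a diffusers-style API. This is an illustration only; the node's actual logic lives in `src/pipelines/z_image.py`, and the pipeline class, call signature, and file paths below are assumptions, not the node's API.

```python
# Illustrative sketch only (diffusers-style API, not the RCZimage source):
# scheduler configuration and LoRA injection both happen inside the
# pipeline object, which is what keeps output aligned with training previews.
import torch
from diffusers import DiffusionPipeline, FlowMatchEulerDiscreteScheduler

pipe = DiffusionPipeline.from_pretrained(
    "Tongyi-MAI/Z-Image",              # base model; gated repos need an HF token
    torch_dtype=torch.bfloat16,
).to("cuda")

# The scheduler is set on the pipeline itself, not assembled in a sampler graph.
pipe.scheduler = FlowMatchEulerDiscreteScheduler.from_config(pipe.scheduler.config)

# The LoRA is injected into the pipeline, mirroring lora_path / lora_scale.
pipe.load_lora_weights("models/loras/my_zimage_lora.safetensors")  # hypothetical path
pipe.fuse_lora(lora_scale=1.0)         # adapter strength, like the node's lora_scale

image = pipe(
    prompt="your trigger word, a portrait photo",
    num_inference_steps=30,            # this workflow's Z-Image Base default
    guidance_scale=4.0,                # default per the Generate node
    generator=torch.Generator("cuda").manual_seed(42),  # fixed seed for reproducibility
).images[0]
image.save("preview_match.png")
```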

How to use the Z-Image Base LoRA ComfyUI Inference workflow

Step 1: Get the LoRA path and load it into the workflow (2 options)

Option A — RunComfy training result → download to local ComfyUI:

  1. Go to Trainer → LoRA Assets
  2. Find the LoRA you want to use
  3. Click the ⋮ (three-dot) menu on the right → select Copy LoRA Link
  4. In the ComfyUI workflow page, paste the copied link into the Download input field at the top-right corner of the UI
  5. Before clicking Download, make sure the download target folder is set to ComfyUI → models → loras
  6. Click Download — this saves the LoRA file into the correct models/loras directory
  7. After the download finishes, refresh the page
  8. The LoRA now appears in the LoRA select dropdown in the workflow — select it
[Screenshot] Z-Image Base: copy LoRA link in Trainer UI

Option B — Direct LoRA URL (overrides Option A):

  1. Paste the direct .safetensors download URL into the path / url input field of the LoRA node
  2. When a URL is provided here, it overrides Option A — the workflow loads the LoRA directly from the URL at runtime
  3. No local download or file placement is required

Tip: the URL must point to the actual .safetensors file, not a webpage or redirect.
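
If you'd rather script Option A than click through the UI, here is a minimal sketch of what the Download step does, assuming a public direct-download URL and a default ComfyUI folder layout (the URL below is hypothetical):

```python
# Minimal sketch: fetch a LoRA .safetensors into ComfyUI's models/loras
# so it shows up in the workflow's LoRA dropdown after a page refresh.
# Adjust COMFYUI_ROOT for your install; the URL is a placeholder.
import urllib.request
from pathlib import Path

COMFYUI_ROOT = Path("ComfyUI")                                # your install path
LORA_URL = "https://example.com/my_zimage_lora.safetensors"   # hypothetical URL

dest = COMFYUI_ROOT / "models" / "loras" / Path(LORA_URL).name
dest.parent.mkdir(parents=True, exist_ok=True)
urllib.request.urlretrieve(LORA_URL, str(dest))
print(f"Saved LoRA to {dest}")
```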

[Screenshot] Z-Image Base: paste LoRA URL into path/url on the LoRA node

Step 2: Match inference parameters with your training sample settings

Set lora_scale on the LoRA node — start at the same strength you used during training previews, then adjust as needed.

The remaining parameters live on the Generate node:

  • prompt — your text prompt; include any trigger words you used during training
  • negative_prompt — leave empty unless your training YAML included negatives
  • width / height — output resolution; match your preview size for direct comparison (both must be multiples of 32)
  • sample_steps — number of inference steps; Z-Image base defaults to 30 (use the same count from your preview config)
  • guidance_scale — CFG strength; default is 4.0 (mirror your training preview value first)
  • seed — fix a seed to reproduce specific outputs; change it to explore variations
  • seed_mode — choose fixed or randomize
  • hf_token — Hugging Face token; required only if the base model or LoRA repo is gated/private

Training alignment tip: if you customized any sampling values during training, copy those exact values into the corresponding fields. If you trained on RunComfy, open Trainer → LoRA Assets → Config to see the resolved YAML and copy preview/sample settings into the node.
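
If you want to script that lookup, here is a small sketch that reads the sample block out of an AI Toolkit-style config. The `config.process[0].sample` layout is the common AI Toolkit convention but is an assumption here; key names may differ across versions, so treat this as a starting point rather than a guaranteed schema.

```python
# Sketch: pull preview/sample settings out of an AI Toolkit training config
# so the Generate node fields can mirror them exactly. File name is a
# placeholder; use the resolved YAML from Trainer -> LoRA Assets -> Config.
import yaml

with open("my_training_config.yaml") as f:
    cfg = yaml.safe_load(f)

sample = cfg["config"]["process"][0]["sample"]   # assumed AI Toolkit layout
print("width/height :", sample.get("width"), "x", sample.get("height"))
print("sample_steps :", sample.get("sample_steps"))
print("guidance     :", sample.get("guidance_scale"))
print("seed         :", sample.get("seed"))
print("prompts      :", sample.get("prompts"))
```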

[Screenshot] Z-Image Base: preview/sample settings in LoRA Config screen

Step 3: Run Z-Image Base LoRA ComfyUI Inference

Click Queue/Run — the SaveImage node writes results to your ComfyUI output folder automatically.

Quick checklist:

  • ✅ LoRA is either: downloaded into ComfyUI/models/loras (Option A), or loaded via a direct .safetensors URL (Option B)
  • ✅ Page refreshed after local download (Option A only)
  • ✅ Inference parameters match training sample config (if customized)

If everything above is correct, the inference results here should closely match your training previews.

Troubleshooting Z-Image Base LoRA ComfyUI Inference

Most “training preview vs ComfyUI inference” gaps for Z-Image Base (Tongyi-MAI/Z-Image) come from pipeline-level differences (how the model is loaded, which defaults/scheduler are used, and where/how the LoRA is injected). For AI Toolkit–trained Z-Image Base LoRAs, the most reliable way to get back to training-aligned behavior in ComfyUI is to run generation through RCZimage (the RunComfy pipeline wrapper) and inject the LoRA via lora_path / lora_scale inside that pipeline.

(1) When using a Z-Image LoRA with ComfyUI, the message "lora key not loaded" appears.

Why this happens
This usually means your LoRA was trained against a different module/key layout than the one your current ComfyUI Z-Image loader expects. With Z-Image, the “same model name” can still involve different key conventions (e.g., original/diffusers-style vs Comfy-specific naming), and that’s enough to trigger “key not loaded”. To confirm which layout your file uses, list its tensor keys (see the diagnostic sketch after the fix steps).

How to fix (recommended)

  • Run inference through RCZimage (the workflow’s pipeline wrapper) and load your adapter via lora_path on the RCAITKLoRA / RCZimage path, instead of injecting it through a separate generic Z-Image LoRA loader.
  • Keep the workflow format-consistent: Z-Image Base LoRA trained with AI Toolkit → infer with the AI Toolkit-aligned RCZimage pipeline, so you don’t depend on ComfyUI-side key remapping/converters.
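
To check which key convention your adapter actually uses, you can list the tensor names stored in the .safetensors file. This is a diagnostic sketch only, not part of the workflow, and the file path is hypothetical:

```python
# Diagnostic sketch: list the tensor key names stored in a LoRA file.
# Different training stacks write different prefixes/layouts; a mismatch
# with what the loader expects is what produces "lora key not loaded".
from safetensors import safe_open

with safe_open("models/loras/my_zimage_lora.safetensors", framework="pt") as f:
    keys = list(f.keys())

print(f"{len(keys)} tensors")
for k in keys[:10]:      # the first few keys are enough to identify the convention
    print(" ", k)
```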

(2) Errors occur during the final VAE decode stage when using the Z-Image LoRA loader (model only).

Why this happens
Some users report that adding the Z-Image LoRA loader (model only) can cause major slowdowns and later failures at the final VAE decode stage, even when the default Z-Image workflow runs fine without the loader.

How to fix (user-confirmed)

  • Remove the Z-Image LoRA loader (model only) node and re-run the default Z-Image workflow path.
  • In this RunComfy workflow, the equivalent “safe baseline” is: use RCZimage + lora_path / lora_scale so LoRA application stays inside the pipeline, avoiding the problematic “model-only LoRA loader” path.

(3) Z-Image Comfy format doesn't match the original code

Why this happens
Z-Image in ComfyUI can involve a Comfy-specific format (including key naming differences from “original” conventions). If your LoRA was trained with AI Toolkit on one naming/layout convention, and you try to apply it in ComfyUI expecting another, you’ll see partial/failed application and “it runs but looks wrong” behavior.

How to fix (recommended)

  • Don’t mix formats when you’re trying to match training previews. Use RCZimage so inference runs the Z-Image pipeline in the same “family” AI Toolkit previews use, and inject the LoRA inside it via lora_path / lora_scale.
  • If you must use a Comfy-format Z-Image stack, ensure your LoRA is in the same format expected by that stack (otherwise keys won’t line up).

(4) Z-Image runs out of memory (OOM) when using a LoRA

Why this happens
Z-Image + LoRA can push VRAM over the edge depending on precision/quantization, resolution, and loader path. Some reports mention OOM on 12GB VRAM setups when combining LoRA with lower-precision modes.

How to fix (safe baseline)

  • Validate your baseline first: run Z-Image Base without LoRA at your target resolution.
  • Then add the LoRA via RCZimage (lora_path / lora_scale) and keep comparisons controlled (same width/height, sample_steps, guidance_scale, seed).
  • If you still hit OOM, reduce resolution first (Z-Image is sensitive to pixel count), then consider lowering sample_steps, and only then re-introduce higher settings after stability is confirmed. In RunComfy, you can also switch to a larger machine.
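
For background, this hedged sketch shows the usual VRAM levers in a diffusers-style setup. Inside this workflow the equivalent knobs are the Generate node's width/height and sample_steps; the offload call is an assumption about a diffusers-style pipeline, not an RCZimage option.

```python
# Background sketch (diffusers-style): the typical levers when Z-Image + LoRA
# runs out of VRAM. Model id and values are illustrative, not workflow settings.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Tongyi-MAI/Z-Image", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()      # keep only the active submodule on the GPU

image = pipe(
    prompt="baseline prompt, no LoRA yet",   # validate the baseline first
    width=768, height=768,                   # drop pixel count first; it dominates VRAM
    num_inference_steps=20,                  # lower step count second
    generator=torch.Generator().manual_seed(0),
).images[0]
```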

Run Z-Image Base LoRA ComfyUI Inference now

Open the RunComfy Z-Image Base LoRA ComfyUI Inference workflow, set your lora_path, and let RCZimage keep ComfyUI output aligned with your AI Toolkit training previews.
