AI Toolkit LoRA Training Guides

AI Toolkit Inference in ComfyUI: Get Results That Match Your Training Samples (Training Preview‑Match Workflows)

ComfyUI output often looks different from your training samples because ComfyUI's inference pipeline isn't the one AI Toolkit uses to render those samples (preview images/videos). Choose the workflow for your base model, load your AI Toolkit‑trained LoRA, and generate preview‑matching results in ComfyUI.

Train Diffusion Models with Ostris AI Toolkit

If you trained a LoRA with Ostris AI Toolkit and ran into issues like:

  • “AI Toolkit LoRA looks different in ComfyUI”
  • “AI Toolkit preview vs ComfyUI inference mismatch”
  • “Why do my training samples look better than ComfyUI?”
  • “AI Toolkit-trained LoRA not working in ComfyUI”

…this guide is for you.

This guide is specifically about running AI Toolkit‑trained LoRAs inside ComfyUI, while keeping results consistent with what you saw in AI Toolkit training samples / previews.

Looking for non‑ComfyUI inference (Playground / API)?
See: AI Toolkit Inference: Get Results That Match Your Training Samples (Playground/API)

The real reason AI Toolkit samples don’t match ComfyUI

Your ComfyUI graph is not the same inference pipeline that AI Toolkit uses to render training samples.

And because the mismatch is at the pipeline level, simply copying visible parameters (prompt, seed, steps, CFG/guidance, resolution) usually does not reproduce the same look.

What “pipeline mismatch” typically means in practice:

  • Different sampler / scheduler semantics

    Many modern AI Toolkit targets rely on flow‑matching style sampling (and model‑specific scheduler behavior). A “similar” sampler in ComfyUI is often not equivalent.

  • Hidden model-family knobs that don’t show up as “steps/CFG”

    Example: model-family sampling patches / shift parameters (common in newer architectures).

  • Different LoRA application behavior

    “Load LoRA” can mean different things depending on the architecture (adapter vs patched weights; where the injection happens).

  • Key / module name mismatches

    The LoRA can “load” but silently not bind to the expected modules, so its effect is reduced or missing.

  • Base model / variant drift

    Even small differences (dev vs turbo, 4B vs 14B, de‑turbo vs turbo, quantization differences) can change the outcome dramatically.

Bottom line: If you care about matching AI Toolkit previews, you need to match the full inference pipeline, not just the settings you can see.
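The "hidden knobs" point above can be made concrete. Flow‑matching schedulers in the SD3/FLUX family remap their sigma schedule with a shift parameter, so two graphs with identical steps and CFG can still denoise along different trajectories. The sketch below uses the standard time‑shift formula from that scheduler family; the schedule itself is a simplified linear ramp for illustration, not any specific model's actual schedule.

```python
# Illustration: the "shift" parameter silently remaps the sigma schedule
# used by SD3/FLUX-style flow-matching schedulers. Same step count, same
# endpoints, different trajectory in between -> different outputs.

def shift_sigma(sigma: float, shift: float) -> float:
    """Time-shifted sigma, as used by SD3/FLUX-style flow-matching schedulers."""
    return shift * sigma / (1 + (shift - 1) * sigma)

def sigma_schedule(steps: int, shift: float = 1.0) -> list[float]:
    """Simplified linear 1 -> 0 schedule, remapped by `shift`."""
    raw = [1 - i / steps for i in range(steps + 1)]  # 1.0 ... 0.0
    return [shift_sigma(s, shift) for s in raw]

if __name__ == "__main__":
    a = sigma_schedule(steps=8, shift=1.0)  # no shift
    b = sigma_schedule(steps=8, shift=3.0)  # shifted
    print(a[4], b[4])  # midpoints differ even though steps/endpoints match
```

If a graph hardcodes the wrong shift (or omits the model‑family sampling patch entirely), no amount of prompt/seed/CFG matching will recover the preview look.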


What RunComfy did to fix this (for ComfyUI users)

RunComfy took the inference pipeline used for AI Toolkit training samples (the “Samples / Preview” images/videos) and matched ComfyUI inference to it — per base model.

Then we packaged that aligned logic into:

  • ComfyUI nodes (so the “AI Toolkit preview pipeline behavior” is available inside a graph)
  • Training Preview‑Match workflows (ready‑to‑run ComfyUI graphs, one per base model)

The result:

  • One base model = one ComfyUI workflow: open the workflow that matches your base model
  • Drop in your AI Toolkit‑trained LoRA (.safetensors)
  • Generate images/videos with results that match your AI Toolkit training samples / previews far more reliably than hand‑tuning a random ComfyUI graph

Training Preview‑Match workflow library (one base model = one workflow)

Pick the workflow that matches the base model you trained on.

First release supports the models below. More workflows will be added until the library fully covers all AI Toolkit base models.


Troubleshooting (AI Toolkit LoRA in ComfyUI)

Below are the most common “AI Toolkit → ComfyUI” problems and the fastest fixes.

“My AI Toolkit training samples look great, but ComfyUI inference looks worse / different.”

Cause: You are not using the AI Toolkit training‑preview pipeline in ComfyUI.

Fix: Use the RunComfy Training Preview‑Match workflow for your base model (one base model = one workflow).

Go to: Training Preview‑Match workflow library


“I matched prompt/seed/steps/CFG, but it still doesn’t match AI Toolkit preview.”

Cause: Parameter matching isn’t enough when the sampler/scheduler + pipeline logic differs.

Fix: Stop trying to “tune into a match” manually. Use the base‑model Training Preview‑Match workflow.

Go to: Training Preview‑Match workflow library


“My AI Toolkit LoRA loads in ComfyUI, but it barely changes anything.”

Cause: Most often one of these:

  • wrong base model workflow (architecture/variant mismatch)
  • LoRA key/module mapping mismatch (LoRA isn’t applied to the modules you think it is)
  • LoRA weight too low to be visible

Fix: Use the Training Preview‑Match workflow for the base model you trained on, then retest at a LoRA strength of 0.9–1.0.

Go to: Training Preview‑Match workflow library
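If you want to verify the key/module mismatch case yourself, the sketch below shows the idea: recover the target module names from a LoRA file's keys and check how many actually exist in the model. All key names here are hypothetical examples; real files use conventions like kohya‑style `lora_unet_...` keys or diffusers‑style `transformer....` keys, and in practice you would read the keys with `safetensors.safe_open(...).keys()`.

```python
# Diagnostic sketch (hypothetical key names): a LoRA can "load" without
# error yet bind to nothing if its naming convention differs from what
# the loader expects.

def lora_base_modules(lora_keys: set[str]) -> set[str]:
    """Strip LoRA-specific suffixes to recover the target module names."""
    suffixes = (".lora_down.weight", ".lora_up.weight", ".alpha",
                ".lora_A.weight", ".lora_B.weight")
    modules = set()
    for key in lora_keys:
        for suf in suffixes:
            if key.endswith(suf):
                modules.add(key[: -len(suf)])
                break
    return modules

def binding_report(lora_keys: set[str], model_modules: set[str]) -> dict:
    """Count how many LoRA targets actually exist in the model."""
    targets = lora_base_modules(lora_keys)
    bound = targets & model_modules
    return {"targets": len(targets), "bound": len(bound),
            "unbound": sorted(targets - model_modules)}
```

A report where `bound` is far below `targets` means the LoRA is silently not applied to the modules you think it is, which matches the "barely changes anything" symptom above.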


“Flux/FlowMatch previews look amazing in AI Toolkit, but ComfyUI loses likeness with Euler/DPM samplers.”

Cause: AI Toolkit previews for FLUX‑family targets commonly rely on flow‑matching sampling behavior. Switching samplers (even if steps match) can change structure/likeness.

Fix: Use the FLUX Training Preview‑Match workflow built to match AI Toolkit’s sampling pipeline.

Go to:
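For intuition on why the sampler family matters here: flow‑matching (rectified‑flow) sampling integrates a model‑predicted velocity along the sigma schedule, rather than iteratively removing predicted noise the way epsilon‑prediction Euler/DPM samplers over a DDPM‑style schedule do. The sketch below is a minimal flow‑matching Euler loop; `velocity` is a stand‑in for the real model call, not any actual API.

```python
# Minimal flow-matching (rectified-flow) Euler sampling loop. Each step
# linearly integrates the model's predicted velocity from one sigma to
# the next; swapping in a sampler with different update semantics (even
# at the same step count) follows a different trajectory.

def flow_match_euler(x: float, sigmas: list[float], velocity) -> float:
    """Integrate dx/dsigma = velocity(x, sigma) from sigmas[0] down to sigmas[-1]."""
    for sigma, sigma_next in zip(sigmas, sigmas[1:]):
        v = velocity(x, sigma)          # model call in a real pipeline
        x = x + (sigma_next - sigma) * v
    return x

if __name__ == "__main__":
    # Toy velocity field; a real model predicts this from the latent.
    out = flow_match_euler(0.0, [1.0, 0.5, 0.0], lambda x, s: 1.0)
    print(out)
```

This is why the fix above is a pipeline‑level one: the Preview‑Match workflow reproduces the sampling semantics, not just the step/CFG numbers.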


“Z‑Image Turbo preview matches in AI Toolkit, but ComfyUI looks degraded.”

Cause: Z‑Image‑family inference is sensitive to model‑specific sampling patches/parameters (easy to miss in custom graphs).

Fix: Use the Z‑Image Training Preview‑Match workflow (Turbo vs De‑Turbo matters).

Go to:


Related guide: matching without ComfyUI (Playground/API)

If you don’t need ComfyUI and simply want the fastest path to training‑preview matching inference (or want to integrate into an app), use:

  • AI Toolkit Inference: Get Results That Match Your Training Samples (Playground/API) → link

Ready to start training?