
Z-Image Finetuned Models Collection | Multi-Style Generator

Workflow Name: RunComfy/Z-Image-Finetuned-Models
Workflow ID: 0000...1324
With this workflow, you can explore a collection of specialized model variations optimized for different visual themes and artistic styles. Generate realistic portraits, cinematic shots, or anime-inspired imagery with fine control over detail and tone. The workflow streamlines testing and comparison of finetuned models for efficient experimentation. Integration of optimized UNet loaders and CFG normalization enhances visual consistency. LoRA options allow precise style blending. Perfect for artists and AI explorers seeking reliable, high-quality outputs. Unlock consistent, beautifully detailed visuals across multiple finetuned checkpoints.

Z-Image Finetuned Models: multi‑style, high‑quality image generation in ComfyUI

This workflow assembles Z-Image-Turbo and a rotating set of Z-Image finetuned models into a single, production‑ready ComfyUI graph. It is designed to compare styles side by side, keep prompt behavior consistent, and produce sharp, coherent results with minimal steps. Under the hood it combines optimized UNet loading, CFG normalization, AuraFlow‑compatible sampling, and optional LoRA injection so you can explore realism, cinematic portraiture, dark fantasy and anime‑inspired looks without re‑wiring your canvas.

Z-Image Finetuned Models is ideal for artists, prompt engineers, and model explorers who want a fast way to evaluate multiple checkpoints and LoRAs while staying within one consistent pipeline. Enter one prompt, render four variations from different Z-Image finetunes, and quickly lock in the style that best matches your brief.

Key models in the ComfyUI Z-Image Finetuned Models workflow

  • Tongyi‑MAI Z‑Image‑Turbo. A 6B‑parameter Single‑Stream Diffusion Transformer distilled for few‑step, photoreal text‑to‑image generation with strong instruction adherence and bilingual text rendering. Official weights and usage notes are on the model card, with the tech report and distillation methods detailed on arXiv and in the project repo; a hedged Diffusers loading sketch appears after this list. Model • Paper • Decoupled‑DMD • DMDR • GitHub • Diffusers pipeline

  • BEYOND REALITY Z‑Image (community finetune). A photorealistic‑leaning Z‑Image checkpoint that emphasizes glossy textures, crisp edges, and stylized finishing, suitable for portraits and product‑like compositions. Model

  • Z‑Image‑Turbo‑Realism LoRA (example LoRA used in this workflow’s LoRA lane). A lightweight adapter that pushes ultra‑realistic rendering while preserving base Z‑Image‑Turbo prompt alignment; loadable without replacing your base model. Model

  • AuraFlow family (sampling‑compatible reference). The workflow uses AuraFlow‑style sampling hooks for stable few‑step generations; see the pipeline reference for background on AuraFlow schedulers and their design goals. Docs
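Since the Z‑Image‑Turbo card above also links a Diffusers pipeline, a quick test render outside ComfyUI can be a useful sanity check before comparing finetunes. The snippet below is a minimal, hedged sketch rather than the official usage: the repo id matches the links in this article, but automatic pipeline resolution, the dtype, the step count, and the guidance value are assumptions to verify against the model card.

```python
# Hedged sketch: loading Z-Image-Turbo via Hugging Face Diffusers for a quick test render.
# Depending on your diffusers version, the checkpoint may need a dedicated pipeline class
# or trust_remote_code=True; check the official model card before relying on this.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Tongyi-MAI/Z-Image-Turbo",      # repo id from the model card linked in this article
    torch_dtype=torch.bfloat16,      # assumption; the card may recommend a different dtype
)
pipe.to("cuda")

# Turbo-class distillations target few steps and little or no classifier-free guidance.
image = pipe(
    prompt="cinematic portrait of a lighthouse keeper at dusk, film grain",
    num_inference_steps=8,           # illustrative few-step budget
    guidance_scale=1.0,              # effectively disables CFG on most pipelines
).images[0]
image.save("z_image_turbo_test.png")
```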

How to use the ComfyUI Z-Image Finetuned Models workflow

The graph is organized into four independent generation lanes that share a common text encoder and VAE. Use one prompt to drive all lanes, then compare the results saved from each branch; a minimal API‑format sketch of the base Turbo lane follows the walkthrough below.

  • General Model

    • The shared setup loads the text encoder and VAE. Enter your description in the positive CLIPTextEncode (#75) and add optional constraints to the negative CLIPTextEncode (#74). This keeps conditioning identical across branches so you can fairly judge how each finetune behaves. The VAELoader (#21) provides the decoder used by all lanes to turn latents back into images.
  • Z‑Image (Base Turbo)

    • This lane runs the official Z‑Image‑Turbo UNet via UNETLoader (#100) and patches it with ModelSamplingAuraFlow (#76) for few‑step stability. CFGNorm (#67) standardizes classifier‑free guidance behavior so the sampler’s contrast and detail stay predictable across prompts. An EmptyLatentImage (#19) defines the canvas size, then KSampler (#78) generates latents which are decoded by VAEDecode (#79) and written by SaveImage (#102). Use this branch as your baseline when evaluating other Z-Image Finetuned Models.
  • Z‑Image‑Turbo + Realism LoRA

    • This lane injects a style adapter with LoraLoaderModelOnly (#106) on top of the base UNETLoader (#82). ModelSamplingAuraFlow (#84) and CFGNorm (#64) keep outputs crisp while the LoRA pushes realism without overwhelming subject matter. Define resolution with EmptyLatentImage (#71), generate with KSampler (#85), decode via VAEDecode (#86), and save using SaveImage (#103). If a LoRA feels too strong, reduce its weight here rather than over‑editing your prompt.
  • BEYOND REALITY finetune

    • This path swaps in a community checkpoint with UNETLoader (#88) to deliver a stylized, high‑contrast look. CFGNorm (#66) tames guidance so the visual signature stays clean when you change samplers or steps. Set your target size in EmptyLatentImage (#72), render with KSampler (#89), decode with VAEDecode (#90), and save via SaveImage (#104). Use the same prompt as the base lane to see how this finetune interprets composition and lighting.
  • Red Tide Dark Beast AIO finetune

    • A dark‑fantasy oriented checkpoint is loaded with CheckpointLoaderSimple (#92), then normalized by CFGNorm (#65). This lane leans into moody color palettes and heavier micro‑contrast while maintaining good prompt compliance. Choose your frame in EmptyLatentImage (#73), generate with KSampler (#93), decode with VAEDecode (#94), and export from SaveImage (#105). It is a practical way to test grittier aesthetics within the same Z-Image Finetuned Models setup.
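For readers who drive ComfyUI programmatically, the base Turbo lane described above can also be expressed in ComfyUI's API (JSON) format and queued on a running server over HTTP. The sketch below is an illustrative reconstruction, not an export of this workflow: the node class names mirror the walkthrough, but every file name, the CLIPLoader choice and its type value, and the numeric settings are placeholders to adapt to your installation.

```python
# Hedged sketch of the base Turbo lane in ComfyUI's API (JSON) format, submitted over HTTP.
import json
import urllib.request

graph = {
    # Shared setup: text encoder and VAE reused by every lane.
    "1": {"class_type": "CLIPLoader",
          "inputs": {"clip_name": "z_image_text_encoder.safetensors",  # placeholder file
                     "type": "stable_diffusion"}},                     # placeholder type
    "2": {"class_type": "VAELoader",
          "inputs": {"vae_name": "z_image_vae.safetensors"}},          # placeholder file
    "3": {"class_type": "CLIPTextEncode",  # positive prompt (CLIPTextEncode #75 in the graph)
          "inputs": {"text": "cinematic portrait, soft rim light, 85mm", "clip": ["1", 0]}},
    "4": {"class_type": "CLIPTextEncode",  # negative prompt (CLIPTextEncode #74)
          "inputs": {"text": "", "clip": ["1", 0]}},

    # Base Turbo lane: UNet -> AuraFlow sampling patch -> CFG normalization.
    "5": {"class_type": "UNETLoader",
          "inputs": {"unet_name": "z_image_turbo.safetensors", "weight_dtype": "default"}},
    "6": {"class_type": "ModelSamplingAuraFlow",
          "inputs": {"model": ["5", 0], "shift": 3.0}},                # illustrative shift
    "7": {"class_type": "CFGNorm",
          "inputs": {"model": ["6", 0], "strength": 1.0}},

    # Canvas, sampler, decode, save.
    "8": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "9": {"class_type": "KSampler",
          "inputs": {"model": ["7", 0], "positive": ["3", 0], "negative": ["4", 0],
                     "latent_image": ["8", 0], "seed": 42, "steps": 8, "cfg": 1.0,
                     "sampler_name": "euler", "scheduler": "simple", "denoise": 1.0}},
    "10": {"class_type": "VAEDecode",
           "inputs": {"samples": ["9", 0], "vae": ["2", 0]}},
    "11": {"class_type": "SaveImage",
           "inputs": {"images": ["10", 0], "filename_prefix": "z_image_base"}},
}

# Queue the graph on a locally running ComfyUI server (default address shown).
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": graph}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode("utf-8"))
```

The other three lanes reuse the same tail (EmptyLatentImage through SaveImage) and differ only in the model-loading nodes at the top of the chain.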

Key nodes in the ComfyUI Z-Image Finetuned Models workflow

  • ModelSamplingAuraFlow (#76, #84)

    • Purpose: patches the model to use an AuraFlow‑compatible sampling path that is stable at very low step counts. The shift control subtly adjusts sampling trajectories; treat it as a finesse dial that interacts with your sampler choice and step budget. For best comparability across lanes, keep the same sampler and adjust only one variable (e.g., shift or LoRA weight) per test. Reference: AuraFlow pipeline background and scheduling notes. Docs
  • CFGNorm (#64, #65, #66, #67)

    • Purpose: normalizes classifier‑free guidance so contrast and detail do not swing wildly when you change models, steps, or schedulers. Increase its strength if highlights wash out or textures feel inconsistent between lanes; reduce it if images start to look overly compressed. Keep it similar across branches when you want a clean A/B of Z-Image Finetuned Models.
  • LoraLoaderModelOnly (#106)

    • Purpose: injects a LoRA adapter directly into the loaded UNet without altering the base checkpoint. The strength parameter controls stylistic impact; lower values preserve base realism while higher values impose the LoRA’s look. If a LoRA overpowers faces or typography, reduce its weight first, then fine‑tune prompt phrasing. A sketch of this lane’s wiring follows this list.
  • KSampler (#78, #85, #89, #93)

    • Purpose: runs the actual diffusion loop. Choose a sampler and scheduler that pair well with few‑step distillations; many users prefer Euler‑style samplers with uniform or multistep schedulers for Turbo‑class models. Keep seeds fixed when comparing lanes, and change only one variable at a time to understand how each finetune behaves.
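To make the LoraLoaderModelOnly wiring concrete, the fragment below shows, in the same API format as the base-lane sketch above, how the LoRA lane's model chain differs: the adapter sits between the UNet loader and the sampling patch, and strength_model is the dial the walkthrough recommends lowering first. The file names and values are illustrative rather than the workflow's defaults.

```python
# Hedged fragment: the LoRA lane's model chain in ComfyUI API format. Reuse the prompt,
# latent, sampler, decode, and save nodes from the base-lane sketch, pointing the
# KSampler "model" input at node "23" below.
lora_lane = {
    "20": {"class_type": "UNETLoader",
           "inputs": {"unet_name": "z_image_turbo.safetensors", "weight_dtype": "default"}},
    "21": {"class_type": "LoraLoaderModelOnly",
           "inputs": {"model": ["20", 0],
                      "lora_name": "z_image_turbo_realism.safetensors",  # placeholder file
                      "strength_model": 0.7}},  # lower this first if the LoRA overpowers faces
    "22": {"class_type": "ModelSamplingAuraFlow",
           "inputs": {"model": ["21", 0], "shift": 3.0}},
    "23": {"class_type": "CFGNorm",
           "inputs": {"model": ["22", 0], "strength": 1.0}},
}
```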

Optional extras

  • Start with one descriptive paragraph‑style prompt and reuse it across all lanes to judge differences among Z-Image Finetuned Models; iterate style words only after you pick a favorite branch.
  • For Turbo‑class models, very low or even zero CFG often yields the cleanest results; use the negative prompt only when you must exclude specific elements.
  • Maintain the same resolution, sampler, and seed when doing A/B tests; change LoRA weight or shift in small increments to isolate cause and effect (a scripted version of this habit is sketched below).
  • Each branch writes its own output; the four SaveImage nodes are labeled uniquely so you can compare and curate quickly.
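The fixed-seed A/B habit above can also be scripted. The helper below is an illustrative sketch rather than part of the workflow: it assumes the graph dictionary and node ids from the base-lane example earlier, keeps prompt and seed constant, and changes exactly one KSampler setting per run so each output isolates a single variable.

```python
# Illustrative fixed-seed A/B helper; `graph` is the API-format dict from the base-lane
# sketch above, and node ids "3" (positive prompt), "9" (KSampler), and "11" (SaveImage)
# follow that sketch rather than the workflow's own numbering.
import copy
import json
import urllib.request

def submit(graph, server="http://127.0.0.1:8188"):
    req = urllib.request.Request(
        f"{server}/prompt",
        data=json.dumps({"prompt": graph}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req).read()

def run_ab_test(graph, variants, prompt, seed=42):
    for name, sampler_overrides in variants.items():
        g = copy.deepcopy(graph)
        g["3"]["inputs"]["text"] = prompt            # same positive prompt in every run
        g["9"]["inputs"]["seed"] = seed              # same seed for a fair comparison
        g["9"]["inputs"].update(sampler_overrides)   # change exactly one variable per run
        g["11"]["inputs"]["filename_prefix"] = f"ab_{name}"
        submit(g)

# Example: baseline versus a single change in step count.
run_ab_test(graph, {"baseline": {}, "steps_12": {"steps": 12}},
            prompt="cinematic portrait, soft rim light, 85mm")
```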

Links for further reading:

  • Z‑Image‑Turbo model card: Tongyi-MAI/Z-Image-Turbo
  • Technical report and methods: Z‑Image • Decoupled‑DMD • DMDR
  • Project repository: Tongyi‑MAI/Z‑Image
  • Example finetune: Nurburgring/BEYOND_REALITY_Z_IMAGE
  • Example LoRA: Z‑Image‑Turbo‑Realism‑LoRA

Acknowledgements

This workflow implements and builds upon the following works and resources. We gratefully acknowledge the authors and maintainers of the Hugging Face models referenced in this article for their contributions and ongoing maintenance. For authoritative details, please refer to the original documentation and repositories linked below.

Resources

  • HuggingFace models:
    • Beyond Reality
    • Dark Beast
    • Realism

Note: Use of the referenced models, datasets, and code is subject to the respective licenses and terms provided by their authors and maintainers.

