Z-Image LoRA Training on 8GB VRAM: What Works and What Breaks

This guide shows what actually works for Z-Image LoRA training on 8GB VRAM. It focuses on realistic local constraints, low-memory settings, 512 vs 768 tradeoffs, and how to use an 8GB machine as a smoke-test environment without overpromising quality.

Train Diffusion Models with Ostris AI Toolkit

The embedded New Training Job form (Ostris AI Toolkit) covers the Job, Model, Quantization, Target, Save, Training, Datasets, and Sample sections. In the Model field, use a Hugging Face repo ID (e.g. owner/model-name); full URLs, .safetensors files, and local files are not supported.

If you are trying Z-Image LoRA training on 8GB VRAM, your intent is usually very concrete:

You have one smaller GPU and want a straight answer: is Z-Image LoRA training actually worth doing on 8GB, or should you stop early and move the real job to a bigger machine?

This guide is for that situation.

By the end, you will know:

  • whether Z-Image LoRA training on 8GB VRAM is realistic
  • what actually works on 8GB cards
  • which settings help fit and speed the run
  • where quality starts to break down
  • when cloud training is the smarter move
For the full base workflow, see the main Z-Image Base LoRA training guide.

Table of contents

  • 1. Is 8GB enough for Z-Image LoRA training?
  • 2. What actually works for Z-Image LoRA training on 8GB VRAM
  • 3. 512 vs 768 vs 1024 on 8GB
  • 4. Best Z-Image LoRA training settings for 8GB VRAM
  • 5. What breaks quality or speed on 8GB
  • 6. When to move the run to RunComfy Cloud
  • 7. Bottom line

1. Is 8GB enough for Z-Image LoRA training?

The short answer is:

yes, but only for constrained runs

On 8GB VRAM, you should think in terms of:

  • smoke tests
  • small, focused character or concept LoRAs
  • conservative resolution
  • memory-saving settings first

You should not think in terms of:

  • large high-resolution final runs
  • maximum-speed iteration
  • "I will just train the same way as a 24GB machine"

That distinction matters a lot.


2. What actually works for Z-Image LoRA training on 8GB VRAM

Z-Image LoRA training on 8GB VRAM is workable, but only within clear limits.

The pattern that actually works is:

  • usable 512px training on 8GB cards
  • workable 768px in some setups, but often slower and less attractive
  • character LoRAs trained with small datasets and conservative settings

The obvious limits are:

  • once the run gets too close to the VRAM ceiling, step time can become painfully slow
  • a technically running job may still be too slow to be useful

So the correct takeaway is:

8GB can work for Z-Image LoRAs, but the successful runs are conservative, not ambitious.

3. 512 vs 768 vs 1024 on 8GB

3.1 512 is the realistic baseline

If you want the highest chance of success, start with 512.

This is the setting that repeatedly shows up as the practical entry point for Z-Image LoRA training on 8GB VRAM.

3.2 768 is possible, but not always attractive

768 can work with the right offloading setup.

But 768 runs on 8GB also tend to bring:

  • slower step times
  • less encouraging iteration speed
  • a bigger risk that the run becomes borderline

3.3 1024 is usually not the local 8GB answer

If your target outcome really needs 1024-quality training, 8GB is usually the wrong environment.

That does not mean the workflow is impossible.

It means you should probably move the full run to a bigger GPU instead of forcing it locally.
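The 512 / 768 / 1024 tradeoff above is mostly pixel-count arithmetic. As a rough sketch (real memory use also depends on the model, attention implementation, and offloading), activation memory scales roughly with resolution squared, which is why 768 and 1024 feel so much heavier than 512 on an 8GB card:

```python
# Back-of-envelope scaling of activation memory with training resolution.
# This is an illustration of the pixel-count math only, not a measurement.

def relative_activation_cost(resolution: int, baseline: int = 512) -> float:
    """Activation memory grows roughly with pixel count (resolution squared)."""
    return (resolution / baseline) ** 2

for res in (512, 768, 1024):
    print(f"{res}px is roughly {relative_activation_cost(res):.2f}x the 512px cost")
# 768px is ~2.25x and 1024px is ~4x the 512px activation cost
```

That 2.25x jump is why 768 is "possible but borderline" on 8GB, and the 4x jump is why 1024 usually is not.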


4. Best Z-Image LoRA training settings for 8GB VRAM

The low-VRAM recipe that works best is practical, not elegant.

Strong low-VRAM baseline

  • enable Layer Offloading
  • offload the text encoder aggressively
  • use significant transformer offload
  • keep Batch Size = 1
  • turn on Cache Latents

Optional helpers

  • Unload TE can help if your workflow allows it
  • Cache Text Embeddings may help when your captions are stable

What this setup is trying to do

The goal is not elegance.

The goal is to:

  • fit the run
  • keep step time acceptable
  • finish a usable LoRA

That is the right 8GB mindset.
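The baseline and optional helpers above can be sketched as a plain config dict, with a small sanity check for the settings that most commonly break 8GB runs. The key names here are illustrative assumptions, not the real AI Toolkit config schema; map them onto whatever fields your trainer UI or config file actually exposes.

```python
# Hypothetical low-VRAM baseline mirroring the bullets above.
# Key names are illustrative, NOT the real AI Toolkit schema.

LOW_VRAM_BASELINE = {
    "batch_size": 1,                 # keep at 1 on 8GB
    "resolution": 512,               # the realistic 8GB entry point
    "layer_offloading": True,        # offload transformer layers to system RAM
    "text_encoder_offload": "aggressive",
    "cache_latents": True,           # precompute VAE latents once
    # optional helpers
    "unload_text_encoder": True,     # if your workflow allows it
    "cache_text_embeddings": True,   # safe when captions are stable
}

def check_8gb_safe(cfg: dict) -> list[str]:
    """Flag settings that commonly break 8GB runs."""
    warnings = []
    if cfg.get("batch_size", 1) > 1:
        warnings.append("batch_size > 1 usually runs out of memory on 8GB")
    if cfg.get("resolution", 512) > 768:
        warnings.append("resolution > 768 is usually the wrong local 8GB target")
    if not cfg.get("layer_offloading"):
        warnings.append("enable layer offloading to fit the model")
    return warnings

print(check_8gb_safe(LOW_VRAM_BASELINE))  # [] -- the baseline passes
```

The point of the check is the mindset in the list above: fit first, speed second, ambition last.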


5. What breaks quality or speed on 8GB

5.1 Starting too large

Do not start with:

  • 1024
  • high rank
  • expensive previews
  • wide bucket mixes

That is how Z-Image LoRA training runs on 8GB VRAM become time sinks.

5.2 Treating 8GB like a final production box

8GB is great for:

  • proof of concept
  • small LoRAs
  • dataset validation

It is much worse for:

  • high-confidence final quality runs
  • large datasets
  • repeated large-scale checkpoint comparison

5.3 Confusing "it finished" with "it is good"

An 8GB run can finish and still produce only "okay" samples locally, while a stronger environment produces a clearly better final LoRA later.

That is an important distinction.

The local 8GB run is best treated as an early validation pass, not necessarily the run that produces your best final LoRA.

6. When to move the run to RunComfy Cloud

If your goal is a real, reusable result from Z-Image LoRA training on 8GB VRAM rather than just proving it can run, RunComfy Cloud AI Toolkit often becomes the better choice once:

  • you want 1024 or above
  • you want faster iteration
  • you want to compare checkpoints without waiting forever
  • your LoRA is commercially important enough that repeated local retries are more expensive than moving the run

The smart pattern is:

  1. do a local 8GB smoke test
  2. confirm the dataset and trigger logic
  3. move the real run to a bigger GPU

That gives you the best of both worlds.
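The three-step pattern above reduces to a simple decision rule: once the dataset is validated locally, move the real run off the 8GB box when the target is too large or the step time is too slow. A minimal sketch, with thresholds that are illustrative assumptions rather than official guidance:

```python
# Hedged sketch of the "smoke test locally, train for real in the cloud"
# decision. The 10 s/step threshold is an assumed example, not a rule.

def should_move_to_cloud(
    target_resolution: int,
    dataset_validated: bool,
    seconds_per_step_local: float,
    max_acceptable_s_per_step: float = 10.0,
) -> bool:
    """Move the real run once it is validated but too large or too slow locally."""
    too_large = target_resolution >= 1024
    too_slow = seconds_per_step_local > max_acceptable_s_per_step
    return dataset_validated and (too_large or too_slow)

# The 512 smoke test finished and the dataset works, but the goal is 1024:
print(should_move_to_cloud(1024, dataset_validated=True, seconds_per_step_local=6.0))
# -> True
```

Note that an unvalidated dataset never triggers the move: confirm trigger words and captions locally first, then spend cloud time on the real run.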

Open it here: RunComfy Cloud AI Toolkit


7. Bottom line

Z-Image LoRA training on 8GB VRAM can work, but only when you treat 8GB as a constrained environment.

What usually works:

  • 512 first
  • heavy offloading
  • batch size 1
  • specific LoRA goals
  • smoke-test mindset

What usually does not work well:

  • high-resolution ambition from the start
  • expensive previews
  • assuming "finished" means "production-ready"

If your target is a specific LoRA you really care about, 8GB is a good place to validate the idea.

It is not always the best place to finish it.

Ready to start training?