
ComfyUI Node: Z-Image Turbo LoRA Stack

Class Name

ZImageTurboLoraStack

Category
loaders/Z-Image
Author
capitan01R (Account age: 86 days)
Extension
Comfyui-ZiT-Lora-loader
Last Updated
2026-03-21
Github Stars
0.03K

How to Install Comfyui-ZiT-Lora-loader

Install this extension via the ComfyUI Manager by searching for Comfyui-ZiT-Lora-loader
  1. Click the Manager button in the main menu
  2. Select the Custom Nodes Manager button
  3. Enter Comfyui-ZiT-Lora-loader in the search bar
After installation, click the Restart button to restart ComfyUI. Then, manually refresh your browser to clear the cache and access the updated list of nodes.

Visit ComfyUI Online for ready-to-use ComfyUI environment

  • Free trial available
  • 16GB VRAM to 80GB VRAM GPU machines
  • 400+ preloaded models/nodes
  • Freedom to upload custom models/nodes
  • 200+ ready-to-run workflows
  • 100% private workspace with up to 200GB storage
  • Dedicated Support

Run ComfyUI Online

Z-Image Turbo LoRA Stack Description

Enhances AI models by integrating multiple LoRA modules for dynamic image transformation.

Z-Image Turbo LoRA Stack:

ZImageTurboLoraStack integrates multiple LoRA (Low-Rank Adaptation) modules into a single model stack. It is designed for models built on the Lumina2 architecture and applies LoRA modules dynamically to modify and improve model behavior. By stacking modules, you can achieve more nuanced and powerful transformations in your AI-generated images. The node's primary function is to apply each module with the appropriate strength and configuration, giving you a high degree of customization so you can fine-tune the model's behavior to suit specific artistic needs or project requirements.
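The stacking idea can be sketched with toy code. All names here are hypothetical (this is not the extension's actual implementation): each enabled slot adds a scaled low-rank delta, strength * (B @ A), to a copy of the base weights, and a slot set to "None" is skipped.

```python
# Toy illustration of LoRA stacking (hypothetical names, not the
# extension's actual code). Matrices are lists of rows.

def matmul(B, A):
    """Multiply two small matrices given as nested lists."""
    return [[sum(B[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(B))]

def apply_stack(weight, slots):
    """Apply every enabled LoRA slot to a copy of `weight`."""
    W = [row[:] for row in weight]  # never mutate the base model
    for slot in slots:
        if not slot["enabled"] or slot["name"] == "None":
            continue  # disabled or empty slots are skipped
        delta = matmul(slot["B"], slot["A"])  # low-rank update B @ A
        for i in range(len(W)):
            for j in range(len(W[0])):
                W[i][j] += slot["strength"] * delta[i][j]
    return W

base = [[1.0, 0.0], [0.0, 1.0]]
slots = [
    {"name": "style", "enabled": True, "strength": 0.5,
     "B": [[1.0], [0.0]], "A": [[0.0, 2.0]]},   # rank-1 delta
    {"name": "None", "enabled": True, "strength": 1.0,
     "B": [[0.0], [0.0]], "A": [[0.0, 0.0]]},   # ignored: name is "None"
]
print(apply_stack(base, slots))  # → [[1.0, 1.0], [0.0, 1.0]]
```

Because each slot only adds its own delta, the order of well-behaved LoRA modules matters less than their combined strengths, which is why per-slot strength and enable toggles are the main tuning knobs.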

Z-Image Turbo LoRA Stack Input Parameters:

model

The model parameter is the base AI model to which the LoRA modules will be applied. It is crucial that this model is compatible with the Lumina2 architecture, as the node is specifically designed to work with this type of model. The model serves as the foundation upon which the LoRA modules are stacked, and its characteristics will influence the final output.

lora_<i>

Each lora_<i> parameter represents a specific LoRA module to be applied in the stack, where <i> is a placeholder for the slot number (e.g., lora_1, lora_2). The parameter specifies the name of the LoRA module to be loaded. If set to "None", the slot is ignored. This parameter allows you to select which LoRA modules to apply, providing flexibility in customizing the model's behavior.

strength_<i>

The strength_<i> parameter determines the intensity with which the corresponding LoRA module is applied. A value of 1.0 applies the module at full strength, while lower values reduce its impact. This parameter is crucial for balancing the influence of each LoRA module, allowing for subtle or pronounced modifications to the model's output.
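The effect of strength can be illustrated with the merge formula common to LoRA loaders, W' = W + strength * ΔW (an assumption about this node's behavior, not taken from its source):

```python
# How strength_<i> scales a LoRA delta (illustrative, assumed formula).
def merge(w, delta, strength):
    """Return w + strength * delta for a single weight value."""
    return w + strength * delta

print(merge(1.0, 2.0, 1.0))  # full strength  → 3.0
print(merge(1.0, 2.0, 0.5))  # half strength  → 2.0
print(merge(1.0, 2.0, 0.0))  # zero strength leaves the base weight → 1.0
```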

enabled_<i>

The enabled_<i> parameter is a boolean that indicates whether the corresponding LoRA module is active. If set to True, the module is applied; if False, it is skipped. This parameter provides a simple way to toggle the application of specific LoRA modules without removing them from the configuration.

fuse_qkv_<i>

The fuse_qkv_<i> parameter is a boolean that determines whether the query, key, and value (QKV) components of the LoRA module should be fused. Fusing these components can optimize the model's performance by reducing computational complexity. This parameter is particularly useful for advanced users looking to enhance model efficiency.

Z-Image Turbo LoRA Stack Output Parameters:

current

The current parameter is the resulting model after all specified LoRA modules have been applied. This output model incorporates the modifications introduced by the LoRA stack, reflecting the cumulative effects of the applied modules. It serves as the final product that can be used for generating AI art with enhanced features and capabilities.

Z-Image Turbo LoRA Stack Usage Tips:

  • Ensure that the base model is compatible with the Lumina2 architecture to avoid compatibility issues and maximize the effectiveness of the LoRA stack.
  • Experiment with different strength_<i> values to find the optimal balance for your artistic needs, as varying the strength can significantly alter the model's output.
  • Use the enabled_<i> parameter to quickly test different combinations of LoRA modules without having to reconfigure the entire stack.

Z-Image Turbo LoRA Stack Common Errors and Solutions:

Model is <model_type>, not Lumina2.

  • Explanation: This error occurs when the base model provided is not of the Lumina2 architecture, which is required for the node to function correctly.
  • Solution: Verify that the model you are using is compatible with the Lumina2 architecture. If not, switch to a compatible model to resolve this issue.
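The guard behind this message can be imagined as a simple architecture check before any LoRA is applied (a hypothetical sketch, not the node's actual source):

```python
# Hypothetical sketch of the architecture guard that raises this error.
def check_architecture(model_type):
    """Reject any base model that is not Lumina2."""
    if model_type != "Lumina2":
        raise ValueError(f"Model is {model_type}, not Lumina2.")
    return True

try:
    check_architecture("SDXL")
except ValueError as e:
    print(e)  # Model is SDXL, not Lumina2.
```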

Slot <i>: <name> (0 keys)

  • Explanation: This warning indicates that the specified LoRA module could not be loaded, possibly due to an incorrect name or missing file.
  • Solution: Check the name of the LoRA module and ensure that the corresponding file exists in the correct directory. Correct any discrepancies to ensure the module loads properly.

Z-Image Turbo LoRA Stack Related Nodes

Go back to the extension to check out more related nodes.
Comfyui-ZiT-Lora-loader
Copyright 2025 RunComfy. All Rights Reserved.

