
ComfyUI Node: Train LoRA

Class Name: TrainLoraNode
Category: training
Author: ComfyAnonymous (account age: 763 days)
Extension: ComfyUI
Last Updated: 2026-05-13
GitHub Stars: 112.77K

How to Install ComfyUI

Install this extension via the ComfyUI Manager by searching for ComfyUI:
  1. Click the Manager button in the main menu.
  2. Select the Custom Nodes Manager button.
  3. Enter ComfyUI in the search bar.
After installation, click the Restart button to restart ComfyUI, then manually refresh your browser to clear the cache and load the updated list of nodes.


Train LoRA Description

Trains LoRA adapters with far fewer trainable parameters than full fine-tuning, and supports a range of configurations for flexible training.

Train LoRA:

The TrainLoraNode trains Low-Rank Adaptation (LoRA) weights, a technique for fine-tuning large pre-trained models efficiently. It is particularly useful for AI artists and developers who want to customize a model for a specific task without extensive computational resources: LoRA adjusts the model through a low-rank update that trains significantly fewer parameters than traditional fine-tuning, making the process both cheaper and faster. The node supports several configurations, including the use of existing LoRA weights, gradient checkpointing, and a choice of optimization algorithms. The node is marked experimental, so its interface and behavior may still change.
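To make the parameter savings concrete, here is a minimal, illustrative sketch of the low-rank update LoRA is built on; the dimensions and variable names are assumptions for the example, not the node's internals.

```python
import torch

# Illustrative LoRA idea (not TrainLoraNode internals): instead of updating
# the full weight matrix W (d_out x d_in), train two small matrices A and B
# whose product forms a low-rank correction.
d_in, d_out, rank = 768, 768, 8

W = torch.randn(d_out, d_in)          # frozen pre-trained weight
A = torch.randn(rank, d_in) * 0.01    # trainable down-projection
B = torch.zeros(d_out, rank)          # trainable up-projection, starts as a no-op

x = torch.randn(1, d_in)
y = x @ W.T + x @ A.T @ B.T           # base output plus low-rank correction

# Trainable parameters: rank * (d_in + d_out) instead of d_in * d_out.
print(rank * (d_in + d_out), "trainable vs", d_in * d_out, "for full fine-tuning")
```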

Train LoRA Input Parameters:

model

This parameter represents the pre-trained model that you wish to fine-tune using LoRA. It is crucial as it serves as the base model upon which the LoRA weights will be applied. The choice of model can significantly impact the quality and specificity of the fine-tuning results.

lora

The lora parameter refers to the Low-Rank Adaptation weights that will be used to adjust the base model. These weights are essential for the fine-tuning process, allowing for efficient adaptation of the model to new tasks with minimal computational overhead.

strength_model

This parameter controls the intensity of the LoRA weights applied to the model. A higher value increases the influence of the LoRA weights, potentially leading to more significant changes in the model's behavior. It is important to balance this parameter to achieve the desired level of fine-tuning without overfitting.
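As an illustration of what a strength multiplier typically does, the following sketch scales a LoRA delta before merging it into a base weight; the function and variable names are hypothetical, not TrainLoraNode's code.

```python
import torch

def apply_lora(W: torch.Tensor, A: torch.Tensor, B: torch.Tensor,
               strength: float) -> torch.Tensor:
    # Hypothetical merge: strength = 0.0 leaves the base weight untouched;
    # strength = 1.0 applies the full learned adaptation.
    return W + strength * (B @ A)

W, A, B = torch.randn(16, 16), torch.randn(4, 16), torch.randn(16, 4)
print(torch.allclose(apply_lora(W, A, B, 0.0), W))  # True: zero strength is a no-op
```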

bypass

The bypass parameter is a boolean flag that determines whether to use a bypass mode during training. When set to true, it allows for a different method of loading LoRA weights, which can be useful in specific scenarios where traditional loading methods may not be optimal.
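The sketch below contrasts weight merging with a bypass-style application, in which the base layer stays untouched and the low-rank branch is added at forward time. This illustrates the general technique under that assumption; it is not ComfyUI's exact implementation.

```python
import torch
import torch.nn as nn

class BypassLoRA(nn.Module):
    """Applies a LoRA branch alongside a frozen base layer (illustrative only)."""
    def __init__(self, base: nn.Linear, rank: int, strength: float = 1.0):
        super().__init__()
        self.base = base                                 # left unmodified
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)                   # starts as a no-op
        self.strength = strength

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base output plus the scaled low-rank correction, added at forward time.
        return self.base(x) + self.strength * self.up(self.down(x))

layer = BypassLoRA(nn.Linear(64, 64), rank=4)
print(layer(torch.randn(2, 64)).shape)  # torch.Size([2, 64])
```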

Train LoRA Output Parameters:

LORA_MODEL

The LORA_MODEL output provides the fine-tuned model with the applied LoRA weights. This output is crucial as it represents the final product of the training process, ready for deployment or further testing.

LOSS_MAP

The LOSS_MAP output offers a history of the loss values recorded during the training process. This information is valuable for understanding the training dynamics and assessing the effectiveness of the fine-tuning process.
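As a rough example of how this history might be inspected, the sketch below assumes LOSS_MAP can be read as a mapping from training step to loss value; the exact structure in ComfyUI may differ.

```python
# Hypothetical loss history, assuming step -> loss (actual structure may differ).
loss_map = {0: 0.92, 50: 0.41, 100: 0.27, 150: 0.22, 200: 0.21}

steps = sorted(loss_map)
losses = [loss_map[s] for s in steps]

# Compare the average of the first and second halves: a clear drop suggests
# healthy convergence, while similar values suggest a plateau.
half = len(losses) // 2
early = sum(losses[:half]) / half
late = sum(losses[half:]) / (len(losses) - half)
print("early avg %.3f, late avg %.3f -> %s"
      % (early, late, "converging" if late < early else "plateaued"))
```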

steps

This output indicates the total number of training steps completed during the LoRA training process. It provides insight into the duration and extent of the training, which can be useful for evaluating the computational resources used.

Train LoRA Usage Tips:

  • Experiment with different strength_model values to find the optimal balance between model adaptation and overfitting.
  • Utilize the bypass mode if you encounter issues with traditional LoRA weight loading methods, as it may offer a more suitable alternative.
  • Regularly monitor the LOSS_MAP output to ensure that the training process is progressing as expected and make adjustments to the training parameters if necessary.

Train LoRA Common Errors and Solutions:

Error: "LoRA weights not found"

  • Explanation: This error occurs when the specified LoRA weights cannot be located or loaded.
  • Solution: Ensure that the correct path to the LoRA weights is provided and that the file is accessible.

Error: "Model not compatible with LoRA"

  • Explanation: This error indicates that the chosen model is not suitable for LoRA fine-tuning.
  • Solution: Verify that the model supports LoRA and consider using a different model that is compatible with LoRA techniques.

Error: "Gradient checkpointing failed"

  • Explanation: This error arises when there is an issue with setting up gradient checkpointing during training.
  • Solution: Check the configuration of gradient checkpointing, including the depth and modules being patched, and ensure that all dependencies are correctly installed.
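For context on what this option refers to, here is a generic PyTorch illustration of gradient checkpointing, where activations inside the wrapped block are recomputed during the backward pass instead of being stored; this is not the node's internal patching code.

```python
import torch
from torch.utils.checkpoint import checkpoint

# A small block whose intermediate activations we choose not to store.
block = torch.nn.Sequential(
    torch.nn.Linear(256, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 256),
)

x = torch.randn(8, 256, requires_grad=True)
# Activations inside `block` are recomputed on backward, trading extra
# compute for a lower peak memory footprint.
y = checkpoint(block, x, use_reentrant=False)
y.sum().backward()
print(x.grad.shape)  # gradients flow as usual: torch.Size([8, 256])
```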

Train LoRA Related Nodes

Go back to the extension to check out more related nodes.