ComfyUI Node: Lora Add

Class Name

LoraAdd

Category
LoraUtils
Author
lrzjason (Account age: 4210 days)
Extension
Comfyui-LoraUtils
Last Updated
2025-11-13
Github Stars
0.03K

How to Install Comfyui-LoraUtils

Install this extension via the ComfyUI Manager by searching for Comfyui-LoraUtils:
  • 1. Click the Manager button in the main menu.
  • 2. Select the Custom Nodes Manager button.
  • 3. Enter Comfyui-LoraUtils in the search bar.
After installation, click the Restart button to restart ComfyUI. Then, manually refresh your browser to clear the cache and access the updated list of nodes.


Lora Add Description

Combines two LoRA models, blending styles with adjustable influence for creative AI art.

Lora Add:

The LoraAdd node combines two LoRA (Low-Rank Adaptation) models, merging their capabilities into a single, more versatile model. It is particularly useful for AI artists who want to blend styles or features from separate LoRA models into one cohesive output. By adjusting the scaling factors, you control how strongly each model influences the final result, making the node a flexible tool for creative experimentation and a practical way to expand your AI art generation capabilities.
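The extension's actual implementation is not shown here, but conceptually a weighted LoRA merge sums the low-rank weight deltas of the two models, each scaled by its alpha. A minimal NumPy sketch (all names and shapes are hypothetical):

```python
import numpy as np

# Hypothetical LoRA factors: each LoRA stores a low-rank pair (up, down)
# whose product up @ down approximates a weight delta for one layer.
rng = np.random.default_rng(0)
d_out, d_in, rank = 8, 6, 2
up_a, down_a = rng.normal(size=(d_out, rank)), rng.normal(size=(rank, d_in))
up_b, down_b = rng.normal(size=(d_out, rank)), rng.normal(size=(rank, d_in))

# alpha_a and alpha_b weight each model's contribution, as in the node.
alpha_a, alpha_b = 1.0, 0.5
delta_merged = alpha_a * (up_a @ down_a) + alpha_b * (up_b @ down_b)
print(delta_merged.shape)  # (8, 6)
```

Setting alpha_b to 0.0 reduces the merge to loraA alone, which is a handy sanity check when experimenting.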

Lora Add Input Parameters:

loraA

This parameter represents the first LoRA model you wish to combine. It serves as one of the two primary inputs for the merging process. The model's layers and weights will be adjusted and integrated with those of the second model, loraB, based on the specified scaling factors. There are no specific minimum or maximum values for this parameter, as it is a model input.

loraB

Similar to loraA, this parameter is the second LoRA model to be combined. It works in conjunction with loraA to produce a merged model. The interaction between loraA and loraB is influenced by their respective scaling factors, allowing for a balanced or weighted combination. Like loraA, this parameter does not have specific minimum or maximum values.

alpha_a

This parameter is a scaling factor for loraA, determining its influence in the final merged model. A higher value increases the weight of loraA in the combination, while a lower value reduces it. The default value is 1.0, and it can be adjusted to fine-tune the contribution of loraA to the merged model.

alpha_b

This parameter functions as a scaling factor for loraB, similar to alpha_a for loraA. It controls the extent to which loraB influences the final model. The default value is 1.0, and adjusting it allows you to balance the contributions of both models according to your creative needs.

target_rank

This parameter specifies the target rank for the merged model. It is used to align the ranks of the LoRA layers during the merging process. A value of -1 indicates that the default rank should be used, which is typically determined by the existing ranks of the input models. Adjusting this parameter can help optimize the performance of the merged model.
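Rank alignment of this kind is typically done by re-factorizing the merged delta with a truncated SVD. The sketch below is not the extension's code; it just illustrates how a target_rank would be applied, with -1 keeping the full rank:

```python
import numpy as np

def refactor_delta(delta, target_rank):
    # Split a merged weight delta back into low-rank (up, down) factors.
    # target_rank=-1 keeps every singular value (the "default rank" case).
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    r = len(s) if target_rank == -1 else min(target_rank, len(s))
    up = u[:, :r] * s[:r]   # fold singular values into the up factor
    down = vt[:r, :]
    return up, down

rng = np.random.default_rng(1)
delta = rng.normal(size=(8, 6))
up, down = refactor_delta(delta, target_rank=4)
print(up.shape, down.shape)  # (8, 4) (4, 6)
```

Lower target ranks give smaller files but a coarser approximation of the merged delta, which is one reason tuning this value can affect quality.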

Lora Add Output Parameters:

merged_lora

The output of the LoraAdd node is a new LoRA model that combines the features and styles of loraA and loraB. This merged model retains the characteristics of both input models, adjusted according to the specified scaling factors. The merged_lora can be used in subsequent AI art generation tasks, offering a unique blend of the original models' capabilities.
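At use time, a LoRA's low-rank delta is added onto the base model weight it targets, and a merged LoRA behaves the same way. A hypothetical sketch of that final step:

```python
import numpy as np

rng = np.random.default_rng(2)
base_w = rng.normal(size=(8, 6))   # a base model weight matrix (hypothetical)
up = rng.normal(size=(8, 2))       # merged LoRA up factor
down = rng.normal(size=(2, 6))     # merged LoRA down factor

# Applying the merged LoRA adds its low-rank delta to the base weight.
patched_w = base_w + up @ down
print(patched_w.shape)  # (8, 6)
```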

Lora Add Usage Tips:

  • Experiment with different values for alpha_a and alpha_b to achieve the desired balance between the two input models. This can help you create a model that best fits your artistic vision.
  • Use the target_rank parameter to optimize the performance of the merged model, especially if you notice any degradation in quality or efficiency.
  • Consider saving the merged model for future use, allowing you to build a library of customized LoRA models tailored to specific styles or projects.
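On the last tip: ComfyUI LoRAs are usually stored as .safetensors files. The sketch below uses NumPy's .npz format purely as a stand-in for the save-and-reload round trip (the file name and keys are made up):

```python
import os
import tempfile
import numpy as np

rng = np.random.default_rng(3)
merged = {"layer0_up": rng.normal(size=(8, 2)),
          "layer0_down": rng.normal(size=(2, 6))}

# Persist the merged factors so the blend can be reused across projects.
path = os.path.join(tempfile.gettempdir(), "merged_lora_demo.npz")
np.savez(path, **merged)

restored = dict(np.load(path))
print(sorted(restored))  # ['layer0_down', 'layer0_up']
```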

Lora Add Common Errors and Solutions:

"Incompatible LoRA models"

  • Explanation: This error occurs when the input models loraA and loraB have incompatible structures or layers that cannot be merged.
  • Solution: Ensure that both models are compatible in terms of architecture and layer configuration. You may need to adjust the models or select different ones that are more compatible.
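A pre-merge compatibility check can catch this before the merge runs. In this hypothetical sketch (not the extension's code), each LoRA is a dict of per-layer weight deltas, and any layer shared by both must target a weight of the same shape:

```python
import numpy as np

def check_compatible(lora_a, lora_b):
    # Hypothetical guard mirroring the "Incompatible LoRA models" error.
    for key in sorted(lora_a.keys() & lora_b.keys()):
        if lora_a[key].shape != lora_b[key].shape:
            raise ValueError(
                f"Incompatible LoRA models: {key} has shape "
                f"{lora_a[key].shape} vs {lora_b[key].shape}")

a = {"layer0": np.zeros((8, 6))}
b = {"layer0": np.zeros((8, 6)), "layer1": np.zeros((4, 4))}
check_compatible(a, b)  # passes: the shared layer matches
```

Layers present in only one model are left alone here; only shared layers need matching shapes.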

"Invalid scaling factor"

  • Explanation: This error is triggered when the scaling factors alpha_a or alpha_b are set to invalid values, such as negative numbers.
  • Solution: Check the scaling factors and ensure they are set to valid, positive numbers. Adjust them to appropriate values to avoid this error.
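A simple validation step illustrates the kind of check behind this error (the function name and exact rule are assumptions, not the extension's code):

```python
def validate_alphas(alpha_a, alpha_b):
    # Hypothetical guard: scaling factors must be non-negative numbers,
    # matching the "negative numbers are invalid" behavior described above.
    for name, value in (("alpha_a", alpha_a), ("alpha_b", alpha_b)):
        if not isinstance(value, (int, float)) or value < 0:
            raise ValueError(f"Invalid scaling factor: {name}={value!r}")

validate_alphas(1.0, 0.5)  # valid values pass silently
```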

"Rank alignment failed"

  • Explanation: This error indicates that the ranks of the LoRA layers could not be aligned during the merging process.
  • Solution: Verify the target_rank parameter and ensure it is set correctly. You may need to experiment with different values or check the ranks of the input models to resolve this issue.
