ComfyUI Node: LucidFlux_SM_Cond

Class Name

LucidFlux_SM_Cond

Category
LucidFlux_SM
Author
smthemex (Account age: 894 days)
Extension
ComfyUI_LucidFlux
Last Updated
2025-12-10
GitHub Stars
0.06K

How to Install ComfyUI_LucidFlux

Install this extension via the ComfyUI Manager by searching for ComfyUI_LucidFlux:
  • 1. Click the Manager button in the main menu
  • 2. Select Custom Nodes Manager button
  • 3. Enter ComfyUI_LucidFlux in the search bar
After installation, click the Restart button to restart ComfyUI. Then, manually refresh your browser to clear the cache and access the updated list of nodes.


LucidFlux_SM_Cond Description

Enhance AI models with LoRA conditioning for tailored adaptations.

LucidFlux_SM_Cond:

The LucidFlux_SM_Cond node integrates additional conditioning into an AI model through LoRA (Low-Rank Adaptation) models. It lets you apply up to two LoRA models to a base model, providing a flexible way to fine-tune and adapt the model's behavior to specific tasks or datasets. By adjusting each LoRA model's scaling factor, you control how strongly these adaptations affect the base model, enabling a tailored approach to model conditioning. This node is particularly useful for artists and developers who want to customize AI models for unique creative outputs or specialized applications.
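Conceptually, a LoRA adapter adds a low-rank update to each targeted weight matrix, and the scale factor controls how strongly that update is mixed in. The following is a minimal NumPy sketch of that merge with made-up shapes and adapter names; it illustrates the general LoRA mechanism, not the node's actual internal implementation:

```python
import numpy as np

def apply_lora(weight, lora_down, lora_up, scale):
    """Add a scaled low-rank update to a weight matrix: W' = W + scale * (up @ down)."""
    return weight + scale * (lora_up @ lora_down)

rng = np.random.default_rng(0)

# Hypothetical base weight (out_features x in_features) and two rank-4 adapters.
W = rng.standard_normal((16, 16))
down1, up1 = rng.standard_normal((4, 16)), rng.standard_normal((16, 4))
down2, up2 = rng.standard_normal((4, 16)), rng.standard_normal((16, 4))

# Apply both adapters with independent scales, mirroring scale1/scale2.
W_cond = apply_lora(apply_lora(W, down1, up1, scale=1.0), down2, up2, scale=0.5)
print(W_cond.shape)  # (16, 16)
```

Because each update is added independently, setting a scale to 0.0 is equivalent to selecting "none" for that LoRA slot.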

LucidFlux_SM_Cond Input Parameters:

model

This parameter represents the base model to which the LoRA models will be applied. It serves as the foundation for the conditioning process, and the selected LoRA models will modify its behavior based on their respective influences.

lora1

This parameter allows you to select the first LoRA model to apply to the base model. The options include "none" or any available LoRA models listed in the specified directory. Choosing "none" means no first LoRA model will be applied. This parameter is crucial for introducing specific adaptations to the base model.

lora2

Similar to lora1, this parameter lets you choose a second LoRA model to apply. It provides additional flexibility by allowing a second layer of adaptation, which can be used in conjunction with the first LoRA model or independently. The options are the same as for lora1.

scale1

This parameter is a floating-point value that determines the influence of the first LoRA model on the base model. It ranges from 0.0 to 1.0, with a default value of 1.0. A higher value increases the impact of the LoRA model, while a lower value reduces it, allowing for fine-tuning of the model's behavior.

scale2

This parameter functions like scale1 but applies to the second LoRA model. It also ranges from 0.0 to 1.0, with a default value of 1.0. Adjusting this value controls the extent to which the second LoRA model influences the base model, providing additional customization options.

LucidFlux_SM_Cond Output Parameters:

LucidFlux_SM

The output of this node is a conditioned model that incorporates the influences of the selected LoRA models, adjusted by their respective scaling factors. This output model is ready for use in further processing or deployment, offering enhanced capabilities tailored to specific tasks or creative goals.

LucidFlux_SM_Cond Usage Tips:

  • Experiment with different combinations of lora1 and lora2 to discover unique model behaviors and creative outputs.
  • Use the scale1 and scale2 parameters to fine-tune the influence of each LoRA model, starting with small adjustments to observe their effects.
  • If unsure about the impact of a LoRA model, begin with a scale value of 0.5 to balance the base model's original behavior with the LoRA's adaptations.
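The 0.5 starting point works because the scale multiplies the LoRA update linearly: a value of 0.5 yields weights exactly halfway between the base model and the fully adapted model. A tiny numeric illustration with hypothetical values (not the node's real weights):

```python
import numpy as np

base = np.array([[1.0, 2.0], [3.0, 4.0]])    # base weights
delta = np.array([[0.4, 0.0], [0.0, -0.4]])  # full LoRA update (scale = 1.0)

zero = base + 0.0 * delta  # scale = 0.0: base model unchanged
half = base + 0.5 * delta  # scale = 0.5: halfway blend
full = base + 1.0 * delta  # scale = 1.0: fully adapted

print(half)  # exactly midway between `zero` and `full`
```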

LucidFlux_SM_Cond Common Errors and Solutions:

"need LucidFlux"

  • Explanation: This error occurs when the base model is not provided or recognized.
  • Solution: Ensure that a valid base model is selected and properly loaded before executing the node.

"need swinir or diffbir_v2 model"

  • Explanation: This error indicates that the node could not find a SwinIR or DiffBIR v2 model, which is required for processing.
  • Solution: Verify that a SwinIR or DiffBIR v2 model is available and correctly specified in the input parameters.

LucidFlux_SM_Cond Related Nodes

Go back to the extension to check out more related nodes.
ComfyUI_LucidFlux
RunComfy
Copyright 2025 RunComfy. All Rights Reserved.