ComfyUI Node: FL HeartMuLa Model Loader

Class Name

FL_HeartMuLa_ModelLoader

Category
🎵FL HeartMuLa
Author
filliptm (Account age: 0 days)
Extension
ComfyUI_FL-HeartMuLa
Last Updated
2026-03-20
GitHub Stars
0.12K

How to Install ComfyUI_FL-HeartMuLa

Install this extension via the ComfyUI Manager by searching for ComfyUI_FL-HeartMuLa
  • 1. Click the Manager button in the main menu
  • 2. Select Custom Nodes Manager button
  • 3. Enter ComfyUI_FL-HeartMuLa in the search bar
After installation, click the Restart button to restart ComfyUI. Then, manually refresh your browser to clear the cache and access the updated list of nodes.

Visit ComfyUI Online for ready-to-use ComfyUI environment

  • Free trial available
  • 16GB VRAM to 80GB VRAM GPU machines
  • 400+ preloaded models/nodes
  • Freedom to upload custom models/nodes
  • 200+ ready-to-run workflows
  • 100% private workspace with up to 200GB storage
  • Dedicated Support

Run ComfyUI Online

FL HeartMuLa Model Loader Description

Facilitates HeartMuLa AI music model loading with configurable options for precision and memory.

FL HeartMuLa Model Loader:

The FL_HeartMuLa_ModelLoader node loads the HeartMuLa AI music generation model, letting you use the model without dealing with the technical details of model management. It exposes configurable options for the model variant, memory mode, and precision, so loading can be tailored to your needs and your system's capabilities, and it can apply 4-bit quantization to reduce VRAM usage on constrained hardware. By abstracting the complexities of model loading, the node lets you focus on the creative side of music generation while balancing output quality against performance.

FL HeartMuLa Model Loader Input Parameters:

model_version

The model_version parameter specifies which variant of the HeartMuLa model you wish to load. Currently, the available option is the "3B" model, which offers a good balance of quality and speed. The "7B" model is mentioned but not yet released, so attempting to load it will result in an error. This parameter is crucial as it determines the model's capabilities and performance characteristics.

memory_mode

The memory_mode parameter allows you to select the memory usage mode for the model. Options include "auto", "normal", "low", and "ultra". The "auto" mode automatically detects the best memory mode based on your system's available VRAM and the model's requirements. Choosing the right memory mode can significantly impact the model's performance and VRAM usage, especially on systems with limited resources.
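The "auto" behavior can be pictured as a simple check of free VRAM against thresholds. The helper and cutoff values below are illustrative assumptions for this sketch, not the node's actual source or its real thresholds:

```python
def choose_memory_mode(free_vram_gb: float) -> str:
    """Pick a memory mode from free VRAM in GB.

    The thresholds here are illustrative assumptions,
    not the node's actual cutoffs.
    """
    if free_vram_gb >= 16:
        return "normal"  # keep the full model resident on the GPU
    if free_vram_gb >= 10:
        return "low"     # offload parts of the pipeline to save VRAM
    return "ultra"       # aggressive offloading for very tight memory
```

On a 24 GB card this sketch would return "normal"; on an 8 GB card, "ultra".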

precision

The precision parameter sets the numeric precision used for the model's computations. The available options are not enumerated in this documentation, but precision settings generally trade accuracy for speed and memory: higher-precision modes produce more accurate results at the cost of increased VRAM usage and computation time.
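A precision option typically resolves to a tensor data type before the model is loaded. The option names and mapping below are hypothetical examples (the node's real choices are not documented here); dtypes are shown as strings to keep the sketch self-contained:

```python
# Hypothetical mapping from a precision label to a tensor dtype name.
# The node's actual option names may differ.
PRECISION_DTYPES = {
    "fp32": "float32",   # highest accuracy, most VRAM
    "fp16": "float16",   # half precision, widely supported
    "bf16": "bfloat16",  # half precision with fp32-like numeric range
}

def resolve_dtype(precision: str) -> str:
    # Fall back to full precision for an unrecognized label.
    return PRECISION_DTYPES.get(precision, "float32")
```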

use_4bit

The use_4bit parameter is a boolean option that, when enabled, allows the model to use 4-bit quantization. This can reduce VRAM usage, making it a valuable option for systems with limited memory resources. However, it may also impact the model's performance and output quality, so it should be used judiciously based on your system's capabilities and the desired output quality.
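A back-of-envelope calculation shows why 4-bit quantization matters: weight memory is roughly parameters × bits ÷ 8. The figures below cover weights only (activations and quantization overhead are not counted) and the 3B parameter count is taken at face value:

```python
def weight_memory_gb(n_params: float, bits: int) -> float:
    """Approximate weight memory in GB (1 GB = 1e9 bytes)."""
    return n_params * bits / 8 / 1e9

# For a 3B-parameter model:
#   16-bit weights -> ~6.0 GB
#    4-bit weights -> ~1.5 GB
# i.e. a roughly 4x reduction in weight VRAM.
```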

force_reload

The force_reload parameter is a boolean option that, when set to true, forces the model to reload even if it is already cached. This can be useful if you suspect that the cached model is outdated or if you have made changes to the model files that need to be reflected in the loaded model.
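The cache-or-reload logic can be sketched as a dictionary keyed by the load configuration. The function and cache names below are placeholders for illustration, not the node's actual implementation:

```python
_MODEL_CACHE: dict = {}

def _build_model(config):
    # Placeholder for the expensive load from disk / HuggingFace.
    return {"config": config}

def load_model(config: tuple, force_reload: bool = False):
    """Return a cached model unless force_reload is set.

    `config` is any hashable key, e.g. (version, precision, use_4bit).
    """
    if force_reload or config not in _MODEL_CACHE:
        _MODEL_CACHE[config] = _build_model(config)
    return _MODEL_CACHE[config]
```

With force_reload left at False, repeated calls with the same configuration return the same cached object; setting it to True rebuilds and replaces the cached entry.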

FL HeartMuLa Model Loader Output Parameters:

model

The model output parameter provides the loaded HeartMuLa model information as a dictionary. This includes details such as the model's pipeline, version, device, data type, sample rate, maximum duration, and whether 4-bit quantization is used. This output is essential for subsequent nodes in the music generation pipeline, as it contains all the necessary information to utilize the model effectively.
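Based on the fields listed above, the output dictionary might look like the following. The key names follow this description, but the concrete values are examples only and are not guaranteed by the node:

```python
# Illustrative shape of the `model` output; values are example
# placeholders, not actual output from the node.
model = {
    "pipeline": None,       # the loaded inference pipeline object
    "version": "3B",        # which model variant was loaded
    "device": "cuda",       # device the model runs on
    "dtype": "float16",     # numeric precision of the weights
    "sample_rate": 44100,   # example value; check the real output
    "max_duration": 240,    # example value, in seconds
    "use_4bit": False,      # whether 4-bit quantization was applied
}
```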

FL HeartMuLa Model Loader Usage Tips:

  • To optimize performance on systems with limited VRAM, consider using the use_4bit option and setting the memory_mode to "low" or "ultra". This can help reduce memory usage while still allowing you to generate music.
  • If you are unsure about which memory mode to use, set memory_mode to "auto" to let the node automatically select the best option based on your system's available VRAM.

FL HeartMuLa Model Loader Common Errors and Solutions:

Model files not found!

  • Explanation: This error occurs when the model files are not found in the expected location on your system.
  • Solution: The model will be automatically downloaded from HuggingFace. Ensure that your system has an active internet connection and sufficient storage space in the ComfyUI/models/heartmula/ directory.

7B Model Coming Soon!

  • Explanation: This error is raised when attempting to load the "7B" model, which is not yet released.
  • Solution: Use the "3B" model instead, as it is currently available and offers excellent quality. Keep an eye out for updates regarding the release of the "7B" model.

FL HeartMuLa Model Loader Related Nodes

Go back to the extension to check out more related nodes.
ComfyUI_FL-HeartMuLa
RunComfy
Copyright 2025 RunComfy. All Rights Reserved.

RunComfy is the premier ComfyUI platform, offering ComfyUI online environment and services, along with ComfyUI workflows featuring stunning visuals. RunComfy also provides AI Models, enabling artists to harness the latest AI tools to create incredible art.