
ComfyUI Node: MIA: Load Model

Class Name: MIALoadModel
Category: UniRig/MIA
Author: PozzettiAndrea (account age: 2,326 days)
Extension: ComfyUI-UniRig
Last Updated: 2026-03-04
GitHub Stars: 0.36K

How to Install ComfyUI-UniRig

Install this extension via the ComfyUI Manager by searching for ComfyUI-UniRig:
  1. Click the Manager button in the main menu.
  2. Select the Custom Nodes Manager button.
  3. Enter ComfyUI-UniRig in the search bar.
After installation, click the Restart button to restart ComfyUI. Then, manually refresh your browser to clear the cache and access the updated list of nodes.


MIA: Load Model Description

Facilitates fast loading of MIA models for humanoid rigging, optimizing AI art workflows.

MIA: Load Model:

The MIALoadModel node loads Make-It-Animatable (MIA) models, which are optimized for fast humanoid rigging. It is particularly useful for AI artists working with humanoid characters, as it streamlines downloading and managing models compatible with Mixamo skeletons. Its primary advantage is speed: models typically load in under a second, making it well suited to projects that require quick iteration. Models are downloaded from HuggingFace automatically on first use, which simplifies setup, and the node manages precision and attention-backend configuration to get the best performance from the user's hardware. Overall, MIALoadModel lets artists integrate advanced rigging capabilities into their workflows with minimal technical overhead.

MIA: Load Model Input Parameters:

precision

The precision parameter determines the numerical precision used by the model during computations. It offers options such as "auto", "bf16", "fp16", and "fp32". The "auto" setting automatically selects the best precision for your GPU, with "bf16" being optimal for Ampere+ architectures, "fp16" for Volta/Turing, and "fp32" for older GPUs. Choosing the right precision can significantly impact the model's performance and memory usage, with lower precision generally offering faster computations and reduced memory requirements. The default value is "auto", which is recommended for most users to ensure optimal performance without manual configuration.
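The "auto" selection described above can be sketched as a small helper. The function name and structure are assumptions for illustration; in practice the compute capability would come from `torch.cuda.get_device_capability()` (bf16 needs compute capability 8.0+, i.e. Ampere; fp16 tensor cores are available from 7.0, i.e. Volta/Turing):

```python
def choose_precision(major: int, minor: int) -> str:
    """Pick a dtype string from a CUDA compute capability, mirroring the
    "auto" behavior described above. Hypothetical helper for illustration."""
    if major >= 8:        # Ampere and newer: native bfloat16 support
        return "bf16"
    if major >= 7:        # Volta/Turing: fast fp16 tensor cores
        return "fp16"
    return "fp32"         # Older GPUs: fall back to full precision

print(choose_precision(8, 6))  # e.g. an RTX 3090 (8.6) -> "bf16"
```

Lower-precision dtypes roughly halve memory use relative to fp32 and are faster on supported hardware, which is why "auto" prefers them when available.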

attn_backend

The attn_backend parameter specifies the attention backend to be used by the model. Available options include "auto", "flash_attn", and "sdpa". The "auto" setting selects the best available backend, prioritizing "flash_attn" if the flash-attn package is installed, as it typically offers superior performance. This parameter affects how efficiently the model processes attention mechanisms, which can influence both speed and accuracy. The default value is "auto", allowing the node to automatically choose the most suitable backend based on the user's system capabilities.
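The fallback logic for "auto" can be sketched as follows. The helper name is hypothetical, but probing for the flash_attn package with an import check reflects how this kind of auto-detection is commonly implemented:

```python
import importlib.util

def choose_attn_backend(requested: str = "auto") -> str:
    """Resolve the attention backend, preferring flash_attn when the package
    is importable and otherwise falling back to PyTorch's built-in
    scaled_dot_product_attention ("sdpa"). Hypothetical helper illustrating
    the "auto" behavior described above."""
    if requested != "auto":
        return requested                      # honor an explicit choice
    if importlib.util.find_spec("flash_attn") is not None:
        return "flash_attn"                   # faster fused kernels
    return "sdpa"                             # always available in PyTorch 2.x

print(choose_attn_backend())  # "flash_attn" if installed, otherwise "sdpa"
```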

MIA: Load Model Output Parameters:

model

The model output parameter represents the loaded MIA model, ready for use in rigging tasks. This output is crucial as it provides the user with a fully configured model that can be directly applied to humanoid characters, facilitating the creation of Mixamo-compatible skeletons. The model is optimized for performance and precision based on the input parameters, ensuring that it meets the specific needs of the user's project. By providing a ready-to-use model, this output streamlines the workflow for AI artists, allowing them to focus on creative tasks rather than technical setup.

MIA: Load Model Usage Tips:

  • To ensure optimal performance, use the "auto" setting for both precision and attn_backend parameters, as this allows the node to automatically configure the best settings based on your hardware.
  • If you experience memory constraints, consider manually setting the precision to "fp16" or "bf16" to reduce memory usage, especially on compatible GPUs.
  • Ensure that the flash-attn package is installed if you wish to leverage the "flash_attn" backend for potentially improved attention mechanism performance.

MIA: Load Model Common Errors and Solutions:

Failed to download MIA models

  • Explanation: This error occurs when the node is unable to download the required MIA models from HuggingFace.
  • Solution: Check your internet connection and ensure that there are no firewall or network restrictions preventing the download. Additionally, verify that you have sufficient disk space for the model files.

Using cached models

  • Explanation: This message indicates that the node is using previously downloaded models from the cache.
  • Solution: No action is required. This is an informational message indicating that the node is optimizing performance by reusing existing models. If you wish to refresh the models, you may need to clear the cache manually.
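By default, HuggingFace downloads are cached under ~/.cache/huggingface (overridable via the HF_HOME environment variable). A minimal sketch for locating the cache before clearing it, assuming the standard huggingface_hub cache layout:

```shell
# Resolve the HuggingFace cache directory (HF_HOME overrides the default).
CACHE_DIR="${HF_HOME:-$HOME/.cache/huggingface}/hub"
echo "Model cache lives in: $CACHE_DIR"
# Inspect its contents before deleting anything:
ls "$CACHE_DIR" 2>/dev/null || echo "cache directory not present"
```

Deleting individual model folders inside this directory forces a fresh download on the node's next run.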

MIA: Load Model Related Nodes

Go back to the extension to check out more related nodes.
ComfyUI-UniRig
RunComfy
Copyright 2025 RunComfy. All Rights Reserved.
