Loads neural network models in a variety of formats (State Dict, TorchScript, PyTorch Model, ONNX, TorchScript Mobile, and Quantized) through a unified interface, with an option to load the associated optimizer state.
The NntLoadModel node is designed to facilitate the loading of various types of neural network models, making it an essential tool for AI artists who work with different model formats. This node supports loading models in several formats, including State Dict, TorchScript, PyTorch Model, ONNX, TorchScript Mobile, and Quantized models. By providing a unified interface for loading these diverse model types, NntLoadModel simplifies the process of integrating pre-trained models into your workflow. It also offers the option to load associated optimizers, which can be particularly beneficial for those looking to continue training or fine-tuning models. The node's ability to handle multiple formats ensures flexibility and adaptability, allowing you to work with a wide range of models without needing to worry about the underlying technical details.
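To make the format distinctions concrete, here is a minimal sketch of how such a format-dispatching loader could work in PyTorch. The `load_model` helper and its branching logic are assumptions for illustration, not the actual NntLoadModel implementation; the key point is that a State Dict holds only weights (so an instantiated model is required), while TorchScript and full pickled models restore structure and weights together.

```python
import torch

def load_model(path, load_format, model=None):
    # Hypothetical helper illustrating format dispatch; not the node's real code.
    if load_format == "State Dict":
        # A state dict contains only weights; the caller must supply a model
        # instance with a matching architecture.
        checkpoint = torch.load(path, map_location="cpu")
        state = checkpoint.get("model_state_dict", checkpoint)
        model.load_state_dict(state)
        return model
    elif load_format in ("TorchScript", "TorchScript Mobile"):
        # TorchScript archives bundle code and weights together.
        return torch.jit.load(path, map_location="cpu")
    elif load_format == "PyTorch Model":
        # A fully pickled module: structure and weights restored in one call.
        return torch.load(path, map_location="cpu", weights_only=False)
    else:
        raise ValueError(f"Unsupported load_format: {load_format}")
```

ONNX and Quantized branches are omitted here because they depend on additional runtimes (e.g. onnxruntime) outside the PyTorch loading path shown.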
The filename parameter specifies the name of the file containing the model you wish to load. This parameter is crucial as it directs the node to the correct file within the specified directory. It has no numeric constraints; it must simply be a valid string representing a file name.
The directory parameter indicates the location where the model file is stored. This is important for the node to locate and access the model file. Like filename, this parameter should be a valid directory path string.
The load_format parameter determines the format in which the model is saved. It supports the options "State Dict," "TorchScript," "PyTorch Model," "ONNX," "TorchScript Mobile," and "Quantized." This parameter is essential as it guides the node on how to correctly interpret and load the model file. There is no default value, and you must specify one of the supported formats.
The load_optimizer parameter is a boolean option that specifies whether to load the optimizer state along with the model. If set to "True," the node will attempt to load the optimizer state if it is available in the model file. This can be useful for resuming training from a saved state. The default value is typically "False."
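The reason optimizer state matters for resuming training is that optimizers such as SGD with momentum or Adam keep per-parameter buffers that are lost if only the weights are saved. The following sketch shows the conventional PyTorch checkpoint round trip, assuming the common `model_state_dict` / `optimizer_state_dict` key layout; restoring both corresponds to running the node with load_optimizer set to "True".

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# Take one training step so the optimizer accumulates state
# (momentum buffers) worth saving.
loss = model(torch.randn(8, 4)).sum()
loss.backward()
optimizer.step()

# Save weights and optimizer state together under conventional keys.
torch.save(
    {"model_state_dict": model.state_dict(),
     "optimizer_state_dict": optimizer.state_dict()},
    "checkpoint.pt",
)

# Restore both; the second load is what load_optimizer="True" enables.
checkpoint = torch.load("checkpoint.pt", map_location="cpu")
model.load_state_dict(checkpoint["model_state_dict"])
optimizer.load_state_dict(checkpoint["optimizer_state_dict"])
```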
The model output parameter represents the loaded neural network model. This output is crucial as it provides you with the model object that can be used for inference, further training, or analysis. The model's structure and weights are restored based on the specified load format.
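A typical downstream use of the model output is inference. The snippet below is a generic PyTorch sketch (using a stand-in module, since the actual loaded model depends on your file): switch to evaluation mode and run a forward pass without tracking gradients.

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # stand-in for the model output of the node
model.eval()             # disable training-only behavior (dropout, batchnorm)
with torch.no_grad():    # skip gradient tracking for plain inference
    prediction = model(torch.randn(1, 4))
print(prediction.shape)  # torch.Size([1, 2])
```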
The status_msg output parameter provides a message indicating the success or failure of the model loading process. It offers insights into what was loaded and from where, helping you verify that the correct model and optimizer (if applicable) have been loaded successfully.
Ensure the filename and directory parameters are correctly specified to avoid file not found errors. Select the correct load_format based on the model file you are working with to ensure successful loading. Set load_optimizer to "True" to load the optimizer state, if available; for the State Dict format, the file is expected to store the model weights under the model_state_dict key.

If the model file at <file_path> cannot be found, verify that the filename and directory parameters point to an existing file. If no optimizer state is loaded from <file_path>, either load_optimizer was set to "False" or the optimizer state was not available in the file; check the load_optimizer setting and ensure the optimizer state is present in the file.
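When a load fails, a quick way to diagnose the problem is to inspect the checkpoint file's top-level structure before wiring it into the node. The `inspect_checkpoint` helper below is a hypothetical diagnostic sketch, assuming a standard torch-saved file:

```python
import torch

def inspect_checkpoint(path):
    # Hypothetical diagnostic helper: report what a checkpoint contains,
    # to spot a missing model_state_dict or optimizer_state_dict key.
    checkpoint = torch.load(path, map_location="cpu")
    if not isinstance(checkpoint, dict):
        # e.g. a fully pickled model or a bare tensor rather than a dict
        return {"type": type(checkpoint).__name__, "keys": []}
    return {
        "type": "dict",
        "keys": sorted(checkpoint.keys()),
        "has_model_state_dict": "model_state_dict" in checkpoint,
        "has_optimizer_state_dict": "optimizer_state_dict" in checkpoint,
    }
```

If has_optimizer_state_dict comes back False, loading with load_optimizer set to "True" cannot restore an optimizer, regardless of the node's settings.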