ComfyUI Node: SID LLM Local

Class Name
SID_LLM_Local

Category
SID Photography Toolkit/LLM Providers

Author
slahiri (Account age: 5,290 days)

Extension
ComfyUI-AI-Photography-Toolkit

Last Updated
2025-12-21

GitHub Stars
0.05K

How to Install ComfyUI-AI-Photography-Toolkit

Install this extension via the ComfyUI Manager by searching for ComfyUI-AI-Photography-Toolkit:
  1. Click the Manager button in the main menu.
  2. Select the Custom Nodes Manager button.
  3. Enter ComfyUI-AI-Photography-Toolkit in the search bar and click Install on the matching entry.
After installation, click the Restart button to restart ComfyUI, then manually refresh your browser to clear the cache and load the updated list of nodes.

SID LLM Local Description

The SID_LLM_Local node runs vision-language models locally, with automatic VRAM management and model and image caching.

SID LLM Local:

The SID_LLM_Local node is a core component of the ComfyUI-AI-Photography-Toolkit, providing a unified way to run vision-language models locally with no external API required. It supports several model families, including QwenVL, Florence-2, Moondream2, SmolVLM, and Phi-3.5-Vision, each with different strengths such as fast captioning, a small memory footprint, or high-quality output. Automatic VRAM management adjusts resource usage to the memory available on your GPU, multiple quantization options let you trade output quality for a smaller footprint, model caching speeds up repeated inference, and image caching makes repeated analyses of the same image efficient. This makes SID_LLM_Local a practical choice for running advanced vision-language models locally in AI photography workflows.
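
The description above can be made concrete with a small sketch. Assuming PyTorch, the function below illustrates what a VRAM-aware quantization policy could look like; the thresholds, function name, and quantization labels are made up for the example and are not the toolkit's actual code:

    import torch

    def pick_quantization(model_vram_gb: float) -> str:
        """Pick a quantization level from currently free VRAM (illustrative)."""
        if not torch.cuda.is_available():
            return "cpu"                  # no GPU: fall back to CPU inference
        free_bytes, _total = torch.cuda.mem_get_info()
        free_gb = free_bytes / 1024 ** 3
        if free_gb >= model_vram_gb:
            return "fp16"                 # full half precision fits comfortably
        if free_gb >= model_vram_gb / 2:
            return "int8"                 # 8-bit roughly halves the footprint
        return "int4"                     # 4-bit as a last resort on small GPUs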

SID LLM Local Input Parameters:

LLM_MODEL_Type

The LLM_MODEL_Type input parameter is a custom type created for ComfyUI, representing the specific vision-language model to be used by the node. This parameter allows you to select from the supported model families, such as QwenVL, Florence-2, Moondream2, SmolVLM, and Phi-3.5-Vision. Each model family offers different capabilities, and the choice of model can significantly impact the node's execution and results. For instance, selecting a model with a larger parameter size may provide higher quality outputs but require more VRAM. The parameter does not have explicit minimum, maximum, or default values, as it depends on the available models and your specific requirements.
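
For context, a ComfyUI node declares its sockets in an INPUT_TYPES classmethod, and a custom type such as this one is identified by a plain string that both ends of a connection must share. The fragment below is an illustrative assumption (the type string LLM_MODEL_TYPE and socket name are guesses), not the toolkit's real declaration:

    class SID_LLM_Local_Sketch:
        @classmethod
        def INPUT_TYPES(cls):
            # Custom ComfyUI types are plain strings; any node that outputs
            # the same string can be wired into this socket.
            return {
                "required": {
                    "llm_model": ("LLM_MODEL_TYPE",),
                }
            }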

SID LLM Local Output Parameters:

Model Output

The Model Output parameter provides the results generated by the selected vision-language model. This output can include image captions, descriptions, or other relevant data depending on the model's capabilities and the input provided. The importance of this output lies in its ability to deliver high-quality, contextually relevant information that can be used for various AI photography tasks. Understanding the output requires familiarity with the specific model's strengths, such as fast captioning or high-quality vision analysis, which can guide you in interpreting the results effectively.
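
On the receiving end, a downstream node consumes Model Output by declaring an input with the same type string. A hedged sketch with assumed names (ExampleCaptionNode and LLM_MODEL_TYPE are illustrative, not part of the toolkit):

    class ExampleCaptionNode:
        @classmethod
        def INPUT_TYPES(cls):
            return {
                "required": {
                    "model": ("LLM_MODEL_TYPE",),  # socket fed by Model Output
                    "image": ("IMAGE",),           # standard ComfyUI image tensor
                }
            }

        RETURN_TYPES = ("STRING",)
        FUNCTION = "caption"
        CATEGORY = "SID Photography Toolkit/LLM Providers"

        def caption(self, model, image):
            # A real node would run the vision-language model here; this
            # placeholder only documents the data flow.
            return ("example caption",)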

SID LLM Local Usage Tips:

  • To optimize performance, choose a model that aligns with your VRAM capacity. The node's automatic VRAM management will help, but selecting a model that fits your hardware can enhance efficiency.
  • Utilize model caching for tasks that require repeated analyses, as this can significantly reduce inference time and improve workflow speed; a minimal sketch of the pattern follows this list.
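
A minimal sketch of that caching pattern, assuming a module-level dictionary keyed by model id (the names are illustrative, not the toolkit's implementation):

    _MODEL_CACHE: dict = {}

    def get_model(model_id: str, loader):
        """Return a cached model, loading it only on first use."""
        if model_id not in _MODEL_CACHE:
            _MODEL_CACHE[model_id] = loader(model_id)  # expensive load runs once
        return _MODEL_CACHE[model_id]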

SID LLM Local Common Errors and Solutions:

"Insufficient VRAM for selected model"

  • Explanation: This error occurs when the selected model requires more VRAM than is available on your system.
  • Solution: Select a smaller model, or free GPU memory by closing other applications or processes that are consuming VRAM; the sketch below shows one way to release cached allocations from Python.
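
If the error appears mid-session, a generic PyTorch recovery step is to release cached allocations before retrying with a smaller model. This is a general pattern, not the toolkit's own error handler:

    import gc
    import torch

    def free_vram():
        gc.collect()               # drop unreferenced model objects first
        torch.cuda.empty_cache()   # return cached blocks to the driver
        free_b, total_b = torch.cuda.mem_get_info()
        print(f"free: {free_b / 1024**3:.1f} GiB of {total_b / 1024**3:.1f} GiB")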

"Model not found in local cache"

  • Explanation: This error indicates that the selected model is not available in the local cache, possibly due to a misconfiguration or missing files.
  • Solution: Ensure that the model files are present in the designated cache directory and that the model name is correctly specified in the input parameters; the sketch below shows one way to fetch missing files.
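
A hedged sketch of verifying and fetching model files with the Hugging Face Hub client; the cache directory and example repo id are placeholders, so check the toolkit's documentation for the paths it actually uses:

    from pathlib import Path
    from huggingface_hub import snapshot_download

    def ensure_model(repo_id: str, cache_dir: str = "models/llm") -> Path:
        """Download a model snapshot if it is not already present locally."""
        local = Path(cache_dir) / repo_id.split("/")[-1]
        if not local.exists():
            snapshot_download(repo_id=repo_id, local_dir=str(local))
        return local

    # Hypothetical usage; the repo id is one public example checkpoint:
    # ensure_model("vikhyatk/moondream2", cache_dir="ComfyUI/models/llm")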

SID LLM Local Related Nodes

Go back to the ComfyUI-AI-Photography-Toolkit extension page to check out more related nodes.