ComfyUI Node: LM Studio (Unified)

Class Name

Expo Lmstudio Unified

Category
ComfyExpo/LMStudio
Author
Expo (Account age: 5215 days)
Extension
LM Studio Image to Text Node for ComfyUI
Last Updated
2026-03-11
Github Stars
0.05K

How to Install LM Studio Image to Text Node for ComfyUI

Install this extension via the ComfyUI Manager by searching for LM Studio Image to Text Node for ComfyUI:
  • 1. Click the Manager button in the main menu.
  • 2. Select the Custom Nodes Manager button.
  • 3. Enter LM Studio Image to Text Node for ComfyUI in the search bar.
After installation, click the Restart button to restart ComfyUI. Then, manually refresh your browser to clear the cache and access the updated list of nodes.


LM Studio (Unified) Description

Expo Lmstudio Unified integrates local LM Studio models into ComfyUI for diverse text operations.

LM Studio (Unified):

Expo Lmstudio Unified is a versatile node that connects ComfyUI workflows to language models served locally by LM Studio. It acts as a single, unified interface for a range of text-based operations, including text generation, image-to-text conversion, and structured output generation. By consolidating these language-processing functions behind one node, it simplifies the use of local models for both creative and analytical tasks, making it a practical tool for AI artists and developers who want to work with local LLMs rather than cloud-hosted APIs.
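Conceptually, the node's inputs map onto LM Studio's OpenAI-compatible chat endpoint (served at http://localhost:1234/v1 by default). A minimal sketch of how such a request body could be assembled from the parameters described below; the function name and exact field mapping are illustrative, not the node's actual code:

```python
def build_chat_request(model_key, system_prompt, prompt,
                       temperature=0.7, max_tokens=512, seed=0):
    """Assemble the JSON body for a /v1/chat/completions call
    from the node's input parameters (illustrative mapping)."""
    return {
        "model": model_key,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": prompt},
        ],
        "temperature": temperature,
        "max_tokens": max_tokens,
        "seed": seed,
    }

# Example: a captioning-style request against a hypothetical local model key.
body = build_chat_request("llava-v1.6", "You are a captioner.",
                          "Describe the image.")
```

The same body could then be POSTed to the local server with any HTTP client; only the server address and model key change between setups.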

LM Studio (Unified) Input Parameters:

model_key_to_use

This parameter specifies the key of the model you wish to use. It determines which language model will be employed for processing your requests. If not provided, the default model is used. This choice can significantly impact the quality and style of the generated text, as different models may have varying capabilities and training data.

auto_unload

This boolean parameter indicates whether the model should be automatically unloaded after use. When set to "True," it helps manage system resources by unloading the model after a specified delay, which can be set using the unload_delay parameter. This is particularly useful in environments with limited memory resources.

unload_delay

This parameter sets the time-to-live (TTL) for the model in seconds when auto_unload is enabled. It defines how long the model remains loaded in memory after processing a request. A longer delay might be beneficial if you anticipate frequent requests, while a shorter delay can help conserve resources.
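Assuming the server honors a per-request time-to-live (LM Studio supports TTL-based auto-unloading for just-in-time loaded models), the auto_unload and unload_delay pair could be applied to a request like this sketch; the "ttl" field name is an assumption, not this node's verified implementation:

```python
def apply_unload_policy(request_body, auto_unload, unload_delay):
    """If auto_unload is enabled, ask the server to keep the model
    loaded only for unload_delay seconds after the last request.
    The "ttl" field name is illustrative."""
    if auto_unload:
        request_body["ttl"] = unload_delay
    return request_body

# With auto_unload on, the model is marked for unloading after 300 seconds.
req = apply_unload_policy({"model": "llava-v1.6"}, True, 300)
```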

system_prompt

The system prompt is a string that sets the initial context or instructions for the language model. It guides the model's responses and can be used to establish a specific tone or focus for the generated content. Crafting an effective system prompt is crucial for obtaining relevant and coherent outputs.

prompt

This is the main input text or query that you provide to the language model. It serves as the basis for the model's response and can be a question, a statement, or any text requiring further elaboration or transformation by the model.

temperature

The temperature parameter controls the randomness of the model's output. A higher temperature results in more diverse and creative responses, while a lower temperature produces more deterministic and focused outputs. Adjusting this parameter allows you to balance creativity and precision in the generated text.

maxTokens

This parameter defines the maximum number of tokens (words or word pieces) that the model can generate in response to your prompt. It helps manage the length of the output, ensuring that it remains concise or allowing for more detailed responses as needed.

seed

The seed parameter is used to initialize the random number generator for the model's output. By setting a specific seed, you can ensure that the model produces the same output for the same input across different runs, which is useful for reproducibility in experiments.
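The effect is the same as seeding any pseudo-random generator: an identical seed with identical input yields identical draws. A small plain-Python illustration of the principle:

```python
import random

def sample_with_seed(seed, n=3):
    """Draw n pseudo-random digits from a generator initialized
    with `seed`; the same seed always produces the same sequence."""
    rng = random.Random(seed)
    return [rng.randint(0, 9) for _ in range(n)]
```

Calling `sample_with_seed(42)` twice returns the same list both times, which mirrors how fixing the node's seed makes a model's sampling repeatable across runs.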

timeout_seconds

This parameter sets the maximum time allowed for the model to generate a response. If the model does not respond within this timeframe, a timeout error occurs. This helps prevent indefinite waiting periods and ensures timely responses.
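One common way to enforce such a limit is to run the blocking request in a worker thread and bound the wait, as in this illustrative sketch (not the node's actual code):

```python
import time
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout

def generate_with_timeout(call, timeout_seconds):
    """Run a blocking model call in a worker thread and raise
    TimeoutError if it does not finish within the limit."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(call)
        try:
            return future.result(timeout=timeout_seconds)
        except FutureTimeout:
            raise TimeoutError(
                f"LM Studio model response timed out after "
                f"{timeout_seconds} seconds."
            )
```

A fast call returns normally; a call that overruns the limit surfaces as a TimeoutError with a message matching the error documented below.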

debug

A boolean parameter that, when enabled, provides detailed logging information about the model's processing steps. This can be useful for troubleshooting and understanding the model's behavior, especially during development and testing phases.

LM Studio (Unified) Output Parameters:

result

The result parameter contains the content generated by the language model in response to the provided prompt. It is the primary output of the node and reflects the model's interpretation and processing of the input text. The quality and relevance of this output depend on the input parameters and the model's capabilities.

stats_info

This output provides statistical information about the model's response, including the number of tokens generated, the time taken to produce the first token, and the reason for stopping the generation. These metrics offer insights into the model's performance and can help optimize future interactions.
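A sketch of consuming such a stats payload downstream; the key names are assumptions based on the fields described above, not a documented schema:

```python
def summarize_stats(stats):
    """Format the stats_info fields into one human-readable line.
    Key names ("tokens_generated", "time_to_first_token",
    "stop_reason") are illustrative."""
    return (f"{stats.get('tokens_generated', '?')} tokens, "
            f"first token in {stats.get('time_to_first_token', '?')}s, "
            f"stopped: {stats.get('stop_reason', '?')}")

line = summarize_stats({"tokens_generated": 128,
                        "time_to_first_token": 0.42,
                        "stop_reason": "eosFound"})
```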

LM Studio (Unified) Usage Tips:

  • To achieve more creative outputs, experiment with higher temperature settings, but if you need precise and consistent results, lower the temperature.
  • Use the system_prompt to set the context effectively, especially when working on tasks that require a specific tone or style.
  • Adjust the maxTokens parameter based on the complexity of the task to ensure the output is neither too brief nor excessively long.

LM Studio (Unified) Common Errors and Solutions:

Error: LM Studio model response timed out after <timeout_seconds> seconds.

  • Explanation: This error occurs when the model takes longer than the specified timeout period to generate a response.
  • Solution: Consider increasing the timeout_seconds parameter to allow more time for the model to process complex requests, or simplify the input prompt to reduce processing time.

Error: Unable to slice content

  • Explanation: This error might occur when attempting to access a portion of the model's response that is not available or improperly formatted.
  • Solution: Ensure that the model's output is correctly formatted and that the maxTokens parameter is set appropriately to capture the desired amount of content.
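A defensive pattern that avoids this class of error is to validate the response content before slicing it, as in this illustrative helper:

```python
def safe_excerpt(response, limit=200):
    """Return up to `limit` characters of the response content,
    guarding against a missing response or non-string content
    that would make slicing fail."""
    content = None
    if isinstance(response, dict):
        content = response.get("content")
    if not isinstance(content, str):
        return ""
    return content[:limit]
```

Malformed or empty responses then degrade to an empty string instead of raising mid-workflow.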

LM Studio (Unified) Related Nodes

Go back to the LM Studio Image to Text Node for ComfyUI extension to check out more related nodes.
Copyright 2025 RunComfy. All Rights Reserved.
