
ComfyUI Node: Call LLM Basic

Class Name

Call LLM Basic

Category
LLM
Author
Daniel Lewis (Account age: 4,017 days)
Extension
ComfyUI-Llama
Last Updated
2024-06-29
Github Stars
0.07K

How to Install ComfyUI-Llama

Install this extension via the ComfyUI Manager by searching for ComfyUI-Llama
  • 1. Click the Manager button in the main menu
  • 2. Select Custom Nodes Manager button
  • 3. Enter ComfyUI-Llama in the search bar
After installation, click the Restart button to restart ComfyUI. Then, manually refresh your browser to clear the cache and access the updated list of nodes.


Call LLM Basic Description

Facilitates text generation using LLMs for creative writing and dialogue with customizable parameters.

Call LLM Basic:

The Call LLM Basic node generates text with a language model (LLM) from a sequence of input tokens. It is the core node for text-generation tasks such as creative writing, dialogue systems, or any application that benefits from coherent, contextually relevant, human-like text. The node exposes a range of sampling parameters so you can fine-tune the generation process to your specific needs, serving as a straightforward interface between your input data and the language model inside a workflow.
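
Nodes in this extension are built on llama-cpp-python, so the inputs described below map naturally onto that library's `Llama.generate()` call. The wrapper below is a hypothetical sketch of that mapping, not the node's actual implementation:

```python
# Hypothetical sketch: forward the node's inputs to llama-cpp-python's
# Llama.generate(). Parameter names follow that library's API; the
# node's real wiring may differ.
def call_llm_basic(llm, tokens, top_k=40, top_p=0.95, temp=0.8,
                   repeat_penalty=1.1, reset=True, **extra):
    """Return the token generator produced by the model."""
    return llm.generate(tokens, top_k=top_k, top_p=top_p, temp=temp,
                        repeat_penalty=repeat_penalty, reset=reset, **extra)
```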

Call LLM Basic Input Parameters:

LLM

This parameter represents the language model you are interacting with. It is crucial as it determines the model's behavior and the quality of the generated text. The LLM parameter is typically set to a pre-trained model that has been loaded into the system.

tokens

A sequence of integers representing the input tokens for the language model. These tokens are the starting point for text generation, and their selection significantly impacts the resulting output. The tokens should be chosen based on the context or prompt you wish to expand upon.

top_k

An integer that limits the number of highest probability vocabulary tokens considered during generation. A higher value allows for more diverse outputs, while a lower value makes the output more deterministic. The default is often set to 40.

top_p

A float representing the cumulative probability threshold for token selection. It ensures that only the most probable tokens are considered, balancing diversity and coherence. The default value is typically 0.95.
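
To see how top_k and top_p interact, here is a small, self-contained NumPy illustration of the filtering step (illustrative only, not the node's internal code):

```python
import numpy as np

def filter_top_k_top_p(probs, top_k=40, top_p=0.95):
    """Keep the top_k most probable tokens, then the smallest prefix of
    them whose cumulative probability reaches top_p; renormalize."""
    order = np.argsort(probs)[::-1]                # most probable first
    keep = order[:top_k]
    cum = np.cumsum(probs[keep])
    keep = keep[:np.searchsorted(cum, top_p) + 1]  # smallest covering nucleus
    out = np.zeros_like(probs)
    out[keep] = probs[keep]
    return out / out.sum()

probs = np.array([0.5, 0.3, 0.15, 0.05])
print(filter_top_k_top_p(probs, top_k=3, top_p=0.7))  # keeps only the top two tokens
```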

min_p

A float that sets the minimum probability threshold for token selection. This parameter helps in filtering out low-probability tokens, ensuring that only likely candidates are chosen during generation.

typical_p

A float that influences the typicality of the generated text, ensuring that the output is not only probable but also typical of the training data. This parameter helps in maintaining the naturalness of the text.

temp

A float that controls the randomness of the generation process. Lower values make the output more deterministic, while higher values increase randomness and creativity. The default is usually around 0.8.
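
The effect of temp can be reproduced with a plain softmax; this sketch is illustrative, not the node's internal code:

```python
import numpy as np

def softmax_with_temperature(logits, temp=0.8):
    """Lower temp sharpens the distribution (more deterministic);
    higher temp flattens it toward uniform (more random)."""
    z = np.asarray(logits, dtype=float) / temp
    z -= z.max()                 # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = [2.0, 1.0, 0.1]
sharp = softmax_with_temperature(logits, temp=0.5)  # peaked on the best token
flat = softmax_with_temperature(logits, temp=2.0)   # closer to uniform
```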

repeat_penalty

A float that penalizes the model for repeating tokens, helping to reduce redundancy in the generated text. A typical default value is 1.1.
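
The llama.cpp convention for this penalty divides positive logits by the penalty and multiplies negative logits by it; a minimal sketch under that assumption:

```python
import numpy as np

def apply_repeat_penalty(logits, prev_tokens, penalty=1.1):
    """Discourage tokens that already appeared: positive logits are
    divided by the penalty, negative logits multiplied by it
    (the convention used by llama.cpp)."""
    out = np.asarray(logits, dtype=float).copy()
    for t in set(prev_tokens):
        out[t] = out[t] / penalty if out[t] > 0 else out[t] * penalty
    return out

logits = np.array([3.0, -1.0, 2.0])
penalized = apply_repeat_penalty(logits, prev_tokens=[0, 1], penalty=2.0)
# token 0: 3.0 -> 1.5, token 1: -1.0 -> -2.0, token 2 unchanged
```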

reset

A boolean indicating whether to reset the model's state before generation. This is useful for ensuring that previous interactions do not influence the current output.

frequency_penalty

A float that penalizes tokens based on their frequency in the generated text, encouraging diversity. This helps in avoiding overuse of common words.

presence_penalty

A float that penalizes tokens that have already appeared in the text, promoting the introduction of new content. This parameter is useful for generating more varied and interesting text.
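
frequency_penalty and presence_penalty follow the familiar OpenAI-style formula: each token's logit is reduced by count × frequency_penalty, plus a flat presence_penalty if it has appeared at all. A sketch assuming that convention:

```python
from collections import Counter
import numpy as np

def apply_freq_presence_penalties(logits, prev_tokens,
                                  frequency_penalty=0.0,
                                  presence_penalty=0.0):
    """frequency_penalty scales with how often a token appeared;
    presence_penalty is a flat charge for appearing at all."""
    out = np.asarray(logits, dtype=float).copy()
    for tok, count in Counter(prev_tokens).items():
        out[tok] -= count * frequency_penalty + presence_penalty
    return out

logits = np.array([1.0, 1.0, 1.0])
out = apply_freq_presence_penalties(logits, [0, 0, 1],
                                    frequency_penalty=0.5,
                                    presence_penalty=0.2)
# token 0 (seen twice): 1.0 - 2*0.5 - 0.2 = -0.2
# token 1 (seen once):  1.0 - 0.5 - 0.2 = 0.3
```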

tfs_z

A float controlling tail-free sampling, which trims the low-probability tail of the distribution based on how sharply token probabilities drop off. A value of 1.0 disables tail-free sampling; lower values prune the tail more aggressively.

mirostat_mode

An integer selecting the Mirostat adaptive sampling algorithm: 0 disables it, 1 enables Mirostat v1, and 2 enables Mirostat v2. When enabled, Mirostat dynamically adjusts its truncation threshold to hold output perplexity near a target, overriding top_k/top_p style sampling.

mirostat_tau

A float setting the target entropy (surprise) that Mirostat tries to maintain. Lower values produce more focused, predictable text; higher values allow more variety. A typical default is 5.0.

mirostat_eta

A float setting the Mirostat learning rate, i.e., how quickly the sampler corrects toward the target entropy after each token. A typical default is 0.1.
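
These three parameters belong to the Mirostat adaptive sampler from llama.cpp, which keeps the surprise (−log₂ p) of sampled tokens near the target tau by adjusting a moving threshold mu at rate eta. A simplified, illustrative sketch of one Mirostat v2 step (not the library's exact implementation):

```python
import numpy as np

def mirostat_v2_step(probs, mu, tau=5.0, eta=0.1, rng=None):
    """One simplified Mirostat v2 step: drop tokens whose surprise
    exceeds mu, sample from the rest, then nudge mu toward tau."""
    rng = rng or np.random.default_rng()
    surprise = np.where(probs > 0,
                        -np.log2(np.clip(probs, 1e-12, 1.0)),
                        np.inf)
    allowed = surprise <= mu
    if not allowed.any():                 # always keep at least the best token
        allowed[np.argmax(probs)] = True
    p = np.where(allowed, probs, 0.0)
    p = p / p.sum()
    token = int(rng.choice(len(p), p=p))
    mu = mu - eta * (surprise[token] - tau)  # feedback toward target surprise
    return token, mu
```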

penalize_nl

A boolean that determines whether to penalize new lines in the generated text. This is useful for controlling the format and structure of the output.

logits_processor

A custom function or processor that modifies the logits before token selection. This allows for advanced customization of the generation process.
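
In llama-cpp-python, a logits processor is a callable taking (input_ids, scores) and returning modified scores. A hypothetical example that bans specific token ids:

```python
import numpy as np

def ban_tokens_processor(banned):
    """Build a logits processor (input_ids, scores) -> scores that
    assigns -inf to banned token ids, following the callable shape
    llama-cpp-python expects."""
    banned = list(banned)
    def processor(input_ids, scores):
        scores = np.asarray(scores, dtype=float).copy()
        scores[banned] = -np.inf
        return scores
    return processor

proc = ban_tokens_processor([2])
out = proc([1, 5], np.array([0.3, 1.2, 4.0]))  # token 2 can no longer be sampled
```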

stopping_criteria

A set of conditions that determine when the generation process should stop. This ensures that the output meets specific requirements or constraints.
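
A stopping criterion is a callable taking (input_ids, logits) and returning True when generation should halt. A hypothetical criterion that caps the number of newly generated tokens:

```python
def max_new_tokens_criteria(prompt_len, max_new_tokens):
    """Build a stopping criterion (input_ids, logits) -> bool that
    halts once max_new_tokens have been generated past the prompt."""
    def should_stop(input_ids, logits):
        return len(input_ids) - prompt_len >= max_new_tokens
    return should_stop

stop = max_new_tokens_criteria(prompt_len=4, max_new_tokens=3)
```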

grammar

A formal grammar (in llama.cpp, typically a GBNF grammar) that constrains which tokens the model may emit, forcing the output to follow a specific structure such as JSON or a fixed set of answers.
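
llama.cpp grammars are usually written in GBNF. A hypothetical grammar that restricts the model to answering "yes" or "no":

```
root ::= answer
answer ::= "yes" | "no"
```

Grammars like this are useful when downstream nodes expect output in a fixed format.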

Call LLM Basic Output Parameters:

generator

The generator is the primary output of the Call LLM Basic node. It represents the sequence of tokens generated by the language model based on the input parameters. This output is crucial as it forms the basis of the text you wish to produce, and its quality and relevance are directly influenced by the input parameters and the model's capabilities.
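
Since the output is a generator of token ids, a downstream consumer typically drains it and then detokenizes the result. A minimal sketch (detokenization, e.g. via llm.detokenize, is left to the caller, and the EOS id shown is hypothetical):

```python
def collect_tokens(generator, eos_token=None, max_tokens=64):
    """Drain a token generator into a list, stopping at an optional
    EOS token or at a hard cap on length."""
    out = []
    for tok in generator:
        if tok == eos_token or len(out) >= max_tokens:
            break
        out.append(tok)
    return out

tokens = collect_tokens(iter([5, 9, 7, 2, 11]), eos_token=2)  # stops at the EOS id
```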

Call LLM Basic Usage Tips:

  • Experiment with different top_k and top_p values to find the right balance between diversity and coherence for your specific task.
  • Use the temp parameter to control the creativity of the output; lower values for more predictable text and higher values for more creative results.
  • Adjust repeat_penalty, frequency_penalty, and presence_penalty to reduce redundancy and encourage diversity in the generated text.

Call LLM Basic Common Errors and Solutions:

"Invalid token sequence"

  • Explanation: The input tokens may not be valid or properly formatted for the language model.
  • Solution: Ensure that the tokens are correctly encoded and represent a valid sequence for the model.

"Model state not reset"

  • Explanation: The model's state was not reset, leading to unexpected influences from previous interactions.
  • Solution: Set the reset parameter to True to clear the model's state before generating new text.

"Logits processor error"

  • Explanation: There might be an issue with the custom logits processor function.
  • Solution: Verify that the logits processor is correctly implemented and compatible with the model's requirements.

Call LLM Basic Related Nodes

Go back to the extension to check out more related nodes.
ComfyUI-Llama