ComfyUI Node: Call LLM Advanced

Class Name

Call LLM Advanced

Category
LLM
Author
Daniel Lewis (Account age: 4017 days)
Extension
ComfyUI-Llama
Last Updated
2024-06-29
GitHub Stars
0.07K

How to Install ComfyUI-Llama

Install this extension via the ComfyUI Manager by searching for ComfyUI-Llama:
  1. Click the Manager button in the main menu
  2. Select the Custom Nodes Manager button
  3. Enter ComfyUI-Llama in the search bar
After installation, click the Restart button to restart ComfyUI. Then, manually refresh your browser to clear the cache and access the updated list of nodes.

Call LLM Advanced Description

Facilitates advanced LLM interactions, allowing customizable text generation with adjustable parameters.

Call LLM Advanced:

The Call LLM Advanced node is designed to facilitate advanced interactions with a large language model (LLM), generating text from a sequence of input tokens. It is aimed at users who need fine-grained control over the generation process, exposing the full range of sampling parameters: you can adjust the randomness of the output, penalize repetitive phrases, constrain the output with a grammar, or set criteria for stopping the generation. This flexibility makes it a powerful tool for AI artists looking to harness the full potential of language models in their projects.
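
The node's inputs match the parameters of llama-cpp-python's generate method. As a minimal sketch (an illustration, not the node's actual source; the model path is hypothetical), the equivalent direct Python call looks like this:

    from llama_cpp import Llama

    # Load a GGUF model; the path is a placeholder.
    llm = Llama(model_path="models/llama-2-7b.Q4_K_M.gguf")

    # Tokenize a prompt and stream tokens using the same knobs
    # this node exposes as inputs.
    tokens = llm.tokenize(b"Once upon a time")
    generator = llm.generate(
        tokens,
        top_k=40,
        top_p=0.95,
        temp=0.8,
        repeat_penalty=1.1,
    )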

Call LLM Advanced Input Parameters:

LLM

This parameter is the large language model instance the node will call. It is essential for the node to function, as it specifies the loaded model that will process the input tokens and generate the output text.

tokens

A sequence of integers representing the input tokens for the LLM. These tokens are the starting point for the text generation process, and their selection can significantly influence the resulting output.
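
If you need to prepare tokens outside the node graph, llama-cpp-python's tokenizer produces exactly this kind of integer sequence. A short sketch, assuming llm is the loaded model from the example above:

    # Token ids are model-specific; the printed values depend on the tokenizer.
    tokens = llm.tokenize(b"A portrait of a cyberpunk city", add_bos=True)
    print(tokens[:8])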

top_k

An integer that determines the number of highest probability vocabulary tokens to keep for sampling. A higher value allows for more diversity in the output, while a lower value makes the output more deterministic. The default value is typically set to 40.

top_p

A float that sets the cumulative probability threshold for token sampling. It helps in controlling the diversity of the output by only considering tokens that contribute to the top cumulative probability. The default value is usually 0.95.
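
As a toy illustration (deliberately simplified; not the node's code), here is how top_k and top_p prune a sorted probability distribution before a token is drawn. A low top_p of 0.7 is used so the trim is visible:

    import numpy as np

    probs = np.array([0.40, 0.25, 0.15, 0.10, 0.05, 0.05])  # sorted, descending

    kept = probs[:4]                              # top_k=4: keep the 4 likeliest tokens
    cutoff = np.searchsorted(np.cumsum(kept), 0.7) + 1
    kept = kept[:cutoff]                          # top_p=0.7: smallest head of the
                                                  # distribution covering 70% probability
    kept = kept / kept.sum()                      # renormalize, then sample
    print(np.random.choice(len(kept), p=kept))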

min_p

A float that specifies the minimum probability threshold for token sampling. Tokens with a probability lower than this threshold will not be considered, ensuring that only sufficiently likely tokens are sampled.

typical_p

A float controlling locally typical sampling, which prefers tokens whose information content is close to the expected value, balancing randomness and determinism. A value of 1.0 disables it.

temp

A float representing the temperature of the sampling process. Higher temperatures result in more random outputs, while lower temperatures make the output more deterministic. The default value is often set to 0.8.
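
A small worked example (pure Python, not the node's code) shows the effect: the logits are divided by the temperature before the softmax, which sharpens or flattens the distribution.

    import numpy as np

    logits = np.array([2.0, 1.0, 0.5])

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    print(softmax(logits / 0.2))  # low temp: ~[0.993, 0.007, 0.001] -- near-deterministic
    print(softmax(logits / 1.5))  # high temp: ~[0.53, 0.27, 0.20] -- much flatter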

repeat_penalty

A float that penalizes the repetition of tokens in the output. This parameter helps in reducing redundancy and ensuring more varied text generation. The default value is typically 1.1.

reset

A boolean indicating whether to reset the model's state before generating text. This can be useful for ensuring that the output is not influenced by previous interactions.

frequency_penalty

A float that penalizes tokens based on their frequency in the output. This helps in reducing the likelihood of repeating common phrases and encourages more diverse text generation.

presence_penalty

A float that applies a flat penalty to any token that has already appeared in the output, regardless of how often. This encourages the model to introduce new concepts and ideas, enhancing the creativity of the generated text.
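
Taken together, the three penalties adjust a token's logit roughly as in the sketch below, which follows the common llama.cpp / OpenAI-style formulas (an illustration, not the node's code):

    def penalize(logit, count, repeat_penalty=1.1,
                 frequency_penalty=0.0, presence_penalty=0.0):
        if count > 0:  # the token already appeared count times in recent output
            # repeat_penalty scales the logit of any recently seen token
            logit = logit / repeat_penalty if logit > 0 else logit * repeat_penalty
            # frequency_penalty grows with each occurrence;
            # presence_penalty is a flat cost for appearing at all
            logit -= frequency_penalty * count + presence_penalty
        return logit

    print(penalize(3.2, count=2, frequency_penalty=0.5, presence_penalty=0.3))
    # 3.2 / 1.1 - (0.5 * 2 + 0.3) = ~1.61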

tfs_z

A float controlling tail-free sampling (TFS), which cuts low-probability tokens from the tail of the distribution. A value of 1.0 disables it.

microstat_mode

An integer selecting the Mirostat sampling mode (0 disables it, 1 enables Mirostat, 2 enables Mirostat 2.0). Mirostat dynamically adjusts sampling to hold the output's perplexity near a target value.

microstat_tau

A float setting the Mirostat target entropy (tau), i.e. the amount of "surprise" the sampler aims to maintain. The llama.cpp default is 5.0.

microstat_eta

A float setting the Mirostat learning rate (eta), which controls how quickly the sampler adapts toward the target entropy. The llama.cpp default is 0.1.
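
These three inputs correspond to llama.cpp's Mirostat sampler (spelled mirostat_* in llama-cpp-python). A hedged sketch of enabling it, reusing llm and tokens from the earlier example:

    generator = llm.generate(
        tokens,
        mirostat_mode=2,   # 0 = off, 1 = Mirostat, 2 = Mirostat 2.0
        mirostat_tau=5.0,  # target entropy ("surprise") to maintain
        mirostat_eta=0.1,  # learning rate: how fast the sampler adapts toward tau
    )

Note that when Mirostat is active, llama.cpp bypasses the top_k, top_p, tfs_z, and typical_p samplers; temperature still applies.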

penalize_nl

A boolean that determines whether newline tokens are subject to the repeat penalty. Enabling it discourages runs of unnecessary line breaks; disable it when line structure matters, such as for poetry or code.

logits_processor

A custom processor for modifying the logits before sampling. This allows for advanced customization of the text generation process.
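
A hedged sketch of a custom processor, assuming the node passes it straight through to llama-cpp-python, which defines LogitsProcessorList: each entry is a callable that receives the token ids generated so far and the current logits array, and returns the modified logits.

    import numpy as np
    from llama_cpp import LogitsProcessorList

    def ban_newlines(input_ids, scores):
        scores[13] = -np.inf  # id 13 is the newline token in many LLaMA vocabularies
        return scores

    processor = LogitsProcessorList([ban_newlines])
    generator = llm.generate(tokens, logits_processor=processor)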

stopping_criteria

Criteria that determine when the text generation should stop. This can be based on specific tokens, length, or other conditions, ensuring that the output meets your requirements.
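
A hedged sketch using llama-cpp-python's StoppingCriteriaList: each callable receives the token ids so far and the current logits, and returns True when generation should stop.

    from llama_cpp import StoppingCriteriaList

    def stop_after_200_tokens(input_ids, logits):
        return len(input_ids) >= 200

    criteria = StoppingCriteriaList([stop_after_200_tokens])
    generator = llm.generate(tokens, stopping_criteria=criteria)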

grammar

A formal grammar that the generated text must conform to; sampling is constrained so that every token keeps the output derivable from the grammar. This is useful for forcing structured output such as JSON or fixed answer formats.
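
llama.cpp grammars are written in the GBNF format. A hedged sketch that constrains the model to a bare yes/no answer via llama-cpp-python:

    from llama_cpp import LlamaGrammar

    # Every sampled token must keep the output derivable from this grammar.
    grammar = LlamaGrammar.from_string('root ::= "yes" | "no"')
    generator = llm.generate(tokens, grammar=grammar)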

Call LLM Advanced Output Parameters:

generator

The generator output is an iterable that yields the generated tokens one at a time, based on the input tokens and parameters. Streaming the tokens lets you retrieve and detokenize the output incrementally, enabling further processing or display as needed.
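
A sketch of consuming it with llama-cpp-python, assuming the llm and generator from the earlier examples (generate yields one token id at a time):

    out = b""
    for i, token in enumerate(generator):
        if token == llm.token_eos() or i >= 50:  # stop at end-of-sequence or ~50 tokens
            break
        out += llm.detokenize([token])
    print(out.decode("utf-8", errors="ignore"))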

Call LLM Advanced Usage Tips:

  • Experiment with different top_k and top_p values to find the right balance between creativity and coherence in your text outputs.
  • Use the repeat_penalty and presence_penalty parameters to reduce redundancy and encourage diversity in the generated text.
  • Adjust the temperature parameter to control the randomness of the output, with higher values leading to more varied and creative results.

Call LLM Advanced Common Errors and Solutions:

"Invalid token sequence"

  • Explanation: This error occurs when the input tokens are not in a valid format or contain unsupported values.
  • Solution: Ensure that the tokens are a sequence of integers and that they are compatible with the LLM being used.

"Model state reset failed"

  • Explanation: This error indicates that the model's state could not be reset, possibly due to an internal issue.
  • Solution: Try reloading the model or restarting the application to resolve any temporary issues.

"Sampling parameters out of range"

  • Explanation: This error occurs when one or more sampling parameters are set outside their acceptable range.
  • Solution: Verify that all parameters like top_k, top_p, and temperature are within their specified limits and adjust them accordingly.

Call LLM Advanced Related Nodes

Go back to the extension to check out more related nodes.
ComfyUI-Llama