
ComfyUI Node: GLM-4 Prompt Enhancer

Class Name

GLM-4 Prompt Enhancer

Category
GLM4Wrapper
Author
Nojahhh (Account age: 3,484 days)
Extension
ComfyUI GLM-4 Wrapper
Last Updated
2025-07-20
GitHub Stars
30

How to Install ComfyUI GLM-4 Wrapper

Install this extension via the ComfyUI Manager by searching for ComfyUI GLM-4 Wrapper:
  1. Click the Manager button in the main menu.
  2. Select the Custom Nodes Manager button.
  3. Enter ComfyUI GLM-4 Wrapper in the search bar.
After installation, click the Restart button to restart ComfyUI, then manually refresh your browser to clear the cache and load the updated list of nodes.


GLM-4 Prompt Enhancer Description

Enhances text prompts using GLM-4 for improved creativity and detail in content creation.

GLM-4 Prompt Enhancer:

The GLM-4 Prompt Enhancer augments and refines text prompts using the GLM-4 model, which is particularly adept at text generation and image-to-video captioning. Part of the ComfyUI GLM-4 Wrapper extension, the node uses the model to interpret an initial prompt and expand it into a more detailed, nuanced version. It is especially useful for AI artists and content creators who want more engaging, contextually rich text outputs, which matters in applications where generation quality is paramount: creative writing, storytelling, and multimedia content creation.

GLM-4 Prompt Enhancer Input Parameters:

GLMPipeline

This parameter represents the pipeline object that contains the GLM-4 model and tokenizer. It is essential for the node's operation as it provides the necessary tools for text processing and generation. The pipeline must be correctly initialized and loaded with the appropriate model to ensure accurate prompt enhancement.
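Conceptually, the pipeline input can be pictured as a small container bundling the two pieces the node needs. A minimal sketch (the class, field, and method names here are illustrative assumptions, not the wrapper's actual API):

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class GLMPipelineSketch:
    """Illustrative stand-in for the GLMPipeline input: a container
    holding the loaded GLM-4 model and its matching tokenizer.
    Field names are assumptions, not the wrapper's real attributes."""
    model: Any = None      # the loaded GLM-4 model
    tokenizer: Any = None  # the tokenizer paired with that model

    def is_ready(self) -> bool:
        # Enhancement can only run once both pieces are loaded.
        return self.model is not None and self.tokenizer is not None

empty = GLMPipelineSketch()
print(empty.is_ready())  # False: enhancing now would fail with "Model not loaded"
```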

prompt

The prompt parameter is the initial text input that you wish to enhance. It serves as the foundation upon which the GLM-4 model builds to generate a more refined and detailed output. The quality and specificity of the initial prompt can significantly influence the resulting enhanced text.

max_new_tokens

This parameter determines the maximum number of new tokens that the model can generate when enhancing the prompt. It controls the length of the output, with a default value of 200 tokens. Adjusting this value allows you to manage the verbosity of the enhanced prompt, with higher values producing longer outputs.

temperature

The temperature parameter influences the randomness of the text generation process. A lower temperature, such as the default value of 0.1, results in more deterministic outputs, while higher values introduce more variability and creativity. This parameter is crucial for balancing coherence and diversity in the generated text.

top_k

This parameter limits the number of highest probability vocabulary tokens considered during generation. With a default value of 40, it helps in controlling the diversity of the output. Lower values make the output more focused, while higher values increase variability.

top_p

The top_p parameter, also known as nucleus sampling, determines the cumulative probability threshold for token selection. With a default value of 0.7, it allows for dynamic adjustment of the token pool, balancing between diversity and coherence in the generated text.

repetition_penalty

This parameter applies a penalty to repeated tokens, encouraging more varied and less repetitive outputs. The default value is 1.1, which helps in maintaining the novelty of the generated text by discouraging excessive repetition.
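Taken together, these sampling parameters map naturally onto the keyword arguments of a Hugging Face-style `generate()` call. A sketch using the node's documented defaults (the mapping is an assumption about the wrapper's internals, not its confirmed implementation):

```python
def build_generation_kwargs(max_new_tokens=200, temperature=0.1,
                            top_k=40, top_p=0.7, repetition_penalty=1.1):
    """Collect the node's sampling parameters into generate()-style kwargs.
    Defaults mirror the values documented above."""
    return {
        "max_new_tokens": max_new_tokens,          # caps output length
        "do_sample": True,                         # enable stochastic sampling
        "temperature": temperature,                # lower = more deterministic
        "top_k": top_k,                            # keep only the k most likely tokens
        "top_p": top_p,                            # nucleus-sampling threshold
        "repetition_penalty": repetition_penalty,  # >1 discourages repeats
    }

kwargs = build_generation_kwargs()
# A call would then look like: model.generate(**inputs, **kwargs)
```

For a more creative enhancement you might pass, say, `build_generation_kwargs(temperature=0.8, top_p=0.9)` instead of the defaults.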

image

The image parameter is optional and allows for the inclusion of an image to guide the prompt enhancement process, particularly useful for image-to-video captioning tasks. When provided, the image is processed and incorporated into the prompt enhancement, adding a visual context to the text generation.

seed

The seed parameter sets the random number generator's seed, ensuring reproducibility of the results. The default value is 42, which allows for consistent outputs across different runs with the same input parameters.
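The effect of a fixed seed can be illustrated with Python's own random module (the node itself would seed the model's generator, e.g. via something like `torch.manual_seed`, which is an assumption about its internals):

```python
import random

def sample_with_seed(seed):
    """Draw five values from a generator seeded with `seed`."""
    rng = random.Random(seed)  # seeded generator -> reproducible draws
    return [rng.randint(0, 9) for _ in range(5)]

# Same seed, same "generation": the sequences are identical across runs,
# while a different seed generally yields a different sequence.
assert sample_with_seed(42) == sample_with_seed(42)
```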

unload_model

This boolean parameter determines whether the model should be unloaded from memory after the operation. With a default value of True, it helps in managing system resources by freeing up memory once the task is completed.
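The unload behavior amounts to a try/finally cleanup pattern. A schematic sketch with a dummy pipeline (the method names `run` and `unload` are invented for illustration; the real node would free GPU memory, e.g. by deleting the model and emptying the CUDA cache):

```python
class DummyPipeline:
    """Stand-in pipeline used only to demonstrate the cleanup pattern."""
    def __init__(self):
        self.loaded = True

    def run(self, prompt):
        return prompt + " (enhanced)"

    def unload(self):
        # Real code would release the model and free GPU memory here.
        self.loaded = False

def enhance(pipeline, prompt, unload_model=True):
    try:
        return pipeline.run(prompt)
    finally:
        if unload_model:
            pipeline.unload()  # frees resources even if run() raised

pipe = DummyPipeline()
result = enhance(pipe, "a quiet forest")
print(result, pipe.loaded)
```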

GLM-4 Prompt Enhancer Output Parameters:

enhanced_prompt

The enhanced_prompt is the primary output of the node, representing the refined and expanded version of the initial input prompt. This output is enriched with additional context and detail, making it more engaging and suitable for various creative applications. The quality of the enhanced prompt is influenced by the input parameters and the capabilities of the GLM-4 model.

GLM-4 Prompt Enhancer Usage Tips:

  • Experiment with the temperature parameter to find the right balance between creativity and coherence for your specific task. Lower values yield more predictable outputs, while higher values introduce more variability.
  • Utilize the max_new_tokens parameter to control the length of the enhanced prompt, ensuring it fits the desired context or application.
  • If working with visual content, consider providing an image to guide the prompt enhancement process, especially for tasks involving image-to-video captioning.

GLM-4 Prompt Enhancer Common Errors and Solutions:

Model not loaded

  • Explanation: This error occurs when the GLM-4 model is not properly loaded into the pipeline.
  • Solution: Ensure that the GLMPipeline is correctly initialized and the model is loaded before attempting to enhance the prompt.

Invalid prompt input

  • Explanation: This error arises when the prompt parameter is not provided or is in an incorrect format.
  • Solution: Verify that the prompt is a valid string and is correctly passed to the node.

Image processing error

  • Explanation: This error can occur if the image parameter is provided in an unsupported format or is not processed correctly.
  • Solution: Ensure that the image is in a compatible format and correctly pre-processed before being passed to the node.
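ComfyUI passes images as float tensors with values in [0, 1], while many vision models expect 8-bit values in [0, 255]. The scaling and clamping a pre-processing step typically performs can be sketched in plain Python (the real node operates on torch tensors; this is illustrative only):

```python
def to_uint8(pixels):
    """Scale [0, 1] floats to [0, 255] ints, clamping out-of-range values."""
    return [min(255, max(0, round(p * 255))) for p in pixels]

print(to_uint8([0.0, 0.25, 1.0, 1.2]))  # the out-of-range 1.2 clamps to 255
```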

GLM-4 Prompt Enhancer Related Nodes

Go back to the extension to check out more related nodes.
ComfyUI GLM-4 Wrapper