ComfyUI Node: InContextEditInstruction~

Class Name

InContextEditInstruction

Category
In-context_Editing_Framework
Author
hayd-zju (Account age: 2265 days)
Extension
ICEdit-ComfyUI-official
Last Updated
2025-05-26
GitHub Stars
0.18K

How to Install ICEdit-ComfyUI-official

Install this extension via the ComfyUI Manager by searching for ICEdit-ComfyUI-official
  • 1. Click the Manager button in the main menu
  • 2. Select Custom Nodes Manager button
  • 3. Enter ICEdit-ComfyUI-official in the search bar
  • 4. Click Install next to the ICEdit-ComfyUI-official entry in the results
After installation, click the Restart button to restart ComfyUI, then manually refresh your browser to clear the cache and load the updated list of nodes.

Visit ComfyUI Online for a ready-to-use ComfyUI environment

  • Free trial available
  • 16GB VRAM to 80GB VRAM GPU machines
  • 400+ preloaded models/nodes
  • Freedom to upload custom models/nodes
  • 200+ ready-to-run workflows
  • 100% private workspace with up to 200GB storage
  • Dedicated Support

Run ComfyUI Online

InContextEditInstruction~ Description

Encode a text-based edit instruction with a CLIP model into conditioning that steers a diffusion model, giving precise control over how a generated image differs from a reference scene.

InContextEditInstruction~:

The InContextEditInstruction node lets you provide specific text-based instructions that guide image generation. It uses a CLIP model to encode your instruction into an embedding, which is then used to condition a diffusion model. The goal is to let you specify how the generated image should differ from a reference scene, which is particularly useful for creating variations or edits of existing images, since it gives precise control over the changes you want to see. By using this node, you can express your creative intent in a structured way and ensure that the resulting images reflect your intended modifications.
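
Conceptually the node's job is small: tokenize the instruction with CLIP, encode it into an embedding, and wrap that embedding in a conditioning structure for the diffusion model. The sketch below illustrates this flow using ComfyUI's standard CLIP object API (clip.tokenize and clip.encode_from_tokens); the class and method names are illustrative assumptions, not the extension's actual source code.

```python
# Minimal sketch of an edit-instruction encode step in a ComfyUI custom node.
# Assumes ComfyUI's standard CLIP object API; not the extension's actual code.

class InContextEditInstructionSketch:
    RETURN_TYPES = ("CONDITIONING",)
    RETURN_NAMES = ("In_context",)
    FUNCTION = "encode"
    CATEGORY = "In-context_Editing_Framework"

    def encode(self, clip, editText):
        # Tokenize the edit instruction and encode it with the CLIP text encoder.
        tokens = clip.tokenize(editText)
        cond, pooled = clip.encode_from_tokens(tokens, return_pooled=True)
        # Wrap the embedding in ComfyUI's conditioning format: a list of
        # [embedding, options] pairs that sampler nodes consume directly.
        return ([[cond, {"pooled_output": pooled}]],)
```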

InContextEditInstruction~ Input Parameters:

editText

The editText parameter is a string input where you provide your specific edit instructions. This text should describe how you want the generated image to differ from a reference scene. The input supports multiline text and dynamic prompts, allowing for complex and detailed instructions. This parameter is crucial as it directly influences the output by guiding the diffusion model to incorporate the specified changes into the generated image.

clip

The clip parameter requires a CLIP model, which is used to encode the editText into an embedding. This embedding serves as a conditioning input for the diffusion model, ensuring that the generated image aligns with the provided instructions. The CLIP model is essential for translating textual instructions into a form that the diffusion model can understand and act upon.
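
For reference, multiline input, dynamic prompts, and the CLIP socket are the kinds of options a node declares in its INPUT_TYPES. The following is a hypothetical declaration matching the parameters described above, not code taken from the extension:

```python
# Hypothetical INPUT_TYPES declaration mirroring the parameters described above.
@classmethod
def INPUT_TYPES(cls):
    return {
        "required": {
            # Free-form edit instruction; multiline allows long prompts and
            # dynamicPrompts enables {a|b}-style wildcard syntax.
            "editText": ("STRING", {"multiline": True, "dynamicPrompts": True}),
            # CLIP model used to encode the instruction into an embedding.
            "clip": ("CLIP",),
        }
    }
```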

InContextEditInstruction~ Output Parameters:

In_context

The In_context output is a conditioning that contains the embedded text derived from your editText input. This output is used to guide the diffusion model, ensuring that the generated image reflects the modifications specified in your instructions. The conditioning acts as a bridge between your textual input and the visual output, making it a critical component in the image generation process.
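
If you need to reason about what flows out of this socket, the standard ComfyUI conditioning format is simply a list of [embedding, options] pairs. The toy example below builds a dummy conditioning with placeholder tensors; the shapes are illustrative and assume the node follows the standard convention:

```python
import torch

# Toy illustration of the conditioning structure (standard ComfyUI convention,
# assumed here): a list of [embedding, options] pairs.
cond = torch.zeros(1, 77, 768)    # placeholder token embeddings
pooled = torch.zeros(1, 768)      # placeholder pooled embedding
in_context = [[cond, {"pooled_output": pooled}]]

# Downstream sampler nodes iterate over these pairs; the options dict can carry
# extra guidance data alongside the embedding tensor.
for embedding, options in in_context:
    print(embedding.shape, list(options.keys()))
```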

InContextEditInstruction~ Usage Tips:

  • Ensure that your editText is clear and descriptive to achieve the desired modifications in the generated image. The more specific your instructions, the better the diffusion model can interpret and apply them (see the example strings after this list).
  • Use a well-trained CLIP model to improve the accuracy of the text encoding, as this directly impacts the quality of the generated image.
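
As a point of comparison, here are two hypothetical editText strings; the wording is purely illustrative:

```python
# Hypothetical editText values; concrete instructions give the model more to act on.
edit_text_specific = (
    "change the jacket to bright red, add light falling snow, keep the background unchanged"
)
edit_text_vague = "make it look nicer"  # too open-ended for a predictable edit
```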

InContextEditInstruction~ Common Errors and Solutions:

ERROR: clip input is invalid: None

  • Explanation: This error occurs when the clip parameter is not provided or is invalid. The node requires a valid CLIP model to encode the text.
  • Solution: Ensure that you have selected a valid CLIP model. If you are using a checkpoint loader node, verify that the checkpoint contains a valid CLIP or text encoder model (see the guard-clause sketch after this list).
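
A defensive check inside the encode step surfaces this failure at the node itself. The guard below is an illustrative sketch, not the extension's actual error handling:

```python
def encode(self, clip, editText):
    # Fail fast with a clear message when no CLIP model is connected.
    if clip is None:
        raise RuntimeError(
            "clip input is invalid: None -- connect the CLIP output of a "
            "checkpoint loader or CLIP loader node to this input."
        )
    # ... continue with tokenization and encoding as in the sketch above.
```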

InContextEditInstruction~ Related Nodes

Go back to the extension to check out more related nodes.
ICEdit-ComfyUI-official