
Outpainting | Expand Image

Workflow Name: RunComfy/Outpainting
Workflow ID: 0000...1058
The image outpainting workflow presents a comprehensive process for extending the boundaries of an image through four key steps: preparing the image for outpainting, running the outpainting process with an inpainting ControlNet model, evaluating the initial output, and repairing the edges to ensure seamless integration.

1. ComfyUI Outpainting Workflow

This image outpainting workflow is designed for extending the boundaries of an image, incorporating four crucial steps:

1.1. ComfyUI Outpainting Preparation:

This step involves setting the dimensions for the area to be outpainted and creating a mask for the outpainting area. It's the preparatory phase where the groundwork for extending the image is laid out.

1.2. ComfyUI Outpainting Process (Use Inpainting ControlNet model):

The actual outpainting is executed with an inpainting model, specifically ControlNet's inpainting module, and only the region designated by the previously created mask is regenerated. It's crucial to understand that although we are extending the image (outpainting), the technique applied is derived from inpainting: the ControlNet module intelligently fills in the designated area based on the context provided by the surrounding image.

1.3. ComfyUI Outpainting Initial Output:

Here we obtain the initial version of the image with the newly outpainted area. This stage showcases how the inpainting model has extended the image boundaries. However, at this point there may be noticeable seams between the edges of the original image and the newly extended parts, which is why the subsequent repair step is crucial.

1.4. ComfyUI Outpainting Edge Repair:

The final step focuses on refining the integration between the original image and the newly added sections. This involves specifically targeting and enhancing the edges to ensure a seamless transition between the original and extended parts of the image.

2. Detailed Introduction to ComfyUI Outpainting/Inpainting Process

2.1. ComfyUI Outpainting Preparation

Here are the key nodes involved in this step:

2.1.1. Image Scale to Side: Scales the image based on specified parameters. You set a target side length and choose which side (longest, width, or height) to scale. The node offers several scaling methods (nearest-exact, bilinear, area) and an optional crop feature for maintaining the aspect ratio.

  • Side Length: Define the target side length for scaling
  • Side: Choose the side of the image to scale (longest, width, or height)
  • Upscale Method: Select the preferred method for scaling
  • Crop: Enable cropping to maintain the original image's aspect ratio during scaling
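The sizing logic of such a node can be sketched in a few lines. This is a minimal, hypothetical Python sketch of the aspect-ratio math, not the node's actual implementation; the function name and signature are assumptions for illustration:

```python
def scale_to_side(width, height, side_length, side="longest"):
    """Return a new (width, height) so the chosen side equals side_length,
    preserving aspect ratio. `side` is one of 'longest', 'width', 'height'."""
    if side == "longest":
        # Resolve "longest" to whichever dimension is larger
        side = "width" if width >= height else "height"
    scale = side_length / (width if side == "width" else height)
    return round(width * scale), round(height * scale)

print(scale_to_side(1024, 768, 512))           # -> (512, 384)
print(scale_to_side(600, 900, 512, "height"))  # -> (341, 512)
```

The actual node would then resample the pixels to this target size with the chosen method (nearest-exact, bilinear, or area).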

2.1.2. Pad Image for Outpainting: Prepares images for outpainting by adding padding around the borders. This node allows specification of padding amounts for each side of the image and includes a "feathering" option to seamlessly blend the original image into the padded area.
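Conceptually, the node pads the image and produces a mask marking the new area, with a feathered band so the transition blends. The following is a simplified numpy sketch under that assumption (the real node's internals may differ; a linear ramp stands in for its feathering):

```python
import numpy as np

def pad_for_outpaint(img, left, top, right, bottom, feather=0):
    """Pad an HxWxC image with zeros and return (padded, mask).
    mask is 1.0 where new content must be generated, 0.0 over the original
    image, with a linear feather band just inside the original's edges."""
    h, w = img.shape[:2]
    padded = np.pad(img, ((top, bottom), (left, right), (0, 0)))
    mask = np.ones((h + top + bottom, w + left + right), dtype=np.float32)
    mask[top:top + h, left:left + w] = 0.0  # keep the original region
    if feather > 0:
        # Ramp from ~1 down toward 0 over `feather` pixels inside the
        # original region, only on sides that actually received padding.
        ramp = 1.0 - np.arange(1, feather + 1) / (feather + 1)
        if top > 0:
            band = mask[top:top + feather, left:left + w]
            band[:] = np.maximum(band, ramp[:, None])
        if bottom > 0:
            band = mask[top + h - feather:top + h, left:left + w]
            band[:] = np.maximum(band, ramp[::-1, None])
        if left > 0:
            band = mask[top:top + h, left:left + feather]
            band[:] = np.maximum(band, ramp[None, :])
        if right > 0:
            band = mask[top:top + h, left + w - feather:left + w]
            band[:] = np.maximum(band, ramp[None, :])
    return padded, mask

img = np.ones((4, 4, 3), dtype=np.float32)
padded, mask = pad_for_outpaint(img, 2, 0, 2, 0, feather=1)
print(padded.shape)  # (4, 8, 3)
```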

2.1.3. Convert Image to Mask: Transforms a selected channel (red, green, blue, or alpha) of an image into a mask, isolating a portion of the image for processing.
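The channel-to-mask conversion itself is straightforward; this is a hedged numpy sketch of the idea (the function name and normalization are assumptions, not the node's actual code):

```python
import numpy as np

def image_to_mask(img, channel="alpha"):
    """Extract one channel of an HxWx4 RGBA uint8 image as a float mask
    in [0, 1] -- a sketch of what 'Convert Image to Mask' does."""
    idx = {"red": 0, "green": 1, "blue": 2, "alpha": 3}[channel]
    return img[..., idx].astype(np.float32) / 255.0

rgba = np.zeros((2, 2, 4), dtype=np.uint8)
rgba[..., 3] = 255  # fully opaque alpha
print(image_to_mask(rgba, "alpha"))  # all 1.0
```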

In this phase, the padded and masked images are prepared.

[Image: ComfyUI Inpainting ControlNet]

2.2. ComfyUI Outpainting Process (Use Inpainting ControlNet model)

Here are the key nodes involved in this step:

2.2.1. Apply Advanced ControlNet: Applies ControlNet guidance to the inpainting process, targeting the area outlined by the mask prepared in the first step.

2.2.2. Load ControlNet Model: Selects and loads the inpainting ControlNet model.

2.2.3. Inpainting Preprocessor: Sends the padded and masked images, which were prepared in the first step, to the inpainting preprocessor.
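A common way such a preprocessor combines image and mask is to mark the masked pixels with a sentinel value so the inpainting ControlNet knows which region to fill. The sketch below assumes that convention; the sentinel value and function name are illustrative assumptions, not the preprocessor's documented API:

```python
import numpy as np

def inpaint_preprocess(img, mask):
    """Combine an image and a mask into one conditioning input by
    replacing masked pixels with a sentinel (-1.0 here, an assumed
    convention for inpaint-style ControlNet conditioning)."""
    out = img.astype(np.float32).copy()
    out[mask > 0.5] = -1.0  # pixels the model should regenerate
    return out

img = np.ones((2, 2, 3), dtype=np.float32)
mask = np.array([[1.0, 0.0], [0.0, 0.0]])
print(inpaint_preprocess(img, mask)[0, 0])  # [-1. -1. -1.]
```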

2.2.4. Scaled Soft Weights: Adjusts the weights in the inpainting process for nuanced control, featuring parameters like base_multiplier for adjusting weight strength and flip_weights to invert the effect of the weights.
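One common scheme for soft ControlNet weights decays the strength geometrically across the model's layers, so early layers receive weaker guidance than later ones. The sketch below assumes that scheme and a 13-layer layout; treat it as an illustration of what base_multiplier and flip_weights do, not the node's exact formula:

```python
def scaled_soft_weights(num_layers=13, base_multiplier=0.825, flip_weights=False):
    """Per-layer ControlNet strengths: base_multiplier**(num_layers-1-i)
    rises geometrically to 1.0 at the last layer. flip_weights reverses
    the ordering so guidance is strongest at the first layer instead."""
    weights = [base_multiplier ** (num_layers - 1 - i) for i in range(num_layers)]
    if flip_weights:
        weights.reverse()
    return weights

print(scaled_soft_weights()[-1])  # 1.0 (full strength at the last layer)
```

Lowering base_multiplier loosens the ControlNet's grip on the generated content, which can help the outpainted region deviate enough from the surrounding context to look natural.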

[Image: ComfyUI Outpainting Preparation]

2.3. ComfyUI Outpainting Initial Output

At this stage, the initial outpainted image is generated. However, noticeable edges around the original image may be visible.

[Image: ComfyUI Outpainting Initial Output]

2.4. ComfyUI Outpainting Edge Repair

This final step involves masking the edge area for regeneration, which improves the overall look of the outpainted area.

Here are the essential nodes involved in incorporating noticeable edges into the mask:

2.4.1. Mask Dilate Region: Expands the mask's boundaries within an image, useful for ensuring complete coverage or creating a larger boundary for processing effects.

2.4.2. Mask Contour: Identifies and outlines the edges within a mask, aiding in the distinction between different elements in an image.
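The two operations above can be illustrated with a small pure-numpy sketch: dilation grows the mask, and subtracting the original mask from its dilation leaves the edge ring that gets sent back for regeneration. This is a conceptual stand-in for the actual nodes (note that np.roll wraps at the borders, which is fine for masks that do not touch the image edge):

```python
import numpy as np

def dilate(mask, r=1):
    """Binary dilation by a (2r+1)x(2r+1) square structuring element."""
    out = mask.copy()
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out |= np.roll(mask, (dy, dx), axis=(0, 1))
    return out

def contour(mask, thickness=1):
    """Edge band of a binary mask: its dilation minus the mask itself --
    the ring the edge-repair step masks for regeneration."""
    return dilate(mask, thickness) & ~mask

mask = np.zeros((7, 7), dtype=bool)
mask[2:5, 2:5] = True        # a 3x3 square
print(contour(mask).sum())   # 16 pixels in the one-pixel ring around it
```

Regenerating only this thin band (optionally dilated a little wider for safety) blends the seam without disturbing either the original image or the bulk of the outpainted area.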

[Image: ComfyUI Outpainting Edge Repair]

This workflow is inspired by Ning.
