
FLUX ControlNet Depth-V3 & Canny-V3

Workflow Name: RunComfy/FLUX-ControlNet
Workflow ID: 0000...1115
Transform your creative process with the FLUX-ControlNet Depth and Canny models, designed by XLabs AI for the FLUX.1 [dev] model. This ComfyUI workflow guides you through loading the models, setting parameters, and combining FLUX-ControlNets for fine-grained control over image content and structure. Whether you're working from depth maps or edge detection, FLUX-ControlNet empowers you to create stunning AI art.

FLUX is a new image generation model developed by Black Forest Labs. The FLUX-ControlNet-Depth and FLUX-ControlNet-Canny models were created by the XLabs AI team, who also created this ComfyUI FLUX ControlNet workflow. For more details, please visit x-flux-comfyui. All credit goes to the XLabs AI team for their contribution.

About FLUX

The FLUX models are preloaded on RunComfy, named flux/flux-schnell and flux/flux-dev.

  • When launching a RunComfy Medium-Sized Machine: select the flux-schnell, fp8 checkpoint and the t5_xxl_fp8 clip to avoid out-of-memory issues.
  • When launching a RunComfy Large-Sized or Above Machine: opt for the larger flux-dev, default checkpoint and the higher-precision t5_xxl_fp16 clip.

For more details, visit: ComfyUI FLUX | A New Art Image Generation

🌟The following FLUX-ControlNet Workflow is specifically designed for the FLUX.1 [dev] model.🌟

About FLUX-ControlNet Workflow (FLUX-ControlNet-Depth-V3 and FLUX-ControlNet-Canny-V3)

We present two exceptional FLUX-ControlNet Workflows: FLUX-ControlNet-Depth and FLUX-ControlNet-Canny, each offering unique capabilities to enhance your creative process.

1. How to Use ComfyUI FLUX-ControlNet-Depth-V3 Workflow

The FLUX-ControlNet Depth model is first loaded using the "LoadFluxControlNet" node. Select the "flux-depth-controlnet.safetensors" model for optimal depth control.

  • flux-depth-controlnet
  • flux-depth-controlnet-v2
  • flux-depth-controlnet-v3: trained at 1024x1024 resolution and works best at 1024x1024; an improved, more realistic version

Connect the output of this node to the "ApplyFluxControlNet" node. Also, connect your depth map image to the image input of this node. The depth map should be a grayscale image where closer objects are brighter and distant objects are darker, allowing FLUX-ControlNet to interpret depth information accurately.

You can generate the depth map from an input image using a depth estimation model. Here, the "MiDaS-DepthMapPreprocessor" node converts the loaded image into a depth map suitable for FLUX-ControlNet (see the sketch after this list). Key params:

  • Threshold = 6.28 (controls edge sensitivity)
  • Depth scale = 0.1 (factor by which depth values are scaled)
  • Output Size = 768 (resolution of the depth map)
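
The node performs this conversion inside ComfyUI, but the same idea can be reproduced standalone. Below is a minimal sketch using the publicly released MiDaS models on torch.hub; it is not the node's internal code, the DPT_Large variant and file names are assumptions, and a square 768x768 output is assumed to match the Output Size above:

    import cv2
    import torch

    # Load a MiDaS depth-estimation model and its matching preprocessing
    # transform from the official intel-isl/MiDaS torch.hub release.
    midas = torch.hub.load("intel-isl/MiDaS", "DPT_Large")
    midas.eval()
    transforms = torch.hub.load("intel-isl/MiDaS", "transforms")

    img = cv2.cvtColor(cv2.imread("input.jpg"), cv2.COLOR_BGR2RGB)
    batch = transforms.dpt_transform(img)

    with torch.no_grad():
        pred = midas(batch)
        # Resize the prediction back to the source image resolution.
        pred = torch.nn.functional.interpolate(
            pred.unsqueeze(1), size=img.shape[:2],
            mode="bicubic", align_corners=False,
        ).squeeze()

    depth = pred.cpu().numpy()
    # MiDaS predicts inverse depth (larger = closer), which matches the
    # "closer objects are brighter" convention once normalized to 0-255.
    depth = (255 * (depth - depth.min()) / (depth.max() - depth.min())).astype("uint8")
    cv2.imwrite("depth.png", cv2.resize(depth, (768, 768)))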

In the "ApplyFluxControlNet" node, the Strength parameter determines how much the generated image is influenced by the FLUX-ControlNet depth conditioning. Higher strength will make the output adhere more closely to the depth structure.

2. How to Use ComfyUI FLUX-ControlNet-Canny-V3 Workflow

The process is very similar to the FLUX-ControlNet-Depth workflow. First, the FLUX-ControlNet Canny model is loaded using "LoadFluxControlNet". Then, it is connected to the "ApplyFluxControlNet" node.

  • flux-canny-controlnet
  • flux-canny-controlnet-v2
  • flux-canny-controlnet-v3: trained at 1024x1024 resolution and works best at 1024x1024; an improved, more realistic version

The input image is converted to a Canny edge map using the "CannyEdgePreprocessor" node, optimizing it for FLUX-ControlNet (see the sketch after this list). Key params:

  • Low Threshold = 100 (gradients below this are discarded)
  • High Threshold = 200 (gradients above this are strong edges; in-between values are kept only when connected to strong edges)
  • Size = 832 (edge map resolution)
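
Outside ComfyUI, an equivalent edge map can be produced with OpenCV's Canny detector, which uses the same low/high hysteresis thresholds. A minimal sketch; the file names and square 832x832 canvas are assumptions for illustration:

    import cv2

    # Read the source image in grayscale and match the node's edge map size.
    img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (832, 832))

    # Gradients below 100 are rejected; above 200 become strong edges;
    # in-between pixels are kept only when connected to strong edges.
    edges = cv2.Canny(img, 100, 200)

    # ControlNet image inputs are usually 3-channel, so replicate the
    # single-channel edge map across RGB (assumption).
    cv2.imwrite("canny.png", cv2.cvtColor(edges, cv2.COLOR_GRAY2RGB))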

The resulting Canny edge map is connected to the "ApplyFluxControlNet" node. Again, use the Strength parameter to control how much the edge map influences the FLUX-ControlNet generation.

3. For Both ComfyUI FLUX-ControlNet-Depth-V3 and ComfyUI FLUX-ControlNet-Canny-V3

In both FLUX-ControlNet workflows, the CLIP-encoded text prompt is connected to drive the image contents, while the FLUX-ControlNet conditioning controls the structure and geometry based on the depth or edge map.

By combining different FLUX-ControlNets, input modalities like depth and edges, and tuning their strength, you can achieve fine-grained control over both the semantic content and structure of the images generated by FLUX-ControlNet.
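
A minimal sketch of that stacking idea, assuming each ControlNet contributes strength-scaled residuals that are summed into the same backbone activations; the actual XLabs nodes may combine conditioning differently, and the shapes and strength values below are illustrative:

    import torch

    def combine_controlnets(backbone_acts, nets):
        # nets: list of (residuals, strength) pairs, e.g. one entry from a
        # depth map and one from a Canny edge map.
        out = list(backbone_acts)
        for residuals, strength in nets:
            out = [h + strength * r for h, r in zip(out, residuals)]
        return out

    acts = [torch.randn(1, 16, 64) for _ in range(2)]
    depth_res = [torch.randn(1, 16, 64) for _ in range(2)]
    canny_res = [torch.randn(1, 16, 64) for _ in range(2)]
    # Weight the depth structure more heavily than the edge map.
    guided = combine_controlnets(acts, [(depth_res, 0.9), (canny_res, 0.5)])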

License: controlnet.safetensors falls under the FLUX.1 [dev] Non-Commercial License

License

View license files:

flux/model_licenses/LICENSE-FLUX1-dev

flux/model_licenses/LICENSE-FLUX1-schnell

The FLUX.1 [dev] Model is licensed by Black Forest Labs, Inc. under the FLUX.1 [dev] Non-Commercial License. Copyright Black Forest Labs, Inc.

IN NO EVENT SHALL BLACK FOREST LABS, INC. BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH USE OF THIS MODEL.
