This workflow delivers a modular Image Bypass pipeline for ComfyUI that combines non‑semantic normalization, FFT‑domain controls, and camera pipeline simulation. It is designed for creators and researchers who need a reliable way to process images through an Image Bypass stage while keeping full control over input routing, preprocessing behavior, and output consistency.
At its core, the graph generates or ingests an image, then routes it through an Image Bypass Suite that can apply sensor‑like artifacts, frequency shaping, texture matching, and a perceptual optimizer. The result is a clean, configurable path that fits batch work, automation, and rapid iteration on consumer GPUs. The Image Bypass logic is powered by the open source utility from this repository: PurinNyova/Image-Detection-Bypass-Utility.
At a high level, the workflow offers two ways to produce the image that enters the Image Bypass Suite: a Text‑to‑Image branch (T2I) and an Image‑to‑Image branch (I2I). Both converge on a single processing node that applies the Image Bypass logic and writes the final result to disk. The graph also saves the pre‑bypass baseline so you can compare outputs.
Text‑to‑Image (T2I): Use this path when you want to synthesize a fresh image from prompts. Your prompt encoder is loaded by CLIPLoader (#164) and read by CLIP Text Encode (Positive Prompt) (#168) and CLIP Text Encode (Negative Prompt) (#163). The UNet is loaded with UNETLoader (#165), optionally patched by ModelSamplingAuraFlow (#166) to adjust the model’s sampling behavior, and then sampled with KSampler (#167) starting from EmptySD3LatentImage (#162). The decoded image comes out of VAEDecode (#158) and is saved as a baseline via SaveImage (#159) before entering the Image Bypass Suite. For this branch, your primary inputs are the positive/negative prompts and, if desired, the seed strategy in KSampler (#167).
Image‑to‑Image (I2I): Choose this path when you already have an image to process. Load it via LoadImage (#157) and route the IMAGE output to the Image Bypass Suite input on NovaNodes (#146). This bypasses text conditioning and sampling entirely. It is ideal for batch post‑processing, experiments on existing datasets, or standardizing outputs from other workflows. You can freely switch between T2I and I2I depending on whether you want to generate or strictly transform.
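Both branches converge on NovaNodes (#146), so switching between T2I and I2I amounts to re‑pointing that node's image input. Below is a minimal sketch that does this over ComfyUI's HTTP API, assuming a local server and a workflow exported in API format; the input name "image" on NovaNodes is our assumption, so check your exported JSON for the actual key.

```python
import json
import requests  # assumes a ComfyUI server running locally on the default port

COMFY_URL = "http://127.0.0.1:8188/prompt"

def queue_bypass(prompt_graph: dict, use_i2i: bool) -> str:
    """Re-point the NovaNodes (#146) image input and queue the graph.

    `prompt_graph` is the workflow exported in ComfyUI's API format
    (a dict keyed by node id, each entry holding class_type and inputs).
    """
    # LoadImage (#157) for I2I, VAEDecode (#158) for T2I; [node_id, output_index]
    prompt_graph["146"]["inputs"]["image"] = ["157", 0] if use_i2i else ["158", 0]
    resp = requests.post(COMFY_URL, json={"prompt": prompt_graph})
    resp.raise_for_status()
    return resp.json()["prompt_id"]

with open("image_bypass_workflow_api.json") as f:  # hypothetical export filename
    graph = json.load(f)
print(queue_bypass(graph, use_i2i=True))
```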
This is the heart of the graph. The central processor NovaNodes (#146) receives the incoming image and two option blocks: CameraOptionsNode (#145) and NSOptionsNode (#144). The node can operate in a streamlined auto mode or a manual mode that exposes controls for frequency shaping (FFT smoothing/matching), pixel and phase perturbations, local contrast and tone handling, optional 3D LUTs, and texture statistics adjustment. Two optional inputs let you plug in an auto white‑balance reference and an FFT/texture reference image to guide normalization. The final Image Bypass result is written by SaveImage (#147), giving you both the baseline and the processed output for side‑by‑side evaluation.
NovaNodes (#146): The core Image Bypass processor. It orchestrates frequency‑domain shaping, spatial perturbations, local tone control, LUT application, and optional texture normalization. If you provide an awb_ref_image or fft_ref_image, it will use those references early in the pipeline to guide color and spectral matching. Begin in auto mode to get a sensible baseline, then switch to manual to fine‑tune effect strength and blend for your content and downstream tasks. For consistent comparisons, set and reuse a seed; for exploration, randomize to diversify micro‑variations.
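To make the frequency‑shaping idea concrete, the sketch below blends an image's FFT magnitude toward a reference spectrum while keeping the original phase, which is the general shape of FFT matching. It is illustrative only, not NovaNodes' actual implementation; the function name and blend parameter are ours.

```python
import numpy as np

def match_fft_magnitude(img: np.ndarray, ref: np.ndarray, blend: float = 0.5) -> np.ndarray:
    """Blend the magnitude spectrum of `img` toward that of `ref`, keeping
    `img`'s phase. Both are grayscale float arrays in [0, 1] of equal shape."""
    F_img = np.fft.fft2(img)
    F_ref = np.fft.fft2(ref)
    target_mag = (1.0 - blend) * np.abs(F_img) + blend * np.abs(F_ref)
    # Recombine the matched magnitude with the original phase
    F_out = target_mag * np.exp(1j * np.angle(F_img))
    return np.clip(np.fft.ifft2(F_out).real, 0.0, 1.0)
```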
NSOptionsNode (#144): Controls the non‑semantic optimizer that nudges pixels while preserving perceptual similarity. It exposes iteration count, learning rate, and perceptual/regularization weights (LPIPS and L2) along with gradient clipping. Use it when you need subtle distribution shifts with minimal visible artifacts; keep changes conservative to maintain natural textures and edges. Disable it entirely to measure how much the Image Bypass pipeline helps without an optimizer.
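Conceptually, this optimizer is a gradient loop that trades some objective against LPIPS and L2 penalties under gradient clipping. A minimal PyTorch sketch using the lpips package follows; the `objective` callable, weights, and loop structure are our assumptions, not the node's internals.

```python
import torch
import lpips  # pip install lpips

def nudge_pixels(img, objective, iters=50, lr=1e-3, w_lpips=1.0, w_l2=0.1, clip=1.0):
    """Perturb `img` (1x3xHxW, values in [-1, 1]) while LPIPS and L2 terms
    keep it perceptually close to the original. `objective` stands in for
    whatever distribution-shift target the real optimizer pursues."""
    loss_fn = lpips.LPIPS(net="vgg")
    x = img.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        loss = objective(x) + w_lpips * loss_fn(x, img).mean() + w_l2 * (x - img).pow(2).mean()
        loss.backward()
        torch.nn.utils.clip_grad_norm_([x], clip)  # the gradient clipping the node exposes
        opt.step()
        x.data.clamp_(-1.0, 1.0)
    return x.detach()

# Usage with a hypothetical objective that spreads pixel statistics:
# out = nudge_pixels(torch.rand(1, 3, 256, 256) * 2 - 1, objective=lambda z: -z.std())
```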
CameraOptionsNode (#145): Simulates sensor and lens characteristics such as demosaic and JPEG cycles, vignette, chromatic aberration, motion blur, banding, and read noise. Treat it as a realism layer that can add plausible acquisition artifacts to your images. Enable only the components that match your target capture conditions; stacking too many can over‑constrain the look. For reproducible outputs, keep the same camera options while varying other parameters.
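The sketch below chains three of those effect classes (a vignette, Gaussian read noise, and one JPEG encode/decode cycle) using numpy and Pillow. It is a rough stand‑in for the kind of processing CameraOptionsNode applies; parameter names and defaults are ours.

```python
import io
import numpy as np
from PIL import Image

def simulate_camera(img: np.ndarray, jpeg_quality: int = 85,
                    vignette: float = 0.3, read_noise: float = 0.01) -> np.ndarray:
    """Apply vignette, read noise, and a JPEG cycle to an HxWx3 float image in [0, 1]."""
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    # Radial falloff that darkens toward the corners
    r = np.sqrt(((yy - h / 2) / (h / 2)) ** 2 + ((xx - w / 2) / (w / 2)) ** 2)
    out = img * (1.0 - vignette * np.clip(r, 0.0, 1.0) ** 2)[..., None]
    out = np.clip(out + np.random.normal(0.0, read_noise, out.shape), 0.0, 1.0)
    # One encode/decode cycle imprints JPEG block statistics
    buf = io.BytesIO()
    Image.fromarray((out * 255).astype(np.uint8)).save(buf, format="JPEG", quality=jpeg_quality)
    buf.seek(0)
    return np.asarray(Image.open(buf)).astype(np.float32) / 255.0
```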
ModelSamplingAuraFlow (#166): Patches the loaded model’s sampling behavior before it reaches KSampler (#167). This is useful when your chosen backbone benefits from an alternate step trajectory. Adjust it when you notice a mismatch between prompt intent and sample structure, and tune it in tandem with your sampler and scheduler choices.
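Flow‑matching backbones commonly expose this patch as a sigma shift that re‑weights how much of the trajectory is spent at high noise. The mapping below is one common formulation of such a shift; whether ModelSamplingAuraFlow uses exactly this form, and its default shift value, are assumptions.

```python
def shifted_sigma(sigma: float, shift: float = 1.7) -> float:
    """Map a noise level through a time shift: shift > 1 pushes the schedule
    toward higher noise early on. A common flow-model formulation, assumed here."""
    return shift * sigma / (1.0 + (shift - 1.0) * sigma)

for s in (0.1, 0.5, 0.9):
    print(f"sigma {s} -> {shifted_sigma(s):.3f}")
```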
KSampler (#167): Executes diffusion sampling given the model, positive and negative conditioning, and the starting latent. The key levers are seed strategy, steps, sampler type, and overall denoise strength. Lower steps help speed, while higher steps can stabilize structure if your base model requires it. Keep this node’s behavior stable while iterating on Image Bypass settings so you can attribute changes to the postprocess rather than the generator.
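One way to follow that advice is to pin the KSampler seed in the API‑format graph and sweep only the bypass settings, so any visible difference is attributable to the postprocess. A sketch, with a hypothetical NovaNodes parameter name:

```python
import copy
import json
import requests

with open("image_bypass_workflow_api.json") as f:  # hypothetical export filename
    base = json.load(f)

base["167"]["inputs"]["seed"] = 123456  # pin the KSampler (#167) seed

# Sweep a hypothetical bypass strength while the generator stays fixed
for strength in (0.25, 0.5, 0.75):
    graph = copy.deepcopy(base)
    graph["146"]["inputs"]["effect_strength"] = strength  # name is an assumption
    requests.post("http://127.0.0.1:8188/prompt", json={"prompt": graph})
```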
A few practical tips:

- You can swap the sampling backbone (for example, z_image_turbo_bf16) and still route results through the same processing stack.
- Choose awb_ref_image and fft_ref_image references that share lighting and content characteristics with your target domain; mismatched references can reduce realism.
- Keep SaveImage (#159) as the baseline and SaveImage (#147) as the Image Bypass output so you can A/B test settings and track improvements.
- Increase the EmptySD3LatentImage (#162) batch size only as VRAM allows, and prefer fixed seeds when measuring small parameter changes.

This workflow implements and builds upon the following works and resources. We gratefully acknowledge PurinNyova, author of Image-Detection-Bypass-Utility, for their contributions and maintenance. For authoritative details, please refer to the original documentation and repositories linked below.
Note: Use of the referenced models, datasets, and code is subject to the respective licenses and terms provided by their authors and maintainers.
RunComfy is the premier ComfyUI platform, offering an online ComfyUI environment and services, along with ComfyUI workflows featuring stunning visuals. RunComfy also provides AI Playground, enabling artists to harness the latest AI tools to create incredible art.