This workflow turns a short text prompt into a seamless texture, then converts that texture into full PBR material maps using the CHORD Model. Built for material artists, environment teams, and technical artists, it produces a tileable texture along with base color, normal, roughness, and metalness maps, plus a height map for displacement-ready assets.
The graph follows the two-stage generate-then-estimate design behind the CHORD Model: first synthesize a tileable texture, then decompose it into SVBRDF channels suitable for real-time engines and DCC tools. You can also skip generation and feed any reference texture directly to the estimation stage.
This graph is organized into two groups that can run end to end or independently. Group 1 creates a tileable texture from text. Group 2 runs the CHORD Model to estimate PBR maps from that texture or from a texture you provide.
This group turns your prompt into a seamless, flat-lit texture. The prompt is encoded by CLIPTextEncode (#4) and sent to KSampler (#7), which samples the z_image_turbo UNet with an AuraFlow scheduler set by ModelSamplingAuraFlow (#2). An empty latent from EmptySD3LatentImage (#6) defines the working resolution and batch. The decoded image from VAEDecode (#9) is saved as a reference texture and also forwarded downstream for material estimation. Write prompts that call out material identity, microstructure, and tiling intent; for example, include phrases like "seamless tiling" and "orthographic top-down".
This group loads the CHORD Model with ChordLoadModel (#12) and prepares the texture with ResizeAndPadImage (#11), which fits it onto a square canvas. ChordMaterialEstimation (#20) predicts base color, normal, roughness, and metalness directly from the input texture. The graph also produces a height map by converting the predicted normal with ChordNormalToHeight (#18), which is valuable for displacement or parallax workflows. If you already have a texture, bypass Group 1 and feed it here; keep it flat-lit and free of baked shadows for best CHORD Model results.
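For intuition, the sketch below shows what square-canvas preparation typically involves: scale to fit, then pad the remainder. It is a conceptual illustration under assumed behavior (letterbox-style padding with a neutral fill), not the node's actual implementation; the function name and fill color are invented for the example.

```python
from PIL import Image

def resize_and_pad(img: Image.Image, size: int = 1024) -> Image.Image:
    """Fit an image inside a size x size square without distortion:
    scale to fit, then center it on a padded canvas."""
    ratio = min(size / img.width, size / img.height)
    new_w, new_h = round(img.width * ratio), round(img.height * ratio)
    resized = img.resize((new_w, new_h), Image.LANCZOS)
    canvas = Image.new("RGB", (size, size), (0, 0, 0))  # assumed black padding
    canvas.paste(resized, ((size - new_w) // 2, (size - new_h) // 2))
    return canvas
```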
CLIPTextEncode (#4): Encodes your text into conditioning for the texture generator. Be explicit about material class, surface qualities, and tiling intent. Terms like orthographic, seamless, grout lines, pores, fibers, or micro-scratches help the generator produce structures that the CHORD Model can decompose reliably.
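For illustration, a prompt in this spirit might read as follows; the wording is only an example, not a required format:

```text
seamless tiling terracotta tile floor, orthographic top-down view, flat even
lighting, fine grout lines, subtle pores and micro-scratches, no shadows,
no perspective
```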
KSampler (#7): Drives the latent diffusion process that creates the texture. Use it to trade speed for fidelity, switch samplers, and explore variations via the seed. A blank negative prompt is provided by ConditioningZeroOut (#5); add typical negatives only if you see artifacts you want to suppress.
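To make the speed-versus-fidelity trade concrete, here is a hypothetical set of starting values in KSampler's fields. Turbo-class checkpoints generally favor few steps and low CFG, but these exact numbers are assumptions to tune against your model, not settings prescribed by the workflow.

```python
# Hypothetical starting values for KSampler (#7); tune per checkpoint.
ksampler_settings = {
    "seed": 0,               # vary to explore texture variations
    "steps": 8,              # turbo-class models converge in few steps
    "cfg": 1.0,              # low guidance is typical for turbo models
    "sampler_name": "euler",
    "scheduler": "simple",
    "denoise": 1.0,          # full denoise for pure text-to-texture
}
```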
ModelSamplingAuraFlow (#2): Applies AuraFlow-style scheduling to the UNet for sharper, more coherent texture synthesis with z_image_turbo. Adjust the scheduling here when you experiment with different sampling behaviors supported by the model.
ChordMaterialEstimation (#20): Runs the CHORD Model to estimate SVBRDF maps from the input texture. The results are production-ready base color, normal, roughness, and metalness maps. Use flat, evenly lit inputs without perspective to maximize accuracy; complex shadows or highlights can bias the decomposition.
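Once the maps are exported, many engines expect roughness and metalness packed into a single texture; glTF's ORM layout (occlusion in R, roughness in G, metalness in B) is a common convention. A minimal sketch, assuming the estimated maps were saved as grayscale images (the file names are hypothetical):

```python
import numpy as np
from PIL import Image

def pack_orm(roughness_path: str, metalness_path: str) -> Image.Image:
    """Pack grayscale roughness and metalness maps into a glTF-style
    ORM texture: occlusion in R (white if absent), roughness in G,
    metalness in B."""
    rough = np.asarray(Image.open(roughness_path).convert("L"))
    metal = np.asarray(Image.open(metalness_path).convert("L"))
    occ = np.full_like(rough, 255)  # no occlusion map: treat as unoccluded
    return Image.fromarray(np.stack([occ, rough, metal], axis=-1))

# pack_orm("roughness.png", "metalness.png").save("orm.png")
```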
ChordNormalToHeight (#18): Converts the CHORD-predicted normal into a height map suited for displacement. Treat height as a relative surface signal and calibrate intensity in your renderer to match the intended scale.
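For context on what such a conversion involves, one classic approach is frequency-domain integration of the surface gradients implied by the normals (the Frankot-Chellappa method); the node's actual algorithm may differ. A self-contained sketch, assuming a tangent-space normal map with components in [-1, 1]:

```python
import numpy as np

def normal_to_height(normal: np.ndarray) -> np.ndarray:
    """Integrate an (H, W, 3) tangent-space normal map into a relative
    height map via Frankot-Chellappa frequency-domain integration.
    Assumes an OpenGL-style green channel; flip ny for DirectX normals."""
    nx, ny, nz = normal[..., 0], normal[..., 1], normal[..., 2]
    nz = np.clip(nz, 1e-3, None)            # guard against division by zero
    p, q = -nx / nz, -ny / nz               # surface gradients dh/dx, dh/dy
    h, w = p.shape
    u = np.fft.fftfreq(w)[None, :]          # frequencies along x
    v = np.fft.fftfreq(h)[:, None]          # frequencies along y
    denom = u ** 2 + v ** 2
    denom[0, 0] = 1.0                       # avoid 0/0 at the DC term
    Hf = -1j * (u * np.fft.fft2(p) + v * np.fft.fft2(q)) / (2 * np.pi * denom)
    Hf[0, 0] = 0.0                          # height is defined up to an offset
    height = np.real(np.fft.ifft2(Hf))
    height -= height.min()                  # normalize to [0, 1] for export
    return height / max(height.max(), 1e-8)
```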
EmptySD3LatentImage (#6): Sets the canvas size and batch for texture synthesis. Choose a square resolution that matches your downstream material targets and keep this consistent across generations for predictable texel density.
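As a quick reference for how the canvas maps to the latent, here is a sketch assuming the SD3-class VAE layout (8x spatial downsample, 16 latent channels); treat the channel count as an assumption rather than a guarantee for this specific checkpoint:

```python
def sd3_latent_shape(width: int, height: int, batch: int = 1):
    """Shape of the empty latent for a given canvas, assuming an
    SD3-class VAE (8x spatial downsample, 16 latent channels)."""
    assert width % 8 == 0 and height % 8 == 0, "use multiples of 8"
    return (batch, 16, height // 8, width // 8)

# e.g. a 1024x1024 canvas -> (1, 16, 128, 128)
```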
ResizeAndPadImage (#11): Resizes and pads the input texture onto a square canvas before material estimation in Group 2, preserving the aspect ratio so non-square sources are not distorted.

This workflow implements and builds upon the following works and resources. We gratefully acknowledge Ubisoft La Forge for the CHORD (Chain of Rendering Decomposition) model and for their contributions and maintenance. For authoritative details, please refer to the original documentation and repositories linked below.
Note: Use of the referenced models, datasets, and code is subject to the respective licenses and terms provided by their authors and maintainers.