
Fabricated

ARTIFICIAL INTELLIGENCE

Fashion Reimagined, Fashion Futurism

John Galliano AI Photoshoot

Fabricated, a futuristic study on Fashion

Fabricated started as a study of how far AI can go when pushed toward high-end fashion editorial imagery. The visual references were clear from the beginning: John Galliano's Dior years, when his collections treated fashion as theater rather than simple clothing. The chaotic, saturated settings recall the work of Miles Aldridge and David LaChapelle, complementing the styling of the clothing.

The project began in ComfyUI, using a custom node workflow I built to generate hundreds of visual variations around specific looks, silhouettes, materials, colors, and environments. Image direction was steered through different prompts and models with varying levels of prompt adherence: Stable Diffusion with multiple checkpoints and LoRAs, plus Flux Kontext for refined control.
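The variation pass described above can be thought of as a grid over the creative axes, with several seeds per combination. A minimal sketch of that idea in plain Python (the axis values and the prompt template are illustrative assumptions, not the actual ComfyUI workflow):

```python
from itertools import product

# Illustrative creative axes; the real workflow drives these through ComfyUI nodes.
silhouettes = ["bias-cut gown", "structured coat", "draped mini dress"]
materials = ["latex", "satin", "tulle"]
environments = ["saturated diner", "neon corridor", "baroque salon"]
seeds = range(4)  # several seeds per combination

def build_prompt(silhouette, material, environment):
    """Compose one editorial prompt from the three creative axes."""
    return f"high-fashion editorial, {material} {silhouette}, {environment}"

# Every (look, seed) pair becomes one queued generation.
variations = [
    (build_prompt(s, m, e), seed)
    for (s, m, e), seed in product(product(silhouettes, materials, environments), seeds)
]

print(len(variations))  # 3 * 3 * 3 * 4 = 108 prompt/seed pairs
```

Even a small grid like this explains how "hundreds of variations" accumulate quickly from a handful of deliberate choices.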

Once I had a batch of images that felt strong enough, I selected the best of them. Those went through refinement using other AI tools, including Weavy, Gemini / Nano Banana Pro, ChatGPT, and traditional editing workflows. This stage focused on improving realism, fixing anatomy, polishing environments, refining faces and hands, and making the images feel closer to finished fashion editorials than to raw generations.

The project became less about generating a single good image and more about building a full visual direction through iteration, selection, and control. It shows how far an idea can expand with AI, but also how much the final result still depends on art direction, taste, and human curation.

[Editorial images: red latex dress, blue satin mini dress, blue satin gown, rainbow dress]
[Image: AI model reference sheet]

Model Reference Sheet

Part of this project involved creating a custom AI model and placing her into the editorial images. To do that consistently, I first generated a four-angle reference sheet: front, three-quarter, profile, and three-quarter back, with the hair pulled back and no styling so the face reads clearly from every direction.

From here, the model can be introduced into any of the Fabricated environments while keeping her features coherent across frames. Consistency in AI still depends on deliberate human direction.

[Before/after comparisons, images 1–4]
[Seed variations 1–7]
SEED

Why it matters

Seed, Adherence
& Strength

Every image generated in ComfyUI starts from a seed, a number that determines the random noise the model begins with. Change it and the entire result shifts. Same prompt, completely different image. These seven frames came from the same direction with seven different seeds.
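The effect is easy to demonstrate numerically: the seed fully determines the initial noise tensor the sampler starts from, so the same seed reproduces it exactly while a different seed gives unrelated noise. A small sketch with NumPy standing in for the sampler's noise source:

```python
import numpy as np

def initial_noise(seed, shape=(4, 8, 8)):
    """Latent starting noise for a generation, fully determined by the seed."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape)

a = initial_noise(42)
b = initial_noise(42)   # same seed -> identical starting point, identical image
c = initial_noise(43)   # different seed -> a completely different image

print(np.array_equal(a, b))  # True
print(np.array_equal(a, c))  # False
```

This is why reusing a seed lets you rerun a prompt with small wording changes while holding the composition roughly stable.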

Prompt adherence controls how closely the model follows your instructions; the sweet spot is where the model interprets rather than transcribes. Denoising strength dictates how much the model transforms the input when working from a reference: at low strength it barely moves, at high strength it changes completely.
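Strength roughly corresponds to how much noise is injected into the reference before the denoiser takes over. A toy sketch of that idea (a simplified linear blend, not the real diffusion noise schedule):

```python
import numpy as np

rng = np.random.default_rng(0)
reference = np.ones((8, 8))          # stand-in for the encoded input image
noise = rng.standard_normal((8, 8))  # sampler's starting noise

def noised_start(image, noise, strength):
    """Blend the reference toward pure noise; the denoiser starts from here.
    Simplified linear mix, assumed for illustration only."""
    return (1.0 - strength) * image + strength * noise

low = noised_start(reference, noise, 0.15)  # barely perturbed: output stays close
high = noised_start(reference, noise, 0.9)  # mostly noise: output can change completely

# The low-strength start deviates far less from the reference than the high-strength one.
print(np.abs(low - reference).mean() < np.abs(high - reference).mean())  # True
```

The less of the original signal that survives the noising step, the more freedom the model has to reinvent the image.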