Rapid random prototyping

Winkletter  •  29 Apr 2025

The generative image models are pretty dang good these days. Midjourney, Imagen, Ideogram, and ChatGPT’s Image Generator can create almost anything I need. But I still find myself using Stable Diffusion on my PC, especially when I’m asking, “What do I need?”

One of my first steps in redesigning my card set is deciding how I want it to look. Using a wireframe of my card layout as input, I let the model cook today and generated 2,000 images. They're rough, and the text is just garbled junk. But at this point it's an idea generator: it lets me see how different options might look. What if the card represented a book cover? Or what if it looked like a page from a book?
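A minimal sketch of how a batch run like this can be driven: enumerate style/subject combinations into (prompt, seed) pairs, then feed each pair plus the wireframe to an img2img pipeline. The style and subject lists below are my own placeholders, not the prompts actually used.

```python
import itertools
import random

# Placeholder prompt options -- swap in your own design directions.
styles = ["vintage book cover", "page from an old book",
          "art deco poster", "woodcut print"]
subjects = ["oracle card", "tarot card", "index card"]

def prompt_batch(n, seed=0):
    """Sample n (prompt, seed) pairs for a batch generation run."""
    rng = random.Random(seed)
    combos = list(itertools.product(styles, subjects))
    batch = []
    for _ in range(n):
        style, subject = rng.choice(combos)
        prompt = f"{subject} designed as a {style}"
        # Each pair would be sent to an img2img pipeline along with
        # the wireframe as the init image.
        batch.append((prompt, rng.randrange(2**32)))
    return batch

batch = prompt_batch(2000)  # one entry per image to generate
```

Randomizing both the prompt and the seed is what makes this useful as an idea generator: the same handful of directions fan out into thousands of distinct roughs.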

At this point I'm prototyping: dropping an image into a couple of the results and imagining how I would make the cards and what they would feel like.

Comments

What tools are people using to fix the AI-image-text-is-junk conundrum? Cool-looking cards, by the way.

therealbrandonwilson  •  29 Apr 2025, 2:54 pm

I can think of two ways AI text is improving. First, some of the models are simply getting better at text generation. You can see this in action by searching Ideogram for “text”.

But also, some of the models have "remix" capabilities where you can upload an image and use it as a partial input. This means you can write the text into an image yourself, and the generated text is more likely to come out correct.
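One way to set up that remix input, as a sketch: render the card title onto a blank canvas with Pillow and upload the result as the init image, so the model is biased toward preserving the lettering. The card size, title, and file name here are all hypothetical.

```python
from PIL import Image, ImageDraw

# Blank card-sized canvas (placeholder dimensions).
card = Image.new("RGB", (512, 768), "white")
draw = ImageDraw.Draw(card)

# Text the model should keep legible after remixing.
draw.text((40, 40), "THE WANDERER", fill="black")

# This file becomes the partial input to the remix step.
card.save("init_card.png")
```

Because the text already exists in the pixels, the model only has to stylize around it rather than hallucinate letterforms from scratch.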

Winkletter  •  30 Apr 2025, 6:29 am
