This post uses Midjourney’s ability to combine a simple sketch with a text prompt to generate a base reference image, which can then be used to generate other images. First, I created a simple sketch of a cat by tracing a photo and keeping only the tracing as the starting image. I then uploaded the sketch to the Midjourney Discord channel. Combining the sketch with a prompt, I got back a set of four images that followed the prompt well and matched the composition and details of the sketch. One of the results is used as the new reference image, and another is used as the featured image for this post.
<reference image> watercolor painting of a cat sitting on a table, staring past the camera
Prompt Details:
- <reference image> — URL to the uploaded image
- watercolor — medium
- painting — style of art
- of a cat — subject
- sitting on a table — location / position
- staring past the camera — affects the focus of the subject relative to the camera
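The prompt anatomy above can be sketched as a small helper that assembles the parts in the order Midjourney expects (reference image URL first, then the text prompt). This is purely illustrative: the function name and example URL are my own, not part of any Midjourney API.

```python
def build_prompt(image_url: str, medium: str, style: str,
                 subject: str, position: str, focus: str) -> str:
    """Assemble a Midjourney-style image prompt: URL first, then the text."""
    text = f"{medium} {style} of {subject} {position}, {focus}"
    return f"{image_url} {text}"

# Hypothetical URL standing in for the uploaded sketch
prompt = build_prompt(
    image_url="https://example.com/cat-sketch.png",
    medium="watercolor",
    style="painting",
    subject="a cat",
    position="sitting on a table",
    focus="staring past the camera",
)
print(prompt)
# → https://example.com/cat-sketch.png watercolor painting of a cat sitting on a table, staring past the camera
```

The same ordering applies when typing the prompt by hand in Discord: paste the image URL, a space, then the descriptive text.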
Update: I later used the same simple cat illustration to control image generation in Stable Diffusion. Take a look at the article How Line Art Controls the Drawing: An Introduction to ControlNet’s LineArt Model.