One way to play with AI art is to upload a basic sketch and let the robots finish it – a friend showed me some of his ‘AI evolved’ drawings. Starting from a very rough five-minute sketch, the AI produced a vaguely similar but smooth, polished version. The pose was the same, but the proportions were corrected, the skin was airbrush-smooth, and the model now wore heavy makeup. I don’t know which specific AI he was using, but he (an older gentleman who quite possibly used airbrushes back in the 80s) was very happy with the results. An AI often does quite well with things it has seen a lot of, such as scantily clad women posed in the style of classic models.
But. It still doesn’t “know” a lot of things. E.g., it has surely seen photos of hands, millions of them, yet somehow doesn’t entirely understand that an arm usually has only one. And birds … it really does not quite understand how they work, though it’s definitely seen pictures of some.
This time I fed in an old ink drawing of mine, of a gravid kiwi skeleton.
I added the verbal prompt “Skeleton of a kiwi bird, with an egg inside. Scientific illustration, intricately detailed” (using Stable Diffusion, via Night Cafe).
Result: delightful bizarreness that does have a sort of scientificky illustrationishness.
Trying the same prompt, but without the prompting image: well, it’s more colourful, so the first effort did take something from my sketch. Uh, but still.
Shifting the words to “kiwi skeleton, scientific illustration, art print” and trimming the edges of my starting-point drawing … better, certainly more in the original drawing’s style. Sort of. Yeah. Something there to adapt and evolve.