The Brush You Have – Kelly Boesch’s AI-Assisted Art

In the caverns of Lascaux, the canvas was stone and the pigment was dirt. Mud was the medium because the mud was there. That was the brush the artist had.

Every generation of artists is supplied with its own set of media, often dictated by the technology of the day. Painters in the nineteenth century benefited from advances in chemical engineering. For the first time in history they had a complete spectrum of colors: brilliant, stable, and pure. Cadmium reds, chrome yellows, cobalt blues, and viridian greens brought Monet’s waterlilies to life. In any previous century they would have been muted and browned.

The nature of the medium also raises questions about the nature of the art. Earlier generations debated whether photography could even be considered art. Or one might sniff that Matisse’s paper cut-outs are no more than scraps cut from someone else’s paper; that’s not real art. But cut paper was the medium available to him, and he was an artist.

My favorite example of shifting media comes from the early days of YouTube. YouTube provided so much free digital video ore that an artist like Kutiman could mine and smelt it to create entirely new videos (like the fantastic Mother of All Funk Chords) solely from found pre-existing videos. Looked at one way, all he did was clip together other people’s videos. But from a different viewpoint, he was a highly skilled artist working with a new medium: cast-off video scraps.

The compost of one generation becomes the pigment for the next. The brush doesn’t make the artist. The artist makes the brush. Kutiman painted with pre-existing videos just as surely as Botticelli painted with a horsehair brush.

All this brings me to the age of artificial intelligence. When we think of an artist using AI, it’s easy to think of the machine as doing the work of the artist. Computer, paint me something nice. This is AI as crutch. This scenario, in which the artist is excised, gives the artist too little credit.

An artist is an artist, and they will find a way to grab hold of the brush. Kelly Boesch is a video artist working in the world of AI. She uses these tools in a maximalist but still expressive way. You want to feel good about the future of self-expression? You want to see the new brush? Look at her work. It is unapologetic. AI is the brush, not the crutch. It’s not pretending to be something that it isn’t. And it’s amazing.

Watch.

As always, a question hangs over the cynics, doubters, and pessimists: if you think this is easy, could you do it? But more important, does it touch you the way art must? For me the answer is yes.

AI is just another brush, and I am delighted to watch Kelly Boesch wield it.

Evolving the Big Brain

Brains are expensive. Your brain consumes about 20% of your resting metabolism. It’s easy to look back at the history of our species and tell a self-congratulatory story about intelligence. We got smart, and then we kicked ass, right? But early on in the process, it wasn’t clear that it was smart to be smart. In order to grow that big brain, you’re going to develop more slowly and divert scarce resources away from muscles and teeth. Slow, big-headed babies make tempting snacks for passing carnivores. And you need to evolve the hardware before you can make the software for it. The social and cultural benefits of intelligence must necessarily lag behind the physiological development of the brain itself. So why bother growing a big brain given its outrageous metabolic cost and dubious payback? In evolutionary terms, you’ll do worse before you do better.

There is a growing consensus that the reason humans got smart had nothing to do with better hunting. The so-called “social brain hypothesis” asserts that intelligence likely grew from humans navigating complex social relationships. That is, intelligence helped solve the social problems that intelligence was causing. The resulting evolutionary feedback loop powered the rise of human-level intelligence. The key step was using intelligence to solve the problem of social cohesion at a scale larger than, say, a chimpanzee troop. Once humans could live closely in large numbers without murdering one another, the planet was at our feet. Why did we get smart? We got smart because we had to compete with the other guy who was getting smart. Eventually, this worked out pretty well for us.

Curiously, a parallel to the rise of human intelligence is currently happening at the planetary level. Giant computer brains are expensive. Data centers, by some projections, will consume 7-12% of total US electrical output. Why grow such a big AI brain, given its outrageous energetic cost? We’re recapitulating at a planetary scale the same thing humans went through at the physiological level, and the social brain may well be the link between the two. We’re at the very beginning of a fast-evolving feedback loop. Why do we make our AIs smart? We make them smart because we have to compete with the other guy who is making his AIs smart. As a long-term optimist, I believe that eventually this will work out pretty well for us.

Image by Midjourney

AI’s primary value may be in solving the social problems caused by AI and networked computing. As strangely circular as that sounds, those problems are real and present, so those remedies are necessary. We’re on the track now. We can’t opt out. A small example of AI coming to the social rescue: cleaning up YouTube comments. They are now pleasant and occasionally charming. But they were once toxic slime pools of racism and hate language. One of the bigger concerns about AI is that it can be deployed at scale to convince people to do one’s political bidding. This is the social media nightmare: engagement is driven by negativity, which fuels tribalism and social polarization. This reinforces conspiracy theories and distrust. Scared, angry people are easy to corral. But as it turns out, AIs are good at talking people out of their conspiracy beliefs.

As for my optimism, I appeal to deep history. How did the human race make any progress at all? How did the Renaissance rise from the violent swamps of Machiavelli’s thug-filled Italy? How came it that slavery was abolished? Bad things are happening in the world today, but it was ever thus. Behind it all, there is planetary species-scale learning. And now it’s happening at AI scale and speed. Many bad things will continue to happen, but AI is the only force capable of managing the fast-moving problems that our networked world is creating.

To Hallucinate Is Human

Do you know the names of the three wise men in the Christmas story? They are Caspar, Melchior, and Balthazar.

Image by Midjourney

We know quite a lot about them. Caspar, the gold-giver, was an old man from Tarsus. Melchior, a middle-aged man from Arabia, brought the frankincense. And the myrrh came courtesy of the young man from Saba, Balthazar. This was all carefully recorded, as you know, in the Bible. Or rather, it might have been, if anyone had thought to write it down when they came a-visiting. But nobody did. The actual biblical text simply refers to an unspecified number of wise men and their gifts. Into this void, different traditions have supplied wildly varying backstories. Now might be a good time to mention that according to Armenian Catholics, the wise men are named Kagpha, Badadakharida, and Badadilma, while in the Syriac tradition, there might be a dozen of them. That’s a lot of myrrh.

Where do all these extra details come from? Nature abhors a vacuum, and humans abhor a vacant backstory. Somewhere along the way, somebody just made them up, and it stuck. In short, they were hallucinated.

Hallucination also happens to be a naughty habit of modern AI Large Language Models. Ask ChatGPT to describe what’s on a blank piece of paper, and it can fabricate the most wonderful details. Ask it for the biography of a semi-famous person, and some details will be accurate, while others will be reasonable-sounding fictions. We call this pathological, but we shouldn’t act so surprised. Hallucination is a hallmark of intelligence. Humans do this shit all the time.

Here’s another story I like, about the island of California. Some early maps of California depicted it as an island, and this error persisted in hundreds of later maps well into the 18th century. Take a look.

Image courtesy of Wikipedia

It’s not wholly inaccurate. It represents Baja California reasonably well. But note that there are two kinds of lines on this map. One is based on actual observation and may be said to represent reality. The other kind is a bullshitter’s ramble, a fabrication, a hallucination. The mapmaker crafted reasonable-looking wiggles for fictional rivers and coastlines (“I bet it probably looks like this…”). The problem is that the two kinds of lines look the same. It would have been nice if they included little footnotes like “Seriously, don’t sail here or you will hit these rocks and sink” or “I just made this river up LOL.”

The big question is: when is hallucination acceptable, and when is it a sin? It comes down to what job the text is being hired to do. The map’s job is to describe the landscape, to prevent shipwrecks. It does this by being accurate. Over time, the map will become more accurate. Hallucination in cartography is a sin.

But religion isn’t cartography. The stories can shift so long as the truth they point to is stable and meaningful. This shows up in religious traditions all the time, and it just goes to show that the purpose of religion is not factual accuracy. Sometimes, hallucination is the right tool for the job. I think of it this way. Culture is a sort of big brain. Culture thinks in myth. Culture creates myth the way humans create memory. We continuously construct it so that we might make the world plausible and legible. We hallucinate. Sometimes that causes shipwrecks, but sometimes it’s freaking brilliant.

People get upset about AI’s tendency to hallucinate, but the AI is really just breezing through one section of the Turing Test. There are things that we think of as pathologies that can never be purged from intelligence because they are a consequence of it.

Hallucination is, of course, just a start. Once AIs start to insist that the world is flat, we’ll know they have at last arrived in the land of the truly intelligent.