Evolving the Big Brain

Brains are expensive. Your brain consumes about 20% of your resting metabolism. It’s easy to look back at the history of our species and tell a self-congratulatory story about intelligence. We got smart, and then we kicked ass, right? But early in the process, it wasn’t clear that it was smart to be smart. To grow that big brain, you have to develop more slowly and divert scarce resources away from muscles and teeth. Slow, big-headed babies make tempting snacks for passing carnivores. And you need to evolve the hardware before you can make the software for it. The social and cultural benefits of intelligence must necessarily lag behind the physiological development of the brain itself. So why bother growing a big brain, given its outrageous metabolic cost and dubious payback? In evolutionary terms, you’ll do worse before you do better.

There is a growing consensus that the reason humans got smart had nothing to do with better hunting. The so-called “social brain hypothesis” asserts that intelligence likely grew from humans navigating complex social relationships. That is, intelligence helped solve the social problems that intelligence was causing. The resulting evolutionary feedback loop powered the rise of human-level intelligence. The key step was using intelligence to solve the problem of social cohesion at a level bigger than, say, the size of a chimpanzee troop. Once humans could live closely in large numbers without murdering one another, the planet was at our feet. Why did we get smart? We got smart because we had to compete with the other guy who was getting smart. Eventually, this worked out pretty well for us.

Curiously, a parallel to the rise of human intelligence is currently happening at the planetary level. Giant computer brains are expensive. Data centers, by some projections, will consume 7–12% of total US electrical output. Why grow such a big AI brain, given its outrageous energetic cost? We’re recapitulating at a planetary scale the same thing humans went through at the physiological level, and the social brain may well be the link between the two. We’re at the very beginning of a fast-evolving feedback loop. Why did we make our AIs smart? We make them smart because we have to compete with the other guy who is making his AIs smart. As a long-term optimist, I believe that eventually this will work out pretty well for us.

Image by Midjourney

AI’s primary value may be in solving the social problems caused by AI and networked computing. As strangely circular as that sounds, those problems are real and present, so those remedies are necessary. We’re on this track now. We can’t opt out. A small example of AI coming to the social rescue: cleaning up YouTube comments. They were once toxic slime pools of racism and hate speech; now they are pleasant and occasionally charming. One of the bigger concerns about AI is that it can be deployed at scale to convince people to do one’s political bidding. This is the social media nightmare: engagement is driven by negativity, which fuels tribalism and social polarization, which in turn reinforces conspiracy theories and distrust. Scared, angry people are easy to corral. But as it turns out, AIs are good at talking people out of their conspiracy beliefs.

As for my optimism, I appeal to deep history. How did the human race make any progress at all? How did the Renaissance rise from the violent swamps of Machiavelli’s thug-filled Italy? How came it that slavery was abolished? Bad things are happening in the world today, but it was ever thus. Behind it all, there is planetary species-scale learning. And now it’s happening at AI scale and speed. Many bad things will continue to happen, but AI is the only force capable of managing the fast-moving problems that our networked world is creating.