A few months ago I saw MIT finance professor Andrew Lo give a talk about the causes of the (what are we calling it now?) Great Recession. Professor Lo has a marvelously MIT-ish title: Director of the MIT Laboratory for Financial Engineering. I’m picturing Bunsen burners cooking murky brews of currency to test their liquidity. But I digress.
Lo is an entertaining speaker and did an excellent job explaining the mechanics of the collapse of the credit markets. But he then went on to discourage us from looking for scapegoats. He's become fascinated by the nature of human behavior and what's known as "normal accident theory." The idea, first formulated by Charles Perrow in his book Normal Accidents, is that when systems reach a certain level of tightly coupled complexity (and especially when these systems are profitable, politically valuable, and generally successful), it can be impossible to prevent multiple small failures from cascading into disasters. Airplane accidents, nuclear reactor meltdowns, credit market collapses, oil rig fires: they all fit the model of normal accidents. They're all protected by a vast web of safety measures that usually work very well.
Usually. In fact, the better your safety record, the easier it is to set up a really big disaster. Lo, explaining why normal accidents happen in the context of Wall Street, asked us to imagine telling the CEO of Lehman Brothers to shut down his most profitable department because the market is overheating. It’s quite simple: No one will stop a profitable locomotive, even when it’s clearly headed over a cliff. Nothing can stop the train. Nothing except crushing impact with the ground.
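If you like to see the arithmetic behind that intuition, here is a toy Monte Carlo sketch of the idea (my own made-up numbers, nothing to do with real markets or with Perrow's actual case studies): a hundred components, each with a one-in-a-hundred chance of failing on its own, plus a "coupling" knob controlling how likely a failure is to knock out its neighbors. Loosely coupled, you get a few isolated failures; tightly coupled, the same tiny failure rates regularly snowball into system-wide wrecks.

import random

# Toy model of a "normal accident" (invented numbers, purely illustrative):
# n components each fail on their own with small probability p_fail, and a
# failed component knocks out each of its next few neighbors with
# probability `coupling`. Tight coupling lets tiny, independent failures
# cascade into a big one.

def run_once(n=100, p_fail=0.01, coupling=0.0, neighbors=4):
    failed = {i for i in range(n) if random.random() < p_fail}
    frontier = list(failed)
    while frontier:
        i = frontier.pop()
        for j in range(i + 1, min(i + 1 + neighbors, n)):
            if j not in failed and random.random() < coupling:
                failed.add(j)
                frontier.append(j)
    return len(failed)

def big_cascade_rate(coupling, threshold=20, trials=5000):
    # Fraction of runs in which more than `threshold` of the 100
    # components end up failed.
    return sum(run_once(coupling=coupling) > threshold
               for _ in range(trials)) / trials

for c in (0.0, 0.3, 0.6, 0.9):
    print("coupling %.1f -> P(big cascade) ~ %.3f" % (c, big_cascade_rate(c)))

The individual failure rate never changes; only the coupling does. That's the unsettling part of the theory: you can make every component safer and still build yourself a bigger disaster.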
What’s the answer? As you might expect from an academic, Lo said: “More knowledge.” We need more PhDs, more smart people to help us understand these fast-moving financial innovations so they can be codified and regulated.
I can’t say I disagree with him; I like knowledge too. But I see an unsatisfying meta-loop, a lurking arms-race logic. More knowledge leads to more innovation, which leads to more poorly understood coupling. You always want to be faster than your problems. That’s the value of being smart. But as you race ahead, you plow up a bow wave of new problems that are just as fast as you.
Autocatalytic systems, which is to say self-interested systems that modify themselves, are the most fascinating and terrifying things in the universe. They usually work well, but you can never guarantee they won’t burst into flame tomorrow afternoon.
Don’t be alarmed. Things like this will happen from time to time.
The underlying problem with these systems is that discovery and innovation are fast but understanding is slow, and our culture rewards people who can sound like they understand the innovations. If you look at any of the recent disasters, you'll find a bevy of experts explaining how they had not expected the whatever to happen because of blah blah blah… If any one of them had had any real understanding of the systems they were running, the way the handful of hedge fund gurus who foresaw the subprime collapse years in advance did, they wouldn't have been surprised by the disaster. Here in the hypothesis-driven halls of science, we are discouraged from saying, "I don't know," even though we all know that statement is the lodestone that draws us toward the research that will ultimately help us understand.
http://en.wikipedia.org/wiki/Science_2.0#Data_driven_science