I learned a valuable lesson in college. Don’t trust scientific papers. Not overmuch, anyway. As they say in the Royal Society: Nullius in verba, which I will loosely translate as “Don’t take my word for it.”
As an undergrad, I took a psychology class with Julian Jaynes. He was the author of the book The Origin of Consciousness in the Breakdown of the Bicameral Mind, which presents an idiosyncratic theory about how consciousness first arose in the human species. But since nobody really has any clue how consciousness got started or even what it is, there’s plenty of room for quirky ideas. Incredibly, despite being 40 years old, the book has never gone out of print.
The short version of his theory is that consciousness as we know it arose only a few thousand years ago. Before that, humans were “bicameral,” meaning one half of the human brain was giving orders to the other. As Jaynes says, “[For bicameral humans], volition came as a voice that was in the nature of a neurological command, in which the command and the action were not separated, in which to hear was to obey.” In other words, all humans used to behave like schizophrenics listening to hallucinated voices which compelled them to act.
For my class with Jaynes, I wrote a paper about schizophrenic hallucinations. This was the idea: if we could see that, during the auditory hallucinations of a schizophrenic, it actually did look like one side of the brain was “talking” and the other “listening,” that might provide some indirect evidence that Jaynes was onto something. But how could you observe such a thing? The answer, it seemed, was to use a new (at the time) brain-imaging technology called PET, or Positron Emission Tomography. PET makes beautiful color images of the brain at work. Like this.
Journals are suckers for beautiful color images of brains. It sure looks important, doesn’t it? Some people call this brain porn.
So anyway, I was able to dig up a paper that imaged the brains of schizophrenics as they were hearing voices. I found the reference first, but I didn’t have the full paper, so I called the author. Actually, I called his office. The author had already departed for another position. But his former officemate picked up the phone and kindly agreed to forward the paper to me. Then he said these words: “I wouldn’t trust the results of that paper if I were you.” Oh? Please go on. “I think his software is no good. The results you see could have more to do with bad programming than brain activity.” I suppose one might say the paper was demonstrating brain pathology after all, only in the investigator rather than the patient. But I took the point.
I was always grateful for the candor of that anonymous officemate, and I always remembered the lesson. These memories came back to me recently because a similar situation has come up with a brain-imaging technique called fMRI. Here’s a headline for you: Bug in fMRI software calls 15 years of research into question. If the concerns raised there prove true, as many as 40,000 papers could be invalidated. Exclamation point! And here’s some good background on the same topic from the New Yorker: Neuroscience Fiction.
The march of science is, as they say in the business, nonmonotonic. Beware of pretty pictures and obfuscated code.