Wouldn’t you love to present a TED talk one day? I know I would. Besides the fame factor, merely having an idea “worth spreading”, and such a large audience to share it with, must be a thrill. And yet I have this niggly feeling in the back of my mind.
You see, TED isn’t about truth. It’s about entertainment. (You might say Entertainment is TED’s middle name…) And since proper science is gritty, hard work, full of cautious phrases – called hedging – it doesn’t look nice in slick 15-minute HD video clips. Nassim Taleb even commented that TED is a monstrosity that turns scientists and thinkers into low-level entertainers, like circus performers. It has become part of the hilarious Science News Cycle. Ironically, this is why I f**king love science and TED is so popular. We do love “science” and “the pleasure of finding things out.” But most of this naive triumphalism is just an excuse to feel clever and special – more clever and special than those people who don’t know what you know, or didn’t buy that $6 000 TED Conference ticket. The way TED is structured creates a divide between those “in the know” and those outside. It’s a power play, in other words. We are information consumers, and have equated information with knowledge. Knowledge is power, and the knowledge brokers have become the power brokers.
Ironically, scientific knowledge isn’t as firm as we would like to believe. This isn’t even about the high philosophy of science of Thomas Kuhn’s The Structure of Scientific Revolutions, but simply about the inaccuracy of the fancy peer-reviewed journal articles that make headlines. Over the past couple of months The New Yorker and The Economist have published numerous pieces on alarming research about research. This New Yorker article summarizes it well. It doesn’t mean that we have to be complete sceptics; I love my job in the sciences, and scientific research has enriched our lives in countless ways. But after having worked both in industrial research-and-development (R&D) and in the academic research environment, I can easily tell you who does better research: it is not those whose success is measured by the number of journal articles published, but those who depend on the real world cooperating with their calculations. As C.J. Langenhoven said,
The water finds the mistakes in the engineers’ numbers; God’s handiwork never miscalculates.
Doing good research is difficult for anyone, even (and especially) the most honest and critical researchers. Your mind has an amazing way of obfuscating possible errors – your subconscious filters out data and results that don’t fit before you’re even consciously aware of them. It’s called the Semmelweis reflex. But most academic researchers aren’t angels or superheroes valiantly fighting the spectre of ignorance. They are normal people competing for scarce positions, with limited time and money to complete research. Cutting corners here and there, not performing due diligence, dismissing data as “false negatives” and thus cherry-picking results are not even controversial. You usually only hear them mentioned in statistics textbooks or the media, with a mixture of horror and amusement depending on whether the study was about cancer or dating. For example, The Economist reports that the pharmaceutical giant Bayer could replicate the results of only about a quarter of 67 of the most important medical studies. Amgen (another biotech firm) could only corroborate 6 of 53 landmark cancer studies. Despite the oft-repeated maxim that “scientific knowledge is sure, because of the scientific method of repeatable experiments”, repeating someone else’s experiments would be professional suicide in academia. In an ideal ego-less world we would do so for the sake of Truth, but in the real world it simply doesn’t work like that.
A couple of years ago, physicist Alan Sokal set out to prove that you can get almost anything published in humanities journals. And so he deliberately wrote a nonsense article on postmodernism and quantum gravity and submitted it to the prestigious Social Text – a journal on postmodern cultural studies. It was published, to the great amusement of scientists everywhere. Unfortunately, science had its own Sokal moment recently. John Bohannon made up research about how chemicals in lichen slow cancer, and sent it to 304 peer-reviewed journals. Only 98 rejected it. The article was deliberately littered with errors in experimental design and analysis, drew incorrect conclusions, and was submitted from a fictitious university. The Scientific Method might be a wonderful ideal, but it seldom gets realised in practice.
So the next time you hear phrases like “research has shown”, “scientists have proven”, or “I saw it on TED”, be a real scientist and bear in mind that all may not be as it seems.