By Stuart Ritchie Bodley Head, 368pp, £18.99
When I gave talks on bioethical issues for the charity Life, for a while a staple of our discussions was the work of the South Korean scientist Hwang Woo-Suk, who achieved international fame in 2004 for his apparent breakthroughs in embryonic stem cell technology. However, our presentations soon had to be revised when it emerged that much of Hwang’s data had been faked and that many of his practices had been extremely dubious. Hwang lost his job and was disgraced less than three years after being lauded as a pioneer.
This case is one of many which Stuart Ritchie, a neuropsychology researcher at King’s College London, highlights in his eye-opening book considering the various systemic problems in modern science. Hwang of course was an outright fraudster, but – while it is disturbingly common – that kind of deliberate and conscious deception is only one issue among many, and perhaps not even the most serious.
Alongside fraud, as the book’s subtitle Exposing Fraud, Bias, Negligence and Hype in Science suggests, Ritchie’s target is research which is consciously or unconsciously manipulated to get the “right” result; research which is badly conducted; and research which is greatly oversold.
You may have heard of the “replication crisis”, or its close cousin the “reproducibility crisis”. These terms refer to the realisation among academics across a variety of disciplines that many influential and widely repeated findings cannot be confirmed when later researchers repeat the experiment using the same methodology, or in some cases even when they re-analyse the same data.
By Ritchie’s account, there are problems at every stage of the scientific process, from the way in which experiments are conducted, via the tribulations of getting research published in journals, all the way through to the behaviour of the journals themselves. For example, it can be very difficult to get journals to retract papers, even when their faults have been widely publicised and acknowledged.
Ritchie quotes one estimate that fewer than 10 per cent of papers submitted to scientific journals are actually published, and argues that the processes by which journals determine whether to publish often lack transparency or rigour. This means that many important discoveries may never be published at all, while eye-catching but dubious findings make it into print.
He discusses at length the “file-drawer problem”, whereby research that does not generate the expected or desired result is quietly forgotten by those who carried it out. This is not necessarily done for sinister reasons but is highly problematic, not least because it seems especially likely to bury papers that find no effect from the variable being investigated. This may lead us, for example, to overestimate or misunderstand the effectiveness of a medical treatment.
One potential issue for the layman in this area is that many of the problems relate to data analysis, and the uses and abuses thereof. It’s not easy to write engagingly about such topics, but Ritchie proves to be a helpful guide to the complexities of modern data-driven science, a patient Virgil for the statistical novice. In particular he provides a lucid explanation of something I’ve often heard mentioned but never quite understood: the “p-value”, which is, roughly, the probability of obtaining a result at least as extreme as the one observed if in fact there were no real effect to find.
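For readers who like to see the idea in action, one intuitive way to estimate a p-value is a permutation test: shuffle the group labels many times and count how often chance alone produces a difference as large as the one actually observed. The two groups of measurements below are invented purely for illustration, not taken from the book.

```python
import random

# Invented data: imagine five "treatment" and five "control" measurements.
treatment = [5.1, 5.6, 5.8, 6.0, 6.3]
control = [4.2, 4.5, 4.8, 5.0, 5.2]

# The observed difference in group means.
observed = sum(treatment) / len(treatment) - sum(control) / len(control)

# Repeatedly shuffle all ten values into two arbitrary groups of five,
# counting how often a difference at least as extreme arises by chance.
rng = random.Random(0)  # fixed seed so the estimate is repeatable
pooled = treatment + control
trials = 10_000
n_extreme = 0
for _ in range(trials):
    rng.shuffle(pooled)
    diff = sum(pooled[:5]) / 5 - sum(pooled[5:]) / 5
    if abs(diff) >= abs(observed):  # two-sided test
        n_extreme += 1

# The estimated p-value: the chance of so large a gap if there is no real effect.
p_value = (n_extreme + 1) / (trials + 1)
print(p_value)
```

With these made-up numbers the two groups barely overlap, so very few random shuffles match the observed gap and the estimated p-value comes out well below the conventional 0.05 threshold.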
The book is not merely a litany of complaints. Ritchie devotes long sections to suggesting how scientists can improve things, and has clearly thought deeply about the solutions. Indeed, his obvious reverence for the scientific method, and his zeal for reaching the truth regardless of ideological or political considerations, shine through Science Fictions, making it an uplifting as well as a compelling read.