A disturbing trend is developing: the number of retractions in scientific journals is increasing at an exponential rate. This article from the April 16th New York Times gives a good description of the problem and a couple of its causes. In essence, the pressure to churn out one attention-grabbing paper after another is so high that researchers are more willing than ever to submit bad or even false work.
The number of publications is now one of the major determinants of a researcher’s “worth” at a university. The paucity of tenure-track positions, the dwindling number and value of scientific grants, and the increasing dependence on grants to subsidize salaries all make a well-padded publication list necessary for young and ambitious scientists.
The “publish or perish” mentality isn’t just a reality for new and established professors; any of my fellow grad students (trying desperately to secure funding) can attest to that.
The problem with expecting a researcher to produce a steady stream of ground-breaking innovations is that science doesn’t work that way!
The other problem is monitoring and verification. Even though reproducibility is practically the definition of scientific knowledge, redoing someone else’s experiment builds neither your standing in the scientific community nor your savings account. A surprising number of published results can’t be reproduced, which points to a corollary of the retraction problem: negative results are rarely published. Undetected false positives inject falsehoods and confusion into our collective knowledge and damage science’s well-earned reputation for legitimacy.
So what can be done? University of Virginia psychologist Brian Nosek’s answer is the Reproducibility Project, an open collaboration to audit all of the original research published in 2008 in Psychological Science, Journal of Personality and Social Psychology, and Journal of Experimental Psychology: Learning, Memory and Cognition.
Are the authors of those studies worried (or insulted)? Maybe. Should that dissuade Nosek from trying to improve the scientific reliability and accuracy of his field? Definitely not! Nosek is careful, and correct, to note that a failure to reproduce does not necessarily mean the original study was faulty or that its results are untrue, but the project’s overall findings will improve psychology’s accumulation of knowledge.
The continuation of a publication-based reward system that values quantity over quality is eroding both the literature and public faith in science. It’s time for all scientific fields to embrace the honesty and necessity of this type of self-audit. After all, the first step toward healing is admitting you have a problem.