When Retraction Watch was set up in 2010 by Adam Marcus and Ivan Oransky, the idea of highlighting academic malfeasance was seen as unusual. Researchers didn’t peddle disinformation: they were the arbiters of truth, and the custodians of society’s collective knowledge.
In the twelve years since, the scales have fallen from our eyes as the scale of academic falsity has been uncovered – through the work of Marcus, Oransky, and a host of others. When they set up Retraction Watch, they believed there were around three retractions published by journals a month. It was actually forty-five. Now it’s closer to 300.
The scale is increasing alongside the severity of the potential fabrication. Just last month, Science published an investigation that alleges dozens of papers looking at Alzheimer’s could contain signs of fabricated information.
It doesn’t surprise Oransky, co-founder of Retraction Watch. “The ways to commit some kind of misconduct are almost becoming industrialized or mechanized,” he says. “They’re certainly capturing a lot of attention now, which is frankly a good thing. But there have been lots of problems in the scientific literature for decades, and you could argue centuries.”
When Oransky helped set up Retraction Watch, he never imagined he’d still be focused on the work today. “I didn’t think it would take over this much of my life,” he says.
There are different degrees of academic falsehood, though all have the same problematic impact: undermining the legitimacy of real research carried out by experts, and calling into question whether any research out of universities should be trusted.
At the small scale, there’s the geeing up of comparatively underwhelming results into something more significant; the massaging of data to present stronger correlations than really exist; the reanalysis of data until something crosses the threshold of statistical significance (a practice known as p-hacking); and the shunting of correlation into the scientifically coveted territory of causation.
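The danger of reanalyzing data until something turns up significant can be shown with a minimal simulation – a toy sketch, not anything from the article, with all numbers chosen for illustration. Run enough tests on pure noise and “significant” results appear by chance alone:

```python
# Toy illustration of p-hacking: every measurement below is pure noise,
# yet testing the same study twenty different ways routinely produces
# at least one result that clears the conventional p < 0.05 bar.
import math
import random

random.seed(42)

def z_test_p(sample, mu=0.0):
    """Two-sided p-value for the sample mean against mu, assuming sd = 1."""
    n = len(sample)
    z = (sum(sample) / n - mu) * math.sqrt(n)
    return math.erfc(abs(z) / math.sqrt(2))  # normal tail probability

trials = 2000          # simulated "studies"
tests_per_study = 20   # analyses tried per study
false_hits = 0
for _ in range(trials):
    # each analysis draws 30 noise observations with no real effect
    if any(z_test_p([random.gauss(0, 1) for _ in range(30)]) < 0.05
           for _ in range(tests_per_study)):
        false_hits += 1

print(f"studies reporting a 'significant' result: {false_hits / trials:.0%}")
```

With a 0.05 threshold and twenty looks at the data, roughly two-thirds of these no-effect studies still surface at least one “significant” finding (since 1 − 0.95²⁰ ≈ 0.64) – exactly the trap the data massaging described above exploits.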
That’s bad enough, because it misrepresents real results, upon which entire fields of science can be based. From there, things go downhill – and fast. There’s the fabrication of some elements of research, such as the alteration or duplication of images in studies into Alzheimer’s that put a previously unknown molecule toward the center of the field’s investigations. There’s the invention of entire trial arms out of thin air, and the creation of fake data, as in the case of Japanese anesthesiologist Yoshitaka Fujii, whose deceptive practices meant that half of the 200 or so papers he was credited as an author on were retracted in 2012. And there’s the entire outsourcing of articles to paper mills, which not only create fake data, experiments and findings, but then also produce reams of accompanying text that isn’t even written by the authors who claim ownership of the research.
It all adds up to a huge problem for academia, where the shadow of distrust looms large over almost everything published nowadays.
But why does such systematic, trust-shattering fakery happen? And how do we fix the system to ensure it doesn’t keep occurring?
“For those who engage in misconduct, the desire to get recognized and advance is stronger than the fear of getting caught,” says Nicholas Steneck, professor emeritus of the history of science at the University of Michigan, who has long dedicated his career to research integrity. “You see others cut corners and get rewarded rather than caught. Maybe you can cut corners without getting caught.”
Steneck believes that the way academia is structured makes it too easy to cut those corners without getting caught. “Research is largely policed by professional organizations and colleagues,” he explains, because “academic institutions are too complex to keep on top of everything.” There’s a celebration of achievement, with little understanding of what goes into making that achievement – and little care about the underlying work that ensures those achievements happen. “They track the accomplishments of their researchers but not the quality of the research,” Steneck says. “Doing the latter is complex and expensive.”
All academics and their institutions care about, says Steneck, is their h-index – the largest number h for which a researcher has h papers cited at least h times each. “Citations are the academic equivalent of followers,” Steneck says. “Nobody reads papers anymore,” admits Oransky. “When looking at someone’s credentials or whether someone should get a job, it’s all about where you publish and how many papers you publish.” Citations are recognition, Steneck explains, and are valued above all else – which means there’s an incentive to publish work that stands out and is likely to be cited as unusual or unique in its field.
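For reference, the h-index is simple to compute. Here’s a minimal sketch, with citation counts made up for illustration:

```python
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts:
    the largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this many papers are each cited at least this often
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # → 4: four papers have ≥ 4 citations each
print(h_index([100, 2]))          # → 2: one blockbuster paper can't lift it alone
```

Note that a single very highly cited paper barely moves the index – which is one reason the metric rewards sheer publication volume, feeding the incentives the researchers quoted here describe.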
This seems to be a broader problem of today. As anyone who’s lived in the social media era knows, big tech platforms incentivize people to post ever more extreme content. “I’d be shocked if we didn’t see such fraud and misconduct and sloppiness, given the way the incentives are,” says Oransky. “Academia, universities, and institutions have built this system. The outcome is entirely predictable. Maybe that’s cynical. But that’s how it is.”
The shift in the world of academia hasn’t gone unnoticed by others. Stuart Ritchie is the author of Science Fictions: Exposing Fraud, Bias, Negligence, and Hype in Science, as well as a lecturer at the Institute of Psychiatry, Psychology, and Neuroscience at King’s College London. “I think there are things in the academic system that make it worse,” he says of fraud. Key among them? “The hyper-competitive nature of the academic system that we have now that we didn’t have fifty years ago,” he says.
A 2022 study by the American Association of University Professors found that fifty-four percent of institutions had replaced tenure-track lines with contingent appointments in the previous five years. The disappearance of the tenure track spells disaster for overworked academics, in large part because it makes their roles far more precarious. A similar issue blights UK academia, where many staff are on rolling contracts without any long-term job security, meaning they’re constantly pressured to prove their worth to their institution in the form of stunning research results.
Charitably, some might say that researchers having to constantly show their value to the academy is no bad thing: too many tenure track professors spend more time sunning themselves on beaches than they do in laboratories or in offices. But the hypercompetitive nature of academia, where more applicants apply for jobs than there are roles, means that you’re not just expected to show your worth but to excel – constantly, while in many parts of the world also juggling administrative and pastoral duties, as well as a teaching workload. “It’s just unrealistic publishing expectations,” says Jennifer Byrne, professor of molecular oncology at the University of Sydney’s school of medical sciences.
It means that academics aren’t just pressured into doing research, but are also encouraged to build their brand and boast about their credentials, including touting their latest groundbreaking research – even if it isn’t all that groundbreaking. The levels of deception range from overegging mild results into major ones, to outright generation of fictional results. “The reasons that incentivize people to fabricate things are the same that incentivize people to slightly bend the rules in ways that don’t count as fabrication,” says Ritchie. It’s the pursuit of fame and notoriety in the field that ensures job security and higher roles in the academic pyramid. “Egotism and power and all those things play a role in every human endeavor,” says Oransky. “I don’t think academia is any different.”
But those incentives can either result in the fabrication of data, or at worst, the total outsourcing of a paper to someone other than its supposed author. “We study a particular kind of problem that is a systematic form of research fraud which involves selling research manuscripts,” says Byrne. “We think these manuscripts are probably fabricated en masse by these organizations called paper mills,” she says.
When it comes to medicine, these kinds of papers are particularly pernicious because their entire fabrication can lead people to believe there have been medical breakthroughs in fields that sorely need them. Byrne has devoted her life to the field of oncology – and in cancer, a big breakthrough in treatment can literally be the difference between life and death.
As to why such wholesale fraud happens, Byrne is clear: “Lots of people are under really extreme pressure to publish really bad research and policy,” she says. “It’s asking ridiculous things of people – and of course the only way they can manage is to cheat.”
Some wonder whether industrial incentives push academics who would otherwise be more impartial and subdued about their research to claim blockbuster results. The recent Alzheimer’s allegations were first uncovered in late 2021, when an attorney investigating one experimental drug for the treatment of the disease came to believe some research relating to the drug was “fraudulent.” But to claim that hard-nosed business and the need to turn a profit are tainting academic research is a fiction as bad as some of the research published today, says Oransky. “Researchers have always been funded by industry,” he says. “There’s this misperception of a golden era – clinical trials have always been funded by industry.”
Industry aside, Byrne’s own interest in faked research began in 2015, when she noticed papers being published on a gene she had always kept an eye on as part of her research into cancer predisposition and the genetic changes in childhood cancer. “It was a bit like a remote family member that you check in on from time to time,” she says. “I realized in 2015 some people were publishing on this gene, and I really couldn’t understand why, because it wasn’t very interesting to study.”
Byrne began digging into the papers and recognized some troubling signs. They were similar to each other in ways that didn’t make a lot of sense; they also included common errors. She dug deeper, then one day had a realization. “I ran into the problem of paper mills, and went: ‘Oh my god, I think that’s what I’m studying.’”
She believes the paper mill con works like this: faked papers are generated en masse, attributed either to paying academics or to fabricated names, and submitted to journals known to be willing to look the other way. “We think that there’s almost certainly some degree of collusion with journals,” she says. “That’s possibly happened in the past through guest editors.”
Byrne reckons that it’s impossible that journal editors aren’t in on the con in some way – especially when it comes to publishing papers minted out of mills like those she studies. “The rate limiting step for paper mills is the peer review process,” she says. “They can invent papers at almost any rate, but eventually they’ve got to find homes for them. They’re likely to invest a lot of resources in ways to either manipulate or sidestep peer review.”
One key question is whether the fabrication and fakery problem is getting worse, or we’re just getting better at spotting it and calling it out. “We don’t really have much clue,” says Byrne – an assessment Ritchie agrees with. “I think it’s probably getting worse, but it’s getting worse partly because every year more papers are getting published than were published in the previous year,” she says. “Almost anything’s going to appear to get worse.”
Oransky reckons that while it’s less likely that someone falsifying academic research will get caught now than ten or fifteen years ago, the likelihood is still minuscule – and that’s a problem. “I was quoted somewhere saying this once, and I still stand by it: The most likely outcome today – and I hope this changes – for someone who commits fraud is a long and successful career,” he says. “People don’t get caught.”
Byrne also believes that these zombie papers proliferate through citations that legitimize them faster than the debunking that discredits them. Because academics are encouraged to show they’ve studied related work and conduct background research before they share their own findings, these papers often end up gaining credibility through sheer mass of citations, even if they’re subject to a review and later retraction. “It’s kind of like literally flogging a dead horse,” she says. “We need much quicker ways of flagging things that are problematic before they get cited and incorporated into systematic reviews and stuff like that.”
The idea that science is a self-correcting field is a “fiction”, says Oransky. “That’s just nonsense. The people who are getting caught nowadays – and that’s not everyone – nothing is happening to them.” Often, papers aren’t being retracted or corrected. They’re living on in perpetuity, lost in the massive wave of research that sweeps across people’s desks.
How to fix that problem is another, bigger question – and unlike some of the falsified research that seeps into the academic library, there doesn’t seem to be an easy answer. The peer review process needs rewriting, says Oransky, because it’s not currently fit for purpose. There should be oversight across the whole chain of publication, from initial grant proposals to pre-publication peer review – as well as post-publication monitoring by the community. Ritchie believes stricter punishment is required for those who are caught, suggesting they be barred from applying for funding for a year, or be required to submit their research to oversight committees. “Punishing fraudsters who actually get caught might be a good idea,” he says, “though it gets into debates about whether deterrence works.”
Carrots as well as sticks are needed to cure the problem, reckons Byrne. “There’s prevention and cure,” she says. Policy changes that prevent predatory journals from publishing poor papers are needed. “But in practice, that will take time, because if something’s been going on for fifteen years or more, you can’t just switch it off like a light,” she says. On the cure side of things, she believes journals need to probe their own archives to look for falsified information.
These are all significant overhauls of the process that currently defines academic publishing. Smaller steps may be more achievable. One key starting point, suggests Oransky? “Have people actually read papers.”