
Science is obsessed with measurements of success. Very few of them mean much at all.

Impact factors, citation counts, and university rankings are about as useful to science as another consonant is to this player. : Cedric Chapuis (Flickr) CC 2.0


New Zealander Nigel Richards has won the French Scrabble championship twice. More remarkable than the double win is that Richards doesn’t speak French. He applied his prodigious brain to the task of memorising words from the French dictionary, bypassing the need for understanding.

In 2022, The Lancet medical journal achieved a feat with parallels to Richards’ Scrabble wins: the highest ‘impact factor’ of any scientific journal in history. This number does not measure noble goals such as improvements in the length or quality of life. Instead, the impact factor is a numbers game tangential to the purpose of medical research. Like Richards’ Scrabble game, it is success without substance.

The impact factor is the average number of recent citations per paper: the number of times scientists cited a journal’s papers from the previous two years, divided by the number of papers the journal published in those years. It therefore measures attention, both good and bad. The average journal article in medicine gets 15 citations over 10 years. As an example of bad attention, The Lancet’s famously discredited study that linked the MMR vaccine to autism has more than 1,800 citations.
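The arithmetic behind the impact factor is trivially simple, which is part of its appeal. A minimal sketch (the function name and the figures below are illustrative, not real journal data):

```python
def impact_factor(citations_to_prev_two_years: int,
                  papers_in_prev_two_years: int) -> float:
    """Journal impact factor for a given year: citations received this
    year to the journal's papers from the previous two years, divided
    by the number of papers it published in those two years."""
    return citations_to_prev_two_years / papers_in_prev_two_years

# A hypothetical journal whose 400 recent papers drew 2,000 citations
# this year would score an impact factor of 5.0.
print(impact_factor(2000, 400))  # 5.0
```

Note that nothing in this ratio distinguishes a citation that praises a paper from one that debunks it.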

Citations are often inaccurate, with researchers referencing findings that were not in the paper, or misunderstanding the paper. Many citations are fleeting, with authors using long lists of citations as a way of demonstrating their in-depth knowledge of a field. 

The desire to win the impact factor game has predictably led to bad behaviour, with journals seeking to inflate their impact factor by fiddling their data. Some citations are even corrupt, with journals and reviewers manipulating citations to make themselves look better.

Real improvements are what journals should aspire to, but measuring how a journal has helped people’s lives is extremely difficult. Instead, we are lumbered with the impact factor, which is simple to measure but measures nothing of value.

To win the impact factor game, journals need to publish papers that will garner a lot of attention. This creates an incentive for journals to publish headline-grabbing papers about new breakthroughs, while important papers debunking existing treatments can find it harder to get in a ‘top’ journal. 

This also creates an incentive for researchers to work on new and exciting breakthroughs, while much-needed research on relatively mundane parts of health is neglected. For example, governments could save an enormous amount of money if they stopped providing treatments that have no scientific evidence.

One headline-grabbing paper published by The Lancet has had 5,878 citations – the Scrabble equivalent of playing “quiz” on a triple letter score. This paper grabbed the headlines because it was an early study of the risks of dying from COVID-19 from a hospital in Wuhan, China. But the paper contains a serious flaw: the calculations excluded patients who were still alive, creating enormous potential for reverse-survivor bias to skew the results.

For scientists, this flaw is easy to spot, so it’s unclear how it slipped through peer review, where other experts check over a paper before publication. But The Lancet has missed other flaws too: another headline-grabbing COVID-19 paper needed readers to point out impossibilities in the data. Possibly the urgency of the pandemic meant that attention trumped peer review.

It’s not just journals that are distracted by flimsy numbers. Scientists are also vulnerable to citation competitions rather than competing to do the best science. Scientists can boost their citation scores by citing their own work. Bigger boosts come from working together, and citation cartels have been discovered where groups of scientists make a pact to cite each other’s papers.

Universities are also prone to meaningless metrics, as they compete in international rankings tables such as QS, Times Higher Education and ARWU. The idea that a whole university could be summed up by a single number is so absurd that even primary school children could see through it. Nevertheless universities, the pinnacle of education, are enslaved to these numbers. And what are the tables based on? Citations feature heavily, alongside other numbers that are easy to measure but say nothing about the quality of education.

But recently two heavyweight US law schools have withdrawn from the biggest league table used in the US. This is a bold decision as league tables can influence student numbers and higher education policy. The schools took a stand because the tables are “using a misguided formula that discourages law schools from doing what is best for legal education.”

Scientists, journals and universities have become slaves to misguided formulas based on meaningless data. Science is one of humanity’s proudest achievements, but scientists are human and have become distracted by prestige games that do nothing to advance science.

Now more than ever, science needs to be performing at its peak, as humanity deals with its biggest ever challenges. It’s time to stop the counting games and work on what really counts.

Adrian Barnett is a professor of statistics who has worked for over 27 years in health and medical research. He was the president of the Statistical Society of Australia from 2018 to 2020. His current research at QUT concerns improving research practice to reduce waste. He declares no conflict of interest.

Originally published under Creative Commons by 360info™.
