
Fifteen Eighty Four

Academic perspectives from Cambridge University Press

11 Apr 2023

Publication metrics don’t have to drive academia

Emanuel Kulczycki

Impact of research evaluation

Today, researchers are publishing more than ever before. To secure a position in a top department or to achieve tenure, new assistant professors have already published twice as much as their peers did in the early 1990s. Nobel laureate Peter Higgs believes he wouldn’t be deemed “productive” enough for today’s academia. Yet merely publishing more papers doesn’t cut it: the number of citations those papers receive is the true currency of science.

The Journal Impact Factor (JIF) reigns supreme. It is determined by averaging the citations received by papers in a given journal over the previous two years. In the US and Canada, 40% of research-intensive institutions mention the JIF in their assessment regulations; in Europe and China, it is likewise employed as one of the key metrics. Eugene Garfield, the creator of the JIF, likened it to nuclear energy: beneficial when used properly but often misused. Frankly, what we witness nowadays is rampant abuse of metrics in academia.
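For readers who want that average spelled out, this is the standard two-year formula behind the JIF (the worked numbers below are hypothetical, chosen only for illustration):

\[
\mathrm{JIF}_{Y} = \frac{\text{citations received in year } Y \text{ by items published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2}
\]

For instance, a journal that published 250 citable items across 2021 and 2022, and whose items received 1,000 citations in 2023, would have a 2023 JIF of 1,000 / 250 = 4.0.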

Two blind spots

The Evaluation Game: How Publication Metrics Shape Scholarly Communication aims to make scholars and policymakers aware of two key blind spots in the discourse on publication metrics. The first is the absence of the Soviet Union and post-socialist countries from the histories of measuring science and evaluating research. The second is the lack of a geopolitical perspective in thinking about the contexts in which countries face the challenges of publish-or-perish culture.

Photo by Dan Cristian Pădureț on Unsplash.

Counting scholarly publications has been practiced for two centuries. In Russia, from the 1830s, professors were required to publish each year, and their salaries depended on it. The Soviet Union and various socialist countries developed national research evaluation systems before the Western world did. The effects of those practices are still felt today.

Designing better metrics is not enough

I wrote The Evaluation Game to offer a fresh take on the origins and effects of metrics in academia, as well as to suggest ways to improve research evaluation. The book reveals that simply designing better and more comprehensive metrics for research evaluation purposes won’t be enough to halt questionable research practices like the establishment of predatory journals, guest authorship, or superficial internationalization, often seen as “gaming” the research evaluation systems. It’s not the metrics themselves, but the underlying focus on economics that’s driving the transformation of scholarly communication and academia itself.

With this book, I aim to demonstrate that a deeper understanding of the reasons behind the transformation of research practices can guide us toward better solutions for governing academia and defining the values that should shape its management. This is a crucial task today, as pressures on academia continue to mount and more countries are either implementing or considering the introduction of national evaluation regimes.

My hope is that this book can help us gain a better understanding of the role that measurement and research evaluation play in science. It’s impossible to conduct publicly funded research without some form of evaluation (either ex-post or ex-ante). Given this reality, it’s essential for us to ask how we can influence science policy and develop more responsible technologies of power.

Title: The Evaluation Game: How Publication Metrics Shape Scholarly Communication

Author: Emanuel Kulczycki

ISBN: 9781009351195

About The Author

Emanuel Kulczycki

Emanuel Kulczycki is Associate Professor at Adam Mickiewicz University, Poznań, and Head of the Scholarly Communication Research Group. From 2018 to 2020, he was the chair of the ...

