
[ RESEARCH INTEGRITY ] January 18, 2024

The robot uprising is here: is scholarly publishing ready?

This article originally appeared in Learned Publishing and is authored by Sami Benchekroun, CEO and co-founder, Morressier.

Key points:
• Research integrity is impossible to achieve without digital innovation.
• Humans and machines have complementary strengths, which we should harness for collaboration.
• We have backed ourselves into a corner with the way we have built current publishing infrastructure.
• Today’s research integrity solutions are downstream, but innovation requires integration throughout publishing infrastructure.


INTRODUCTION

Technology moves so fast that by the time the scholarly publishing industry has come to a consensus on innovations to support research integrity, the world will have moved on to the next technological revolution. We can debate, discuss, pilot, and experiment for years to come, until each of those conversations is out of date. And while rigour is an essential cornerstone of the value and trustworthiness of this industry, I fear we are having the wrong conversation.

We find ourselves in a world that is at once enriched and threatened by artificial intelligence (AI) and other digital innovations. Instead of debating risks and opportunities, let’s answer the question: how do we succeed?


INHERITED PROBLEMS

The complexities, interdependencies, and natural scepticism of the scholarly publishing community make embracing digital innovation in the research integrity space a challenge.

The first problem is science’s ‘consensus crisis’. The value of scientific output is established between scientists. Publishers and societies broker this consensus and, with our peer review systems, try to strike a balance between enough rigidity to defend established scientific orthodoxy and enough flexibility to adapt to breakthrough research that reshapes our understanding. This consensus, naturally and appropriately, slows scientific progress. But today, consensus is under threat from polarization. Whether it is the global rise in populism or the unwillingness to apply consensus to public health crises like COVID-19, consensus that once led to trust now leads to suspicion. This is perhaps the greatest impact of a research integrity crisis, as the very nature of scientific validation is under threat.

Our second problem is that our infrastructure has been built in a way that does not prioritize adaptation, change, and diversity. The digital revolution has opened up a world of opportunities for scholarly publishing, but instead of taking advantage of the unique paths opened by technology, from format-free publishing, to the universal adoption of digital identifiers, to the globalization of published output, we have largely maintained the same processes and workflows. And the PDF still rules. The cracks in this infrastructure are showing, and research misconduct is what threatens to burst through the dam.

We are facing a polarized world with stagnating technology. The ability of researchers to use technology when producing research has far outpaced the publishing industry’s ability to evaluate research using technology. To put it simply: the risk of digital innovation is far less than the risk of being left behind.


UNTANGLING A GORDIAN KNOT

Why is technology not a silver bullet, solving all of the publishing industry’s problems? There are too many reasons to list, from a lack of trust to the complexities of existing infrastructure that make any transition challenging. Beyond that, many of our research integrity issues are cultural. That is not to say there are no technology solutions that would relieve the burden of these cultural issues, but it would be a mistake to ignore the tangles of the systems in which we currently operate.

Every year there are relatively fewer reviewers, as a fixed pool of people with relevant expertise is faced with evaluating more and more research. Our issues with scaling these workflows mean rushed reviewers, overworked reviewers, and ever greater pressure on authors to publish in the top-ranked journals. Is it any wonder that corners are cut in the publishing process, or that authors might resort to misconduct to help advance their careers?

Further, we have built complex legacy systems that are not designed for change, and that cannot evolve easily with the times. This failure severely limits the publishing industry’s ability to grow and change, or even to migrate to better offerings easily.

Conversations about research integrity often carry an undercurrent of fear: fear that we have gone too far down this path, and that if we were to look ‘under the hood’ at the state of the scientific record, we might not like what we find. The issue of trust is a delicate balance. To admit that there are flaws in the process might, in the public’s mind, be the equivalent of saying that all scientific discoveries or convictions of the last decades are invalid. In a polarized environment, it is better to project strength and confidence, even if that means avoiding the question altogether.

A Gordian Knot cannot be unravelled or untangled. It requires a firm hand and a strong blade to slice through the complexities and start fresh. There is an urgency to today’s research integrity challenges with extreme external pressure from AI. While we do not suggest literally slicing through today’s publishing infrastructure to unravel the knot, there are ways to get at the heart of what matters in publishing, and revisit why we exist as an industry.


IS IT CAUTION, OR IS IT STAGNATION?

To this general crisis in consensus, add a half pound of ChatGPT. The rapid advance of the latest version of OpenAI’s large language model (LLM) shocked leading figures across a variety of sectors to such an extent that they issued an open letter, shared by the Future of Life Institute, urging researchers to hit pause on development of the next generation of AI. The letter contains this arresting sentence:

Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? (Future of Life Institute, 2023).

Consider the impacts of this question on consensus in science. And yet, I’m concerned that the incentives of the AI arms race are too great for this call for caution to be heeded. The robots are surely coming. To do nothing is to render scholarly publishing obsolete. To retain the value of the scientific record, we need adaptations in technology and behaviour that are nothing less than fully transformative. The only control we have, in a world with universal access to LLMs like ChatGPT, is to use them to exponentially enhance the ability of science to create discoveries and breakthroughs that accelerate progress.


A CO-CREATED FUTURE

We often think of AI as a replacement for human tasks. Who does AI replace in the scholarly publishing process, and what are the impacts of those replacements on the integrity of research?

If AI replaces authorship, the act of drafting and formatting journal articles, then what happens? First of all, it seems more likely that this will be a drafting tool than an author-replacer. AI support for drafting might even level the playing field: could an LLM make it easier for authors whose first language is not English? Could editorial staff and peer reviewers spend less time distracted by bad writing and more time evaluating whether the science is sound?

If AI replaces reviewers, there are many open questions. What biases crept into the criteria of the data set the LLM uses to evaluate science? Is it hard for machines to evaluate true breakthroughs? On the other hand, is there a role for AI to play when it comes to checking for certain types of fraud or misconduct? We likely need the data analysis capabilities of AI and machine learning to evaluate data fabrication. There is a collaborative role for AI to play in the peer review process. As AI writing tools improve, AI identification tools must also improve, to ensure that whatever unique ethical guidelines have been set for a particular journal are met.
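To make this concrete, one simple statistical screen of the kind such tooling could include is a Benford’s-law check: the first significant digits of many naturally occurring data sets follow a known logarithmic distribution, and fabricated numbers often do not. The sketch below is illustrative only, not drawn from this article, and a flagged deviation is a prompt for human scrutiny, never proof of misconduct.

```python
import math
from collections import Counter

def first_digit(x: float) -> int:
    """Return the first significant digit of a non-zero number."""
    x = abs(x)
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

def benford_chi2(values) -> float:
    """Chi-squared statistic comparing observed first digits to Benford's law."""
    observed = Counter(first_digit(v) for v in values if v != 0)
    n = sum(observed.values())
    if n == 0:
        return 0.0
    chi2 = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)  # Benford's expected count for digit d
        chi2 += (observed.get(d, 0) - expected) ** 2 / expected
    return chi2

# With 8 degrees of freedom, a statistic above ~15.5 (p < 0.05) suggests the
# reported values deviate from Benford's law and may merit a closer human look.
```

The point of the sketch is the division of labour: the machine can screen thousands of data tables tirelessly, while a human reviewer judges the cases it flags.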

The most exciting impact of AI on science publishing is the potential to mine data. We have seen examples in which scientists feed an LLM the past 50 years of published research and ask it to predict the direction of the field for the next 5 years. The computing capabilities of an AI far outpace those of a human data scientist. Why would we not harness that potential, to double-check our conclusions and to identify new areas of research potential? This type of partnership between human and machine could vastly improve the integrity of published research, and potentially accelerate scientific progress.
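The underlying idea can be illustrated even without an LLM. Here is a hypothetical sketch, assuming a corpus of (year, abstract) pairs and using scikit-learn’s TF-IDF and k-means as a crude stand-in for the far richer LLM pipeline the examples above describe:

```python
from collections import Counter, defaultdict
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def topic_trends(records, n_topics: int = 10):
    """records: list of (year, abstract) pairs.
    Returns {topic_id: {year: share of that year's output}}."""
    years, texts = zip(*records)
    # Represent each abstract as a TF-IDF vector and cluster into rough topics.
    X = TfidfVectorizer(stop_words="english", max_features=5000).fit_transform(texts)
    labels = KMeans(n_clusters=n_topics, n_init=10, random_state=0).fit_predict(X)
    papers_per_year = Counter(years)
    # Accumulate each topic's fraction of the papers published in each year.
    trends = defaultdict(dict)
    for year, topic in zip(years, labels):
        trends[topic][year] = trends[topic].get(year, 0) + 1 / papers_per_year[year]
    return trends
```

Topics whose share rises steadily are candidates for where a field is heading; the machine surfaces the pattern, and the human interprets it.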

Return to the goal of publishing: sharing the most accurate and reliable information with the world, and supporting the expansion of our knowledge and understanding so we can progress. With AI as a tool, we are not risking research integrity, we are safeguarding it.


CONCLUSIONS

We need speed. We need integrity. Instead of allowing the role of AI to stay murky and ambiguous in the publishing process, let’s identify the power of human contributions and the power of machine contributions.

In some scholarly publishing circles, the word innovation incites eye rolls. That is because, too often, innovation does not go deep enough or far enough.

Fraud and misconduct are on the rise, and protecting research integrity is critical to scholarly publishing’s ability to provide answers to the world’s questions and solutions to the world’s problems. Technology has multiple roles to play, but downstream integrity checks for plagiarism or paper-mill detection are simply a band-aid. To truly innovate will require slicing through the complexities, the doubts, and the assumption that just because something is ‘how we have always done it’, it is how we have to do it going forward.

There is no research integrity without innovation. If we do not harness the power of emerging tools, we may fail in our stewardship of the scientific record.


CONFLICT OF INTEREST STATEMENT

The author is a non-executive director on the board of ALPSP.


REFERENCE

Future of Life Institute. (2023). Pause giant AI experiments: An open letter. Future of Life Institute. https://futureoflife.org/open-letter/pause-giant-ai-experiments/

Original article: https://onlinelibrary.wiley.com/doi/10.1002/leap.1595
