Tools built on large language models often use many words to say very little. We should be concerned that what truly means something risks being lost in a cloud of AI-generated filler.
When commerce moved online, search engine optimization (SEO) experts became necessary: specialists in making products visible in search results. Now artificial intelligence is transforming digital services from every direction. In response, we can expect “generative engine optimization” (GEO).
If AI is just talking without saying anything important, and science is fundamentally about communicating what we have discovered, we now face the same challenge as the advertising industry:
Should scientists also optimize their research results for AI?
It feels both sad and cynical to talk about research as a product that needs advertising and visibility. First, there is the ideal that the truth speaks for itself – clearly and distinctly – and does not need help to become visible. Second, the very process of arriving at research results has an intrinsic value.
But perhaps we really have no choice but to embrace GEO-based research production. AI and research already have a strained relationship for many reasons. Here are three aspects of this problematic relationship.
1. AI (mis)use in research assessment
Our research applications are perhaps the closest thing we have to a product to be sold. We scrutinize the evaluation committees and criteria for clues about what they understand and will weigh in the assessment. We want to communicate our idea in a language the reader understands. But what if we are not writing for a reader, but for an AI?
The Norwegian Research Council is already using AI in some evaluations of applications (FRIPRO). And it is not alone among research funders. What could possibly go wrong?
There is a little experiment you can try yourself: Open Google image search and search for “non red dress”. I bet you will get a lot of red dresses. So, would writing “this is not an application in the social sciences” be an equally surefire way to match us with social scientists?
The exact choice of words can have a disproportionately large impact on how our project is evaluated.
2. Misuse of research results by AI
Research articles and books are indiscriminately scraped by the developers of large language models. Everything you have written and posted online is fed into chatbot models to teach them to produce convincing scientific formulations. The material is exploited because the developers of AI chatbots do not pay for it.
Let’s focus on publicly available research, although paywalls are often not sufficient protection against LLM greed. Publicly available research is publicly funded and created by researchers. Does that automatically mean that commercial AI actors have the right to use it?
It’s about the scope of use – publicly available to individuals does not mean publicly available in unlimited volume. When I go to a museum and buy a ticket, the museum knows how many tickets it wants to sell and thus balances the number of visitors against its capacity. What if a hundred thousand people come in on one ticket? The infrastructure buckles and becomes unsustainable.
This creates two challenges for research publications:
Research publishing depends on an information infrastructure. An archive, library, or publisher must now use resources from existing revenues to protect their collections from bot attacks. This comes at the expense of their core services.
What’s the point of producing quality articles for good journals if those journals disappear and everything ends up in one big LLM anyway? We may already be seeing the beginnings of this. For years, researchers in the tech industry have favored open platforms like arXiv.org for publishing. We cite them because their employer has become the seal of quality – not peer review.
Research publications are written for experts in the same field. What happens when the “reader” devours thousands of articles from hundreds of disciplines? Articles that happen to fit the way large models extract patterns will become disproportionately visible in the model. Research that an AI chatbot surfaces gets cited. And because citation counts are tied to quality assessment, what does this do to citation counts, h-indexes, and all the rest?
3. AI abuse by researchers
The use of AI in the research process is spreading rapidly. So far, AI seems to help us get more done – but in quantity, not quality. AI abuse by researchers has been covered extensively in both Khrono and Nature.
One positive side: researchers who must publish in a language other than their native one get help expressing themselves more clearly.
The negative side? For example, program committees at the largest AI conferences report that people now take their article, have AI rewrite it five different ways, and submit all five versions, so that five different groups of peers get the chance to like and approve it.
Peer review is no longer enough – we now need peer police.
We find ourselves in a classic game theory scenario, more precisely a prisoner’s dilemma. If none of us use AI to produce research papers, we live in the world we know – we go free.
Peer review is a system with flaws, but they are known flaws, and we can continue to improve it. The payoff from using AI, however, is personal.
If I use AI and my colleagues don’t, I gain an advantage: I can publish more papers, and with the quantitative measures we use, that means more pay and prestige for me. Conversely, if I don’t use AI and everyone else does, then no matter how skilled I am as a researcher and writer, I will never produce enough volume to be seen. And no one is valued without being seen.
Thus we are stuck in an equilibrium where we use more and more AI to write papers and more and more AI to evaluate them.
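The dilemma above can be made concrete with a minimal sketch. The payoff numbers below are illustrative assumptions, not measurements; the point is only the structure: whatever a colleague does, using AI is each researcher’s individually rational move, even though mutual abstention would leave everyone better off.

```python
# Hypothetical payoffs for the op-ed's prisoner's dilemma.
# Strategies: "abstain" (don't use AI to mass-produce papers) vs. "use_ai".
# Tuples are (my payoff, colleague's payoff); the numbers are assumptions.
payoffs = {
    ("abstain", "abstain"): (3, 3),  # the world we know: everyone "goes free"
    ("abstain", "use_ai"):  (0, 5),  # the abstainer is outpublished
    ("use_ai",  "abstain"): (5, 0),  # the AI user gains pay and prestige
    ("use_ai",  "use_ai"):  (1, 1),  # arms race: more papers, less trust
}

def best_response(opponent_move):
    """Return my payoff-maximizing move, given what my colleague does."""
    return max(["abstain", "use_ai"],
               key=lambda my_move: payoffs[(my_move, opponent_move)][0])

# Whichever move my colleague makes, "use_ai" maximizes my own payoff ...
assert best_response("abstain") == "use_ai"
assert best_response("use_ai") == "use_ai"
# ... so (use_ai, use_ai) is the equilibrium, even though (abstain, abstain)
# would pay everyone more (3 > 1). Individual rationality traps us.
```

This is why the next section argues that the way out cannot be an individual choice: only coordinated action can move everyone from the bad equilibrium to the better outcome.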
So what should we do? Viewing this as an individual choice is the wrong approach.
In a game-theoretic scenario, the best solution for all of us may not be achievable when we act rationally individually, but only when we act collectively. Coordinated action requires trust and the power to exercise control.
We have neither now. We have constant doubt about the existing peer review process. We have declining support for quality-assured publishing and no protection for our publications.
But academia’s privilege and curse is that we have more autonomy than any other sector of society. That autonomy also offers the other way out of the GEO research crisis – namely, to understand the incentives that make AI abuse attractive, and to change them collectively.
Until then: GEO-optimize your research output.
The op-ed was originally published in Norwegian in Khrono.
Photo: Eivind Senneset, UiB.