A global survey conducted by Wiley reveals striking momentum in AI adoption among researchers. Along with this increased use come calls for clear guidelines to help researchers use AI responsibly. Wiley has responded with comprehensive guidelines for authors, editors, and reviewers, and has introduced a “chat” series to discuss issues surrounding AI and research with experts.
In a global survey of 2,430 researchers, expanded and updated since 2024, Wiley assesses the role of AI in research, revealing where researchers are (and are not) “embracing artificial intelligence”. Among the key findings: the share of researchers using AI for any aspect of their work rose from 57% in 2024 to 84% in 2025, and the share using it for specific research and/or publication-related tasks rose from 45% to 62%.
With this increased use come challenges, not least identifying when to use (or not use) AI, when to disclose that use, and how best to understand the implications of these new tools and their impact on any given task.
In a new “chat” series that explores AI use in academic publishing and research, Ian Mulvany, Chief Technology Officer at the BMJ Group, explains that in this industry of knowledge creation, in the value chain of “reading, writing, researching, collaborating on documents, refining your documents, writing the code, writing the paper, having it published, all the way through to the dissemination […] there’s almost no point in the value chain where you might not imagine that these tools could have an impact”.
The most commonly used tools are general-purpose ones like ChatGPT rather than tools specialised for science and research, and lack of access to paid tools remains a barrier for many researchers.
This tendency to reach for open tools such as ChatGPT comes with its own challenges. According to Mulvany, “the very first misconception that I saw when I noticed people using these tools was treating them like search engines. […] If you’re working with a tool in a way where it’s doing a search on a literature database, then you can be pretty certain that the results that are coming back are going to be reliable and robust. But if you’re depending on […] the raw power of the LLM itself for what it may know, and you think it’s acting like a search engine, that’s going to bring you into a lot of trouble”.
In a second “chat” about the practical realities researchers face when integrating AI into their work, Avi Staiman, CEO of Academic Language Experts, confirms this: “One of the big things I heard when I was working with researchers is a lot of them aren’t sure how to confidently and competently use these tools. […] I would recommend actually considering thinking about using them as brainstorming tools”, ensuring, for example, that the tasks they are used for can be fact-checked and remain subject to human scrutiny and oversight.
But for researchers who are eager to use these new tools but lack the experience or knowledge of how they work, diving in can be daunting. According to the findings of Wiley’s survey, “close to three-quarters of researchers call for scholarly publishers to provide clear guidelines about which uses of AI are acceptable and to help researchers avoid potential pitfalls and errors”.
According to Staiman, “we’re still at the point where there’s this kind of hush, hush, like, I know you’re using it, you know I’m using it, but we’re just not going to talk about it. It’s a little bit of an elephant in the room”. Wiley has responded to researchers’ call for guidance, and has opened up the possibility for responsible transparency, with a guide for research authors, editors, and reviewers, providing support for:
- using AI responsibly,
- protecting your work,
- navigating policies confidently,
- choosing the right tools,
- and applying AI effectively.
In the foreword to the final report, ExplanAItions 2025: The Evolution of AI in Research, Matthew Kissner, President and CEO of Wiley, explains that the “report offers a roadmap grounded in real experiences [… and] breaks down the immediate opportunities, emerging possibilities, and longer-term aspirations. Most importantly, it reaffirms what we’ve always known: that technology tools, however powerful, exist to amplify human curiosity, creativity, and judgement. Using them wisely will help great science achieve greater impact. […] The future of research will be shaped by the choices we’re making together now”.