Researchers may have discovered a new and potentially controversial method to influence the review process of their academic papers by inserting hidden prompts designed to sway AI tools into providing positive feedback. A report by Nikkei Asia revealed that 17 preprint research papers hosted on the open-access platform arXiv contained these hidden messages. The authors behind the papers came from 14 institutions across eight countries, including well-known names such as Columbia University, the University of Washington, Waseda University in Japan, and KAIST in South Korea.
Most of the flagged papers were from the field of computer science. The prompts, often just a sentence or two long, were deliberately concealed using tactics like white text or tiny font sizes that would be invisible to the human eye but readable by AI. These prompts urged AI tools to provide only positive reviews, praising the papers for their novelty, quality, and impact. This tactic appears to capitalise on the growing use of AI in peer review and summarisation tasks.
The strategy has sparked ethical concerns, especially as AI becomes more integrated into academic workflows. Reviewers increasingly rely on AI to help process the large volume of submissions, raising the risk that such hidden messages could influence automated assessments without human reviewers ever knowing. The presence of these prompts challenges the trustworthiness of peer review and highlights a growing tension between human and machine involvement in research evaluation.
Interestingly, at least one researcher has defended the tactic. A professor at Waseda University told Nikkei Asia that the hidden prompt was meant to act as a countermeasure against what they called “lazy reviewers” who depend too heavily on AI. Their argument suggests that if AI is being used to review papers despite many conferences prohibiting it, then authors should have the right to guide how those tools interpret their work.
This development exposes a grey area in academic publishing where AI’s growing role is outpacing the rules that govern it. As researchers, journals, and institutions scramble to catch up, it raises a deeper question: in a world where machines are part of the review process, how do we keep academic integrity intact?