Half of all researchers now use AI somewhere in their workflow. Barely 5% disclose it.
That figure came up during the Prophy Predicts: Future of Academic Assessment panel in March 2026. The gap between adoption and transparency is not a rounding error—it is a structural problem sitting in the middle of every editorial office trying to do its job.
AI disclosure in research has become one of the defining challenges for scientific publishers over the next two to five years. Not because the rules are unclear, but because even where they are clear, compliance is not happening.
Journal editors are describing a "deluge" of incoming submissions. AI has lowered the barrier to producing research manuscripts—and the volume of submissions is rising faster than editorial teams can process them.
Most editorial offices are working with already-strained staff. The additional burden of screening for AI-generated or AI-assisted content—on top of standard peer review logistics—is compounding the pressure. As noted during the panel discussion, some journals will absorb this load; others will buckle under it.
The submission pile tends to fall into three rough categories, and it is the ambiguous middle that causes the most trouble.
These are the manuscripts that consume editorial time and judgment. They could be legitimate research from a non-native English speaker using imprecise phrasing, or they could be AI-generated content dressed up convincingly. Editors cannot always tell, and that uncertainty carries a heavy cost.
The disclosure gap is not mainly a problem of bad intent. Research identifies a constellation of structural reasons why researchers avoid disclosing AI use—even when they know they should.
Publishers have developed different, and sometimes contradictory, frameworks for what requires disclosure. When expectations differ across journals, many researchers default to silence rather than guessing wrong.
Studies show that readers often rate AI-assisted manuscripts lower when the assistance is disclosed than when identical content is presented without disclosure. Transparency may genuinely disadvantage work during review.
Most publishers allow AI for editing and summarising but require disclosure for "meaningful" shaping of content. As observed during the panel, this sounds simple, but considerable complication lurks behind it: the line between routine polishing and meaningful shaping is rarely obvious.
Detection tools remain unreliable, with false-positive rates high enough to flag legitimate work, particularly from multilingual researchers. In an environment that relies on voluntary disclosure, under-disclosure becomes rational behavior.
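To make the false-positive arithmetic concrete, here is a minimal back-of-envelope sketch in Python. The submission volume, the share of genuinely AI-generated manuscripts, and the detector's error rates are all illustrative assumptions, not figures from the panel or from any specific tool.

```python
# Illustrative base-rate calculation: every number below is an assumption chosen
# to show the shape of the problem, not measured detector performance.

submissions = 1000           # assumed monthly submission volume
true_ai_share = 0.10         # assumed share of undisclosed AI-generated manuscripts
sensitivity = 0.90           # assumed detector true-positive rate
false_positive_rate = 0.05   # assumed detector false-positive rate

ai_manuscripts = submissions * true_ai_share
legitimate_manuscripts = submissions - ai_manuscripts

true_flags = ai_manuscripts * sensitivity
false_flags = legitimate_manuscripts * false_positive_rate
total_flags = true_flags + false_flags

print(f"Manuscripts flagged for review: {total_flags:.0f}")
print(f"Legitimate manuscripts among the flags: {false_flags:.0f} "
      f"({false_flags / total_flags:.0%})")
```

Under these assumptions, roughly a third of flagged manuscripts are legitimate work, which is why detection alone cannot carry a disclosure policy.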
One aspect of the disclosure conversation that deserves more attention is the equity dimension. It was raised during the panel that a manuscript that reads "slightly off" might reflect AI assistance, or it might reflect a researcher writing in their second or third language who did not choose the most idiomatic phrasing.
The current landscape disproportionately burdens researchers from under-resourced institutions. Strict or ambiguous disclosure expectations risk penalizing precisely the researchers who have the most legitimate uses for AI as a writing aid.
Key takeaway: The goal is transparency about meaningful AI contribution to the science, not surveillance of language-assistance tools.
One principle offered during the panel applies directly here: set targets you want people to game. "We want people to be transparent, so set that as a target, and they'll game it, and it's great."
If the current system creates incentives for non-disclosure, the design of the system is the problem. To increase disclosure, publishers must make it the path of least resistance.
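As a thought experiment on what "path of least resistance" could look like inside a submission system, here is a minimal sketch of a structured disclosure record. The field names and the schema itself are hypothetical illustrations, not any publisher's actual policy or Prophy's product.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch: capture AI use as a few structured fields at submission
# time, so declaring routine language assistance is a checkbox, not a confession.
@dataclass
class AIDisclosure:
    used_ai: bool = False
    purposes: List[str] = field(default_factory=list)   # e.g. ["language editing"]
    tools: List[str] = field(default_factory=list)       # e.g. ["general-purpose LLM"]
    shaped_scientific_content: bool = False               # the "meaningful contribution" threshold
    statement: str = ""                                   # optional free-text elaboration

# A routine, low-stakes disclosure covering language editing only:
example = AIDisclosure(
    used_ai=True,
    purposes=["language editing"],
    tools=["general-purpose LLM"],
    shaped_scientific_content=False,
)
print(example)
```

The design point is that the common, low-risk case takes seconds to declare, while a flag like shaped_scientific_content separates it cleanly from the cases editors actually need to examine.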
The journals that handle the AI transition well will be the ones that survive it in good standing. A central prediction from the panel is that the research integrity confusion created by the "AI flood" will push serious researchers toward publishers with established brand credibility.
"Those brands have been built over hundreds of years," it was noted during the discussion. "There's something about the power of that gravitas."
For publishers, this is both a warning and an opportunity. Journals that manage AI disclosure with clear policy and fair enforcement will be better positioned to retain credibility.
Prophy's AI in Peer Review implementation guide explores how publishers are integrating AI into the editorial workflow without compromising quality, while our analysis of research integrity and peer review covers the broader landscape of trust signals.
AI disclosure in research is not a problem that will resolve itself. As AI becomes woven into every stage of research, individual acts of use become less salient, and disclosure becomes even less likely.
The structural solution is not better detection; it is better design: consistent disclosure expectations across publishers, a clear threshold for what counts as a meaningful contribution, no penalty for transparency during review, and disclosure mechanics simple enough that honesty is the easy option.
The 95% who are not disclosing are not the enemy of research integrity. The systems that make disclosure feel risky or unclear are.
Want to see how Prophy supports publishers in managing editorial quality at scale? Request a demo at prophy.ai.