What Researchers Want From Peer Review: Researcher to Reader 2026 Recap
Earlier this year, I had the chance to join a panel at the Researcher to Reader Conference—held in London on 24–25 February 2026. The session focused on something that sits at the heart of what Prophy works on every day: making peer review actually work for researchers.
The panel brought together a sharp group of people—Jayne Marks, Ritu Dhand from Springer Nature, Stewart Manley from Maynooth University, and Christopher Leonard from Cactus Communications.
The question we were trying to answer:
What do authors and readers actually want from peer review, and why isn't the system delivering it?

Image: Screenshot via Cassyni.
The Reviewer Pool Problem Is Real
One of the clearest threads through the conversation was geographic imbalance. The global research community has expanded enormously over the past 25 years—particularly across Asia—but editorial boards and reviewer pools haven't kept pace. Researchers in China, India, and other regions often want to review, but they're under-invited. The system defaults to familiar names in familiar places.
This creates a feedback loop: the same small group of reviewers gets asked repeatedly, reviewer fatigue sets in, and turnaround times suffer. Meanwhile, capable researchers elsewhere don't get the chance to contribute—or build the credibility that comes from reviewing.
Early-Career Researchers Need a Seat at the Table
We also spent time on early-career researchers (ECRs). Peer review is a learned skill, and most early-career researchers have never been properly mentored through it. At the same time, reviewing doesn't count toward tenure or promotion at most institutions, so there's little structural incentive to invest time in it.
That's a problem that publishers, funders, and institutions need to solve together. Certificates and APC discounts are a start, but they're not enough.
AI as a Tool for Wider Discovery
There was genuine optimism in the room about what technology can do here—particularly around reviewer matching. Modern systems can use content similarity to surface qualified reviewers beyond the usual names. Large language models, when used thoughtfully, can return geographically and gender-diverse suggestions that a manual search might miss.
The key word is "thoughtfully." Confidentiality matters when manuscripts are involved, and the panel was clear that ethical AI use—including private modes that prevent text from entering training pipelines—needs to be part of any publisher's workflow.
This is territory Prophy works in directly. Our Referee Finder is built for exactly this: finding the right reviewers from a database of 95M+ researcher profiles, matched semantically to the manuscript at hand.
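To make the idea of content-similarity matching concrete, here is a minimal, self-contained sketch. It is a toy illustration only: the reviewer names and profile texts are invented, and it uses simple bag-of-words cosine similarity rather than the dense semantic embeddings a production system like Prophy's would use. The ranking logic, though, is the same shape: represent the manuscript and each candidate's publication record as vectors, then rank candidates by similarity.

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    # Bag-of-words term counts; real systems use dense semantic
    # embeddings, but the downstream ranking logic is identical.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_reviewers(manuscript: str, profiles: dict[str, str]) -> list[tuple[str, float]]:
    # Score every candidate profile against the manuscript and
    # return them sorted from best to worst match.
    m = vectorize(manuscript)
    scores = [(name, cosine(m, vectorize(text))) for name, text in profiles.items()]
    return sorted(scores, key=lambda item: item[1], reverse=True)

# Hypothetical candidate pool: each profile stands in for a
# researcher's aggregated publication record.
profiles = {
    "Reviewer A": "graph neural networks molecular property prediction",
    "Reviewer B": "medieval manuscript conservation archival methods",
    "Reviewer C": "deep learning drug discovery molecular graphs",
}
manuscript = "predicting molecular properties with graph neural networks"

for name, score in rank_reviewers(manuscript, profiles):
    print(f"{name}: {score:.2f}")
```

Because the score depends only on content overlap, a candidate in any country or career stage surfaces if their work matches, which is exactly how this kind of matching widens the pool beyond familiar names.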
Rethinking the Submission Model
Some of the most interesting discussion came around alternative models: cascading peer review, preprint-based review services, and marketplace-style matching where journals express interest in reviewed manuscripts rather than authors chasing journals serially. None of these are new ideas, but there's growing appetite to actually test them.
The session was a useful reminder that the peer review system isn't broken so much as it's under-resourced and under-designed for the scale it now operates at. The fixes exist. What's needed is coordination.
If you want to watch the full session, it's available on Cassyni.
Gareth Dyke is Partnership Director at Prophy. He has published more than 380 peer-reviewed articles and has served as Editor-in-Chief at Historical Biology for 18 years.