How Emerging Research Fields Challenge Traditional Peer Review
Five years ago, large language models (LLMs) didn't exist as a research field. Today, the volume of papers is so vast that even specialists cannot read all the titles, let alone the content.
This isn't gradual growth. It's what we call revolution-like expansion—when a field transforms from near-nothingness into a vast territory almost overnight. And it exposes something fundamental about how peer review was built: for a world that no longer exists.

When Decades of Relationships Become Obsolete
A major scientific society publisher recently approached us with a curious problem. They were launching a new journal series—exciting interdisciplinary territory at the intersection of established fields. They had tens of thousands of reviewers in their database, accumulated over decades of careful relationship building.
None of them were useful.
Transferring reviewer knowledge from the established journals to the new series didn't work: the new titles had no history with these people, and their expertise didn't match the topics. The publisher needed us to build an initial reviewer pool from scratch.
What suddenly became obsolete:
- Extensive editorial networks
- Maintained reviewer spreadsheets
- Decades of institutional knowledge
- Subject-specific contact databases
This pattern repeats. Traditional networks built on human memory and manual curation fail when fields expand exponentially or disciplines merge unexpectedly.
The Cognitive Boundary We All Hit
As a researcher, you can follow your narrow subfield. You know the major papers, recognize the names, catch the developments. As an editor, your view must be much broader—scanning entire fields, not just research niches.
For interdisciplinary journals, you multiply these angles:
- Methods from economics applied to infectious disease
- Statistical approaches from climate science imported into social networks
- Computational physics solving biological problems
- Machine learning reshaping every discipline simultaneously
The combinations expand faster than human cognition can track. It's objectively impossible. Humans have cognitive limits. Computers don't care whether they process 100 records or 100,000.
The question becomes: what should machines do, and what requires human judgment?
How Science Actually Invents Its Own Language
Traditional search relies on keywords. You look for people who published on "tuberculosis" or "machine learning" or "quantum computing."
This breaks down in emerging fields because researchers are still inventing the language. The terminology hasn't stabilized. The boundaries remain fluid.
We built something different: semantic analysis that discovers how scientists actually describe their work. Our system creates a dynamic ontology—analyzing over 180 million articles to build a living map of research concepts.
When a term appears frequently—say, 50 articles using it consistently—we recognize it as an emerging concept. We don't invent how people describe new fields. We discover how they do it, in real time, as the language emerges.
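To make the idea concrete, here is a minimal sketch of threshold-based emerging-concept detection, assuming articles arrive with candidate terms already extracted; the field names and the 50-article threshold are illustrative, not our production pipeline:

```python
# Minimal sketch of emerging-concept detection over a stream of article records.
# All field names and the threshold are illustrative assumptions.
from collections import Counter

EMERGENCE_THRESHOLD = 50  # e.g. "50 articles using a term consistently"

def detect_emerging_concepts(articles, known_concepts):
    """Flag terms that cross the usage threshold but are absent from the ontology.

    articles: iterable of {"id": str, "date": str, "terms": list[str]}
    known_concepts: set of terms already in the ontology
    """
    term_counts = Counter()
    first_seen = {}
    for article in articles:
        for term in article["terms"]:          # terms pre-extracted from title/abstract
            term_counts[term] += 1
            first_seen.setdefault(term, article["date"])

    return [
        {"term": term, "articles": count, "first_seen": first_seen[term]}
        for term, count in term_counts.items()
        if count >= EMERGENCE_THRESHOLD and term not in known_concepts
    ]
```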
Science doesn't wait for official taxonomies. Researchers pursue interesting questions, develop vocabulary organically, create territories that resist categorization.
When Methods Matter More Than Subjects
One case: a university press publishing in infectious disease encountered something interesting. They needed reviewers who understood specific methodological approaches—regardless of which disease the method was applied to.
"The virus is different," they told us, "but that's not important." The methodology matters more than which pathogen the researcher studied.
Traditional subject classification misses this:
- A statistical regression expert on tuberculosis can evaluate methods applied to COVID
- An imaging specialist from cardiology can assess similar techniques in neurology
- A computational modeler from climate science can review parallel approaches in epidemiology
When we suggested reviewers based on methodological similarity rather than disease focus, experts immediately recognized the relevance. Connections invisible to non-specialists but obvious to those who understood the landscape.
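A minimal sketch of this kind of matching, assuming each reviewer's past methods sections have already been condensed into a vector by some upstream model; the data shapes and the cosine-similarity ranking are illustrative assumptions, not our actual algorithm:

```python
# Illustrative sketch: rank candidate reviewers by how close their methodological
# profile is to a manuscript, ignoring disease or subject labels entirely.
# Embeddings are assumed to come from some upstream model (not shown).
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_by_method(manuscript_vec, reviewers):
    """reviewers: list of {"name": str, "method_vec": np.ndarray} built from
    the methods sections of their past papers."""
    scored = [(r["name"], cosine(manuscript_vec, r["method_vec"])) for r in reviewers]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```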
A Database That Moves With Science
Large language models like ChatGPT work from static training data, typically months behind current research. For rapidly evolving fields, that lag makes them unsuitable for tracking expertise.
Our architecture operates differently. We process up to a million updates daily—new papers published, citation counts updated, preprints moving to formal publication. Analysis happens continuously in the background. Papers appear in our database within 24 hours of release.
This isn't about speed for its own sake. It's about maintaining an accurate picture of who is actively working on what, right now. In fields experiencing exponential growth, being six months out of date means chasing yesterday's questions.
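As a schematic illustration of what continuous ingestion involves, here is a toy, in-memory version of such an update loop; the update format and field names are assumptions made for illustration, not our actual architecture:

```python
# Toy in-memory version of a continuous-ingestion loop: new papers, citation
# updates, and preprint-to-publication links applied as they arrive.
# The update format and field names are assumptions made for illustration.
def apply_update(store, update):
    kind = update["kind"]
    if kind == "new_paper":
        store[update["paper"]["id"]] = update["paper"]                 # visible same day
    elif kind == "citation_count":
        store.setdefault(update["paper_id"], {})["citations"] = update["count"]
    elif kind == "preprint_published":
        store.setdefault(update["preprint_id"], {})["published_doi"] = update["doi"]

def run_ingestion(store, update_stream):
    for update in update_stream:                                       # ~1M updates/day
        apply_update(store, update)
```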
The European Research Council uses our system for Synergy Grants—proposals where three or four groups from different disciplines tackle problems too large for individual researchers. These require diverse reviewer panels covering multiple specialties and the intersections between them. The scale and complexity make manual management impractical.
Why All Classification Systems Eventually Fail
Consider a classic question: should mathematics be classified among the natural sciences?
Mathematics provides the language all natural sciences speak—arguing for inclusion. Yet mathematics doesn't describe the physical world. It describes highly abstract structures that once connected to observable reality but have since become something else entirely.
Any taxonomy eventually fails. It draws artificial boundaries somewhere, and the most interesting research moves across those boundaries.
How knowledge actually evolves:
- New questions emerge at intersections of established fields
- Methods migrate across disciplinary boundaries
- The same mathematical framework applies to economics, physics, and biology
- Breakthrough insights come from unexpected conceptual transfers
Our approach examines semantic fingerprints rather than predetermined classifications. We might identify two distinct concept clusters—recognizing a paper discusses both subject A and subject B without forcing it into existing boxes.
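As a rough illustration of the fingerprint idea, the sketch below checks whether a paper's concept embeddings form two well-separated clusters instead of forcing a single label; the embeddings, the clustering method, and the separation threshold are all assumptions, not our implementation:

```python
# Rough illustration: treat a paper's "semantic fingerprint" as the set of
# concept embeddings extracted from it, then check whether they form two
# well-separated clusters (subject A and subject B) rather than one.
# The embeddings, the clustering choice, and the threshold are assumptions.
import numpy as np
from sklearn.cluster import KMeans

def spans_two_subjects(concept_vectors, min_separation=0.5):
    """concept_vectors: (n_concepts, dim) array of embeddings for one paper."""
    if len(concept_vectors) < 2:
        return False
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(concept_vectors)
    c0, c1 = km.cluster_centers_
    # Distant cluster centres suggest the paper bridges two concept areas
    # instead of fitting one existing box.
    return float(np.linalg.norm(c0 - c1)) > min_separation
```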
The breakthrough questions often sit precisely where our classification systems try to draw lines.
What Peer Review Could Become
The current system strains beyond its design limits. Too many papers. Too few reviewers. Insufficient incentives.
We envision something more granular. Rather than asking one reviewer to assess everything, you ask multiple specialists to address specific questions where they have genuine expertise:
- One reviewer evaluates statistical methods
- Another assesses whether a borrowed technique was applied correctly
- A third examines broader significance
- Each contributes targeted judgment
This resembles Wikipedia's model: multiple contributors refining understanding toward truth. Currently impractical given how rapidly papers appear. But pointing toward a future where collaborative knowledge refinement happens at scale, where expertise becomes more distributed and specific.
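One way to picture this more granular model, purely as an illustration, is a submission decomposed into facet-specific review tasks, each routed to a specialist in that facet alone; the data shapes below are hypothetical:

```python
# Hypothetical data shapes for a more granular review: one submission split
# into facet-specific tasks, each routed to a specialist in that facet alone.
from dataclasses import dataclass

@dataclass
class ReviewTask:
    submission_id: str
    facet: str        # e.g. "statistical methods", "borrowed technique", "significance"
    question: str
    reviewer: str     # matched on this facet only

def split_review(submission_id, facet_assignments):
    """facet_assignments: {facet: (question, reviewer)} from expertise matching."""
    return [
        ReviewTask(submission_id, facet, question, reviewer)
        for facet, (question, reviewer) in facet_assignments.items()
    ]
```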
Where Human Judgment Remains Essential
We don't believe AI can evaluate genuinely novel ideas, because breakthrough science, by definition, presents ideas AI has never encountered. And genuine novelty is what research—especially grant proposals—should be about. Large language models remain fundamentally limited to patterns in their training data: they recognize variations on known themes but struggle with radical novelty.

The promising vision:
- AI handles routine tasks: pre-submission checks, formatting, conflict detection
- AI generates preliminary analysis: summaries, initial reviews, structural assessments
- Humans provide critical judgment: novelty evaluation, methodological rigor, breakthrough potential
- Humans verify AI output: cross-checking, validation, refinement
The future of peer review is probably synergy between humans and AI. Our role becomes identifying the right human reviewers to verify, cross-check, and validate AI-generated analysis.
This makes human judgment more valuable, not less. The scarce resource isn't processing power but genuine insight into what constitutes a meaningful advance.
The Cost of Rejection Without Review
Many publishers desk-reject papers before peer review. Sometimes the scope doesn't match. Sometimes editors judge it unlikely to pass review.
We believe this approach is fundamentally wrong. Only papers that fall outside the journal's scope or raise research integrity concerns should be rejected at the door. Everything else deserves feedback that allows authors to adapt and reach peer review.
The problem is we don't know what those rejected papers contained. Were they rightfully or wrongfully rejected? How many early-career researchers abandoned science after repeated desk rejections? How much potentially valuable work never saw light because it didn't fit neatly into existing categories?
We don't know how many people became disillusioned and left. Potentially good scientists. Potentially good science.
AI-driven pre-submission checks could transform this dynamic. Comprehensive but inexpensive automated review catches formatting and basic issues. Rather than rejecting papers, the system provides guidance: "Here are the blockers. Address these and your paper will be accepted for peer review."
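As a hypothetical sketch of what such a check might return, assuming the manuscript arrives as structured metadata; the individual checks and message wording are illustrative, not any publisher's actual pipeline:

```python
# Hypothetical pre-submission check that returns actionable blockers instead of
# a rejection. The individual checks and message wording are illustrative only.
def presubmission_check(manuscript):
    blockers = []
    if not manuscript.get("abstract"):
        blockers.append("Missing abstract.")
    if not manuscript.get("data_availability"):
        blockers.append("Add a data-availability statement.")
    if manuscript.get("reference_count", 0) == 0:
        blockers.append("No references detected; check reference formatting.")

    if blockers:
        return {
            "status": "needs_changes",
            "message": "Here are the blockers. Address these and your paper "
                       "will move on to peer review.",
            "blockers": blockers,
        }
    return {"status": "ready_for_peer_review", "blockers": []}
```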
This shifts the relationship from adversarial to collaborative—publishers and authors working together to present research in its best light.
Technology That Evolves With Science
Emerging fields challenge us because they're fluid. Concepts, terminology, and boundaries shift as fields develop. Solutions must evolve at the same pace.
Our dynamic ontology doesn't wait for fields to stabilize. It recognizes emerging patterns as they happen, adapting in real time. This mirrors how science actually works—not through top-down decree but through bottom-up emergence as researchers pursue questions and develop shared language.
We don't tell science how to organize itself. We watch how it organizes itself and build tools that recognize those organic patterns.
The terminology emerges through use. The boundaries get negotiated through practice. The structure reveals itself through accumulated work.
The Path Forward
The challenge of emerging research fields isn't temporary disruption. It's the new normal in an age of accelerating knowledge creation and increasingly interconnected disciplines.
For publishers launching journals in emerging areas: Your existing database may offer less help than expected. You need tools that map expertise based on actual research content, not subject classifications.
For funding agencies evaluating interdisciplinary proposals: Assemble panels with genuinely complementary expertise—not just representatives from each discipline, but reviewers who understand specific intersections.
For editors managing overwhelming volumes: Focus human attention where it matters—assessing novelty, evaluating rigor, identifying breakthroughs—while technology handles coordination.
Success depends on seeing technology as an enabler of better expertise matching—putting the right minds on the right problems, at the right time, with the right context.
In an age when the most important questions emerge at disciplinary intersections, when fields appear faster than we can name them, when yesterday's categories fail to contain tomorrow's insights—the ability to map expertise dynamically, semantically, in real time becomes essential.
The alternative is watching good science disappear not because it lacks value but because our organizational systems couldn't recognize it.