AI is turning scientists into publishing machines—and quietly funneling them into the same crowded corners of research.
That’s the conclusion of an analysis of more than 40 million academic papers, which found that scientists who use AI tools in their research publish more papers, accumulate more citations, and reach leadership roles sooner than peers who don’t.
But there’s a catch. As individual scholars soar through the academic ranks, science as a whole shrinks its curiosity. AI-heavy research covers less topical ground, clusters around the same data-rich problems, and sparks less follow-on engagement between studies.
The findings highlight a tension between personal career advancement and collective scientific progress, as tools such as ChatGPT and AlphaFold seem to reward speed and scale—but not surprise.
“You have this conflict between individual incentives and science as a whole,” says James Evans, a sociologist at the University of Chicago who led the study.
And as more researchers pile onto the same scientific bandwagons, some experts worry about a feedback loop of conformity and declining originality. “This is very problematic,” says Luís Nunes Amaral, a physicist who studies complex systems at Northwestern University. “We are digging the same hole deeper and deeper.”
Evans and his colleagues published the findings January 14 in the journal Nature.
A longstanding interest in how science evolves
For Evans, the tension between efficiency and exploration is familiar terrain. He has spent more than a decade using massive publication and citation datasets to quantify how ideas spread, stall, and sometimes converge.
In 2008, he showed that the shift to online publishing and search made scientists more likely to read and cite the same highly visible papers, accelerating the dissemination of new ideas but narrowing the range of ideas in circulation. Later work detailed how career incentives quietly steer scientists toward safer, more crowded questions rather than riskier, original ones.
Other studies tracked how large fields tend to slow their rate of conceptual innovation over time, even as the volume of papers explodes. And more recently, Evans has begun turning the same quantitative lens on AI itself, examining how algorithms reshape collective attention, discovery, and the organization of knowledge.
That earlier work often carried a note of warning: The same tools and incentives that make science more efficient can also compress the space of ideas scientists collectively explore. The new analysis now suggests that AI may be pushing this dynamic into overdrive.
AI’s impact on careers and research topics
To quantify the effect, Evans and collaborators from the Beijing National Research Center for Information Science and Technology trained a natural language processing model to identify AI-augmented research across six natural science disciplines.
Their dataset included 41.3 million English-language papers published between 1980 and 2025 in biology, chemistry, physics, medicine, materials science, and geology. They excluded fields such as computer science and mathematics that focus on developing AI methods themselves.
The researchers traced the careers of individual scientists, examined how their papers accumulated attention, and zoomed out to consider how entire fields clustered or dispersed intellectually over time. They compared roughly 311,000 papers that incorporated AI in some way—through the use of neural networks or large language models, for example—with millions of others that did not.
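As a rough illustration of what flagging AI-augmented papers involves, here is a minimal Python sketch that scans a paper’s title and abstract for AI-method terms. The team trained a full natural language processing model for this task; the keyword list and the matching rule below are invented stand-ins, not the study’s actual classifier.

```python
import re

# Hypothetical term list for illustration only. The study trained a
# full NLP model; this crude keyword matcher just conveys the idea
# of separating AI-augmented papers from the rest.
AI_TERMS = [
    "neural network",
    "deep learning",
    "machine learning",
    "large language model",
    "random forest",
]
AI_PATTERN = re.compile("|".join(AI_TERMS), flags=re.IGNORECASE)

def looks_ai_augmented(title: str, abstract: str) -> bool:
    """Rough proxy: does the paper's text mention an AI method?"""
    return AI_PATTERN.search(title + " " + abstract) is not None

# Made-up example record; prints True.
print(looks_ai_augmented(
    "Predicting protein stability",
    "We train a deep learning model on mutation data...",
))
```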
AI adoption boosts individual scientific impact, with AI-using researchers consistently earning more citations than those who do not use AI. Chart: Veda C. Storey
The results revealed a striking trade-off. Scientists who adopt AI gain productivity and visibility: On average, they publish 3 times as many papers, receive nearly 5 times as many citations, and become team leaders a year or two earlier than those who do not.
But when those papers are mapped in a high-dimensional “knowledge space,” AI-heavy research occupies a smaller intellectual footprint, clusters more tightly around popular, data-rich problems, and generates weaker networks of follow-on engagement between studies.
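To make the idea of a “footprint” in knowledge space concrete, here is a hedged sketch: embed each abstract as a TF-IDF vector and measure a group’s dispersion as the mean distance from its papers to their centroid. The toy abstracts, the TF-IDF embedding, and the footprint metric are illustrative assumptions; the study used a far richer representation over millions of papers.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy abstracts, invented for illustration; the actual analysis
# embedded millions of papers with a learned representation.
ai_abstracts = [
    "deep learning predicts protein structure from sequence",
    "a neural network classifies protein microscopy images",
    "machine learning screens protein binding candidates",
]
other_abstracts = [
    "field survey of volcanic rock strata in iceland",
    "synthesis and characterization of a novel organic catalyst",
    "long-term observation of coral reef recovery after bleaching",
]

# Shared vocabulary so both groups live in the same vector space.
vectorizer = TfidfVectorizer().fit(ai_abstracts + other_abstracts)

def footprint(texts: list[str]) -> float:
    """Dispersion of a group in embedding space: mean Euclidean
    distance from each paper to the group centroid."""
    X = vectorizer.transform(texts).toarray()
    centroid = X.mean(axis=0)
    return float(np.linalg.norm(X - centroid, axis=1).mean())

print("AI-paper footprint:   ", footprint(ai_abstracts))
print("Other-paper footprint:", footprint(other_abstracts))
```

At scale, a consistently smaller footprint for the AI group is the kind of clustering signal the study describes.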
The pattern held across decades of AI development, spanning early machine learning, the rise of deep learning, and the current wave of generative AI. “If anything,” Evans notes, “it’s intensifying.”
Intellectual narrowing isn’t the only unintended consequence. With automated tools making it easier to mass-produce manuscripts and conference submissions, journal editors and meeting organizers have seen a surge in low-quality and fraudulent papers and presentations, often produced at industrial scale.
“We’ve become so obsessed with the number of papers [that scientists publish] that we are not thinking about what it is that we are researching—and in what ways that contributes to a better understanding of reality, of health, and of the natural world,” says Nunes Amaral, who detailed the phenomenon of AI-fueled research paper mills last year.
Automating the most tractable problems
Aside from recent publishing distortions, Evans’s analysis suggests that AI is largely automating the most tractable parts of science rather than expanding its frontiers.
Models trained on abundant existing data excel at optimizing well-defined problems: predicting protein structures, classifying images, extracting patterns from massive datasets. Some systems have also begun to propose new hypotheses and directions of inquiry—a glimpse of what some now call an “AI co-scientist.”
But unless they are deliberately designed and incentivized to do so, such systems—and the scientists who rely on them—are unlikely to venture into poorly mapped territories where data are scarce and questions are messier, Evans says. The danger is not that science slows down, but that it becomes more homogeneous. Individual labs may race ahead, while the collective enterprise risks converging on the same problems, methods, and answers—a high-speed version of the same narrowing Evans first documented when search engines replaced library stacks.
“This is a really scary paper to think about in terms of how the second- and third-order effects of using AI in science play out,” says Catherine Shea, a social psychologist who studies organizational behavior at Carnegie Mellon University’s Tepper School of Business in Pittsburgh.
“Certain types of questions are more amenable to AI tools,” she notes. And in an academic environment in which papers are the main currency of success, researchers naturally gravitate toward the problems that are easiest for these tools to crank through and turn into publishable results. “It just becomes this self-reinforcing loop over time,” Shea says.
Could the narrowing be temporary?
Whether this trend persists may depend on how the next generation of AI tools is built and deployed across scientific workflows.
In a paper published last month, Bowen Zhou and his colleagues at the Shanghai Artificial Intelligence Laboratory in China argued that the application of AI in science remains fragmented, with data, computation, and hypothesis-generation tools often deployed in a siloed and task-specific fashion, limiting knowledge transfer and blunting transformative discovery. But when those elements are integrated, AI-for-science systems can help expand scientific discovery, says Zhou, a machine-learning researcher who previously served as chief scientist of the IBM Watson Group.
Perhaps, says Evans. But he doesn’t think that the problem is baked into the algorithmic design of AI. More than technical integration, he argues, what may matter most is overhauling the reward structures that shape what scientists choose to work on in the first place.
“It’s not about the architecture per se,” Evans says. “It’s about the incentives.”
Now, says Evans, the challenge is to deliberately redirect how AI is used and rewarded in science: “In some sense, we haven’t fundamentally invested in the real value proposition of AI for science, which is asking what it might allow us to do that we haven’t done before.”
“I’m an AI optimist,” he adds. “My hope is that this [paper] will be a provocation to using AI in different ways”—ways that expand the kinds of questions scientists are willing to pursue, rather than simply accelerating work on the most tractable ones. “This is the grand challenge if we want to be growing new fields.”