December 8, 2025
2 min read
The International Committee of the Red Cross warned that artificial intelligence models are making up research papers, journals and archives
By Dan Vergano edited by Claire Cameron

Never heard of the Journal of International Relief or the International Humanitarian Digital Repository? That’s because they don’t exist.
But that’s not stopping some of the world’s most popular artificial intelligence models from sending users looking for records such as these, according to a new International Committee of the Red Cross (ICRC) statement.
OpenAI’s ChatGPT, Google’s Gemini, Microsoft’s Copilot and other models are befuddling students, researchers and archivists by generating “incorrect or fabricated archival references,” according to the ICRC, which runs some of the world’s most used research archives. (Scientific American has asked the owners of those AI models to comment.)
AI models not only point users to false sources but also burden researchers and librarians, who waste time searching for records that don't exist, says Sarah Falls, chief of researcher engagement at the Library of Virginia. Her library estimates that 15 percent of requests are now for hallucinated citations, covering both published works and unique primary-source documents. “For our staff, it is much harder to prove that a unique record doesn’t exist,” she says.
This is not the first time AI has been caught fabricating citations. Rather than assuming anything an AI cites is real, no matter how authoritative it sounds, the ICRC recommends that people consult online catalogs or the references in existing published scholarly works to find real studies. The Library of Virginia will ask researchers to vet the sources behind their requests, Falls says, and to disclose whether a source originated from AI. “We’ll likely also be letting our users know that we must limit how much time we spend verifying information.”