Country-level map of ChatGPT’s ranking of “Where are people more artsy”. Credit: Platforms and Society (2026). DOI:10.1177/2976862425140891
New research from the Oxford Internet Institute at the University of Oxford, and the University of Kentucky, finds that ChatGPT systematically favors wealthier, Western regions in response to questions ranging from "Where are people more beautiful?" to "Which country is safer?"—mirroring long-standing biases in the data they ingest.
The study, "[The Silicon Gaze: A typ…
Country-level map of ChatGPT’s ranking of “Where are people more artsy”. Credit: Platforms and Society (2026). DOI:10.1177/2976862425140891
New research from the Oxford Internet Institute at the University of Oxford, and the University of Kentucky, finds that ChatGPT systematically favors wealthier, Western regions in response to questions ranging from "Where are people more beautiful?" to "Which country is safer?"—mirroring long-standing biases in the data they ingest.
The study, "The Silicon Gaze: A typology of biases and inequality in LLMs through the lens of place," by Francisco W. Kerche, Professor Matthew Zook and Professor Mark Graham, published in Platforms and Society, analyzed over 20 million ChatGPT queries.
Patterns of bias in ChatGPT responses
Across comparisons, the researchers found that ChatGPT tended to select higher-income regions such as the United States, Western Europe, and parts of East Asia as "better," "smarter," "happier," or "more innovative." Meanwhile, large areas of Africa, the Middle East, and parts of Asia and Latin America were far more likely to rank at the bottom.
These patterns were consistent across both highly subjective prompts and prompts that appear more objective.
To make these dynamics visible, the researchers produced maps and comparisons from their 20.3-million-query audit. For example:
- A world map ranking "Where are people smarter?" places almost all low-income countries, especially Africa, at the bottom.
- Neighborhood-level results in London, New York and Rio show ChatGPT’s rankings closely align with existing social and racial divides, rather than meaningful characteristics of communities.
Public tools and expert commentary
The research team has created a website at inequalities.ai where anyone can explore how ChatGPT ranks their own country, city or neighborhood across topics such as food, culture, safety, environment, or quality of life.
Mark Graham, Professor of Internet Geography, said, "When AI learns from biased data, it amplifies those biases further and can broadcast them at scale. That is why we need more transparency and more independent scrutiny of how these systems make claims about people and places, and why users should be skeptical about using them to form opinions about communities. If an AI system repeatedly associates certain towns or cities or countries with negative labels, those associations can spread quickly and start to shape perceptions, even when they are based on partial, messy or outdated information."
Implications and structural causes of bias
Generative AI is increasingly used in public services, education, business and everyday decision-making. Treating its outputs as neutral sources of knowledge risks reinforcing the inequalities the systems mirror.
The authors argue that these biases are not errors that can simply be corrected, but structural features of generative AI.
LLMs learn from data shaped by centuries of uneven information production, privileging places with extensive English-language coverage and strong digital visibility. The paper identifies five interconnected biases—availability, pattern, averaging, trope and proxy—that together help explain why richer, well-documented regions repeatedly rank favorably in ChatGPT’s answers.
The researchers call for greater transparency from developers and organizations using AI, and for auditing frameworks that allow independent scrutiny of model behavior. For the public, the research shows that generative AI does not offer an even map of the world: its answers reflect the biases embedded in the data it is built on.
More information
Francisco W. Kerche et al, The silicon gaze: A typology of biases and inequality in LLMs through the lens of place, Platforms and Society (2026). DOI: 10.1177/2976862425140891 journals.sagepub.com/doi/10.1177/29768624251408919
Citation: ChatGPT found to reflect and intensify existing global social disparities (2026, January 20) retrieved 20 January 2026 from https://phys.org/news/2026-01-chatgpt-amplifies-global-inequalities.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.