In today’s rapidly evolving information ecosystem, people face significant challenges in discerning reliable, trustworthy information. Social media has amplified these risks, increasing exposure to scams, fraud, low-quality information, and false news. It has also driven rising political polarization and declining trust in high-quality information. These trends disproportionately harm vulnerable groups, including older adults, adolescents, and diverse communities, who are often targeted yet have fewer skills and resources to protect themselves.
Generative AI further intensifies these challenges. It can now create highly persuasive text, images, and voices at scale and deploy them through microtargeting, making it even harder to evaluate information quality, especially for vulnerable populations. While these technologies offer new opportunities, they may also amplify threats to information integrity, exacerbate societal inequalities at scale, and advance at a pace that outstrips society’s ability to develop the adaptive literacy skills needed to navigate an increasingly complex, AI-driven information ecosystem.
With 2022 Stage 2 funding from Stanford Impact Labs, the Empowering Diverse Digital Citizens Lab — a collaboration between Stanford’s Social Media Lab and industry and nonprofit partners — tested digital media literacy interventions designed to help individuals make better-informed decisions and protect themselves, their friends, families, and communities from dubious or misleading claims.
In partnership with the American Library Association (ALA), Jigsaw, and the Poynter Institute’s MediaWise, the lab designed and tested interventions that help older adults and diverse communities discern fact from fiction and build trust in credible news and information. We worked with community-based organizations to conduct listening sessions with communities of color, ensuring our approach was responsive and inclusive. We developed and tested digital literacy interventions, including the Misinformation Resilience Toolkit for Libraries and tailored interventions for diverse communities. In 2024, we scaled our short digital literacy videos for older adults, reaching more than 10 million people in the U.S. Field studies showed these videos significantly improved older adults’ digital literacy skills at a median cost of only $0.22 per person.
With 2025 Stage 2 funding from Stanford Impact Labs, the lab is expanding its digital literacy interventions to build digital resiliency in an AI-transformed information ecosystem. We will develop, evaluate, and scale validated AI literacy interventions that mitigate AI harms and address the unequal distribution of AI’s benefits across society. By enhancing AI literacy for a wide range of people, particularly older adults, adolescents, and diverse communities, we aim to support the purposeful and positive use of AI tools while fostering agency, information integrity, equality, safety, and connection. All of this work will be conducted in collaboration with our established partner network, joined by Common Sense Media and the News Literacy Project.