Nearly three in four teens report using AI chatbots, and about one in three teen users report feeling uncomfortable with something an AI chatbot has included in an output, according to a recent Common Sense Media survey. To the children and teens using them, these technologies are not an experiment, even if companies still treat them like one. As chatbots are deployed to hundreds of millions of kids and teens, documented harms have been widespread, including mental health issues, financial harm, medical harm, emotional dependence, manipulation and deception, psychosis, delusional thinking, self-harm and suicide, bias reinforcement, and anger or impulsive actions. Enforcers and policymakers now face a familiar challenge: applying longstanding laws to emerging technologies.
Recognizing the urgency of these issues, we’ve come together – a team of privacy experts and former enforcers – to outline how existing laws can meet emerging AI risks to kids and teens. The “How Existing Laws Apply to AI Chatbots for Kids and Teens” reference guide offers a practical overview of how existing legal frameworks can address emerging risks associated with chatbots used by or directed toward minors.
It underscores a key point: There is no AI exemption in the law. Federal and state consumer protection, data privacy, and data security statutes continue to apply, even as “new” technologies reshape how harms manifest in our lives.
This resource synthesizes current enforcement themes across jurisdictions, highlighting how federal and state privacy, data breach, and unfair or deceptive acts and practices (UDAP) laws can be used to tackle chatbot-related harms – from targeted advertising and data monetization to deceptive marketing and the misuse of AI for therapeutic treatment. Drawing from recent regulatory and enforcement actions, the guide identifies key legal concepts and existing authorities relevant to enforcers, including:
- Restrictions on targeted ads and data selling or sharing involving minors under state privacy laws.
- COPPA obligations for data collection, retention, and parental consent.
- The use of UDAP authorities to challenge unfair or deceptive representations or practices around chatbot safety or capabilities.
- State-level initiatives governing AI mental health tools and “companion” chatbots.
The resource is not an exhaustive 50-state survey, but a practical starting point for enforcement teams and policy staff. It helps enforcers confronting chatbot-related harms draw on a wide range of legal authorities – for instance, by examining whether broad data collection practices implicate both consumer protection and children’s privacy statutes. While this resource focuses on combating chatbot harms that affect kids and teens, we recognize other pressing areas where chatbots have caused harm, including the abuse of health and biometric data, the irresponsible handling of sensitive information, and deceptive or misleading chatbot interactions. We hope to explore these topics in the future.
As kids and teens are increasingly exposed to chatbots marketed by companies with a well-documented history of reckless and sometimes deadly conduct, understanding how existing laws map onto AI systems is essential. This guide is designed to connect established consumer protection and privacy frameworks to emerging technologies – reaffirming that new products and services don’t erase existing obligations.
**See the resource here:** How Existing Laws Apply to AI Chatbots for Kids and Teens
***
*This blog is crossposted at the Electronic Privacy Information Center (EPIC), UC Berkeley Center for Consumer Law & Economic Justice, Vanderbilt Policy Accelerator, and Georgetown Institute for Technology Law & Policy.*
**Contributors:**
- Suzanne Bernstein, EPIC Counsel
- Alan Butler, EPIC Executive Director
- John Davisson, EPIC Director of Litigation
- Caitriona Fitzgerald, EPIC Deputy Director
- Samuel A.A. Levine, Senior Fellow at UC Berkeley Center for Consumer Law & Economic Justice, Former Director of the Bureau of Consumer Protection at the Federal Trade Commission
- Erie K. Meyer, Senior Fellow at Vanderbilt Policy Accelerator and Georgetown Institute for Technology Law & Policy, Former CFPB Chief Technologist
- Stephanie T. Nguyen, Senior Fellow at Vanderbilt Policy Accelerator and Georgetown Institute for Technology Law & Policy, Former Chief Technologist at the Federal Trade Commission
- Kara Williams, EPIC Counsel
The Electronic Privacy Information Center (EPIC) is a public interest research center in Washington, D.C., seeking to protect privacy, freedom of expression, and democratic values in the information age.
The Georgetown Institute for Technology Law & Policy drives solutions at the nexus of law, policy, and technology—championing justice, inclusion, and accountability at this critical intersection.
The Vanderbilt Policy Accelerator focuses on cutting-edge topics in political economy and regulation to swiftly bring research, education, and policy proposals from infancy to maturity.
The UC Berkeley Center for Consumer Law & Economic Justice works to expand the reach of consumer protection laws and promote equity, fairness, and justice in the marketplace.