This article discusses suicide and suicidal ideation. If you or someone you know is struggling or in crisis, help is available. Call or text 988 or chat at 988lifeline.org
In September on Capitol Hill, while sitting across from the most recent survivor parents as they bravely testified about the horrors unleashed by chatbots that led to their children’s deaths, I suddenly felt ancient and obsolete.
Five years ago, I lost my 16-year-old son, Carson, to suicide after he was viciously cyberbullied by his high school classmates over Snapchat’s anonymous app, Yolo. This dangerous product design allowed teens to hurl anonymous insults at a known teen victim.
I, too, testified before Congress in 2023 and strongly advocated for legislation to address this completely unregulated industry, which is harming our children. This marked the beginning of my harrowing journey – repeatedly telling the story of the worst day of my life to an endless list of staff members on Capitol Hill.

My story was told alongside other survivor parents whose children died of sextortion, fentanyl poisoning from drugs marketed, purchased and delivered over Snapchat, deadly online challenges and pro-suicide content fed to vulnerable children over Instagram, TikTok and YouTube.
Sometimes, staffers would shed sincere tears. Other times, they would glance down at their phones, visibly uncomfortable, even in the middle of our tragic tales. Many met us with blank, emotionless stares.
Social media and AI companies experiment on children with few legal checks

These trips were exhausting, but we remained focused on creating change. It appeared our efforts were paying off when the Senate overwhelmingly passed the Kids Online Safety Act (KOSA) in a 91-3 vote in July 2024.
Clear support existed in the House. Had Speaker Mike Johnson, R-Louisiana, brought it to the floor, it likely would have passed. I believe that the promise of a $10 billion artificial intelligence center in Louisiana, offered by Meta, kept KOSA from ever reaching the House floor – even though it could have saved children’s lives.
So here we are today with no comprehensive regulation for social media companies. We also still face the huge barrier of Section 230, the nearly 30-year-old law that prevents parents from suing social media and AI companies over the products that killed their children. This is a recipe for disaster – amplified now by the recent evidence of chatbot-linked suicides.
Just as tech companies have evolved their products, the dangers have evolved, too. I have started to think of the parent survivor movement in the same terms the technology industry uses to label their advancing products: Harms 1.0 and Harms 2.0.
Harms 1.0 families have endured the loss of a child due to addictive and dangerous design choices and algorithms intended to keep kids online longer so that their data can be collected and sold to advertisers for profit.
Harms 2.0 – products like ChatGPT and Character AI, rated ages 13+ and 18+ on Apple’s App Store – feature human-like interaction, encourage isolation from parents and can aid in teen suicide planning. Similarly, AI chatbot companies use conversation to keep children engaged and to collect data to build large language models (LLMs), further advancing their products.
Harms 2.0 persist because Harms 1.0 were never addressed through the Kids Online Safety Act or Section 230 reform. In both cases, this model of engagement, whether through fear of missing out or sycophantic chatbot replies, contributes to the growing isolation, depression and anxiety of our teens while Big Tech profits.
The harms of social media were never addressed. Now it’s even more dangerous.

We can thank the first generation of social media products for exploiting teenage vulnerabilities, which paved the way for the rapid adoption of Character AI products. Nearly 20 years after the introduction of social media, 95% of teenagers use the platforms, with half using them almost constantly.
AI chatbots were widely introduced less than three years ago. Already, nearly 75% of teens use them, with more than half using them regularly, according to Common Sense Media.
When I came forward with Carson’s story and my lawsuit against Snapchat, Yolo and LMK four years ago, I felt alone. Soon, I found a group of parents who had experienced similar losses. My story was only the beginning. I believe this will also happen with the next generation of online harms. By going public with their stories, the chatbot survivor parents have courageously opened the door to many other families sharing their experiences.
Listening to chatbot survivor moms speaking in a senator’s office, I was struck by the similarities in our stories. We were all diligent parents – delaying phone use, monitoring our children’s screen time and using parental controls. Yet we still lost our children.
As parents, it is unconscionable to think that a company would experiment on children and view them as collateral damage in the name of profit. Before this happened to us, we were unaware that anonymous apps had been integrated into Snapchat (they’ve since been banned), that harmful challenges and pro-suicide content were being algorithmically fed to teens, or that drug dealers were using Snapchat to sell their illegal substances.
However, thanks to the voices of my many fellow Harms 1.0 survivor parents, the world now knows this is possible. Now, with Character AI and ChatGPT parents coming forward, the world also knows that an app recommended to children can encourage them to consider suicide and withdraw from the parents who could have saved them.
There are signs of progress. The recent introduction of the GUARD Act ‒ a bipartisan effort led by Sens. Josh Hawley, R-Missouri, and Richard Blumenthal, D-Connecticut, that would ban AI chatbot companions for minors ‒ and the AI LEAD Act ‒ introduced by Sens. Dick Durbin, D-Illinois, and Hawley, which would allow AI companies to be sued for harms caused by their products ‒ shows that lawmakers across party lines recognize the need to hold tech companies accountable.

These bills, alongside the Kids Online Safety Act, reflect a growing consensus that protecting children online transcends politics.
Together, we will continue to push for tech accountability. We demand real legislation that protects kids from social media and AI. Congress, please act now, before parents face Harms 3.0: The Unimaginable.
Kristin Bride is a parent survivor, social media reform advocate and the executive director of the nonprofit organization The Carson J. Bride Effect. She is a founding member of Parents RISE!, the first grassroots parent survivor-led group advocating for tech accountability and kids’ online safety. She is a member of Issue One’s Council for Responsible Social Media.