Tilly Norwood, a 100 percent AI-generated actress, made her debut during the Zurich Film Festival. She’s currently ruffling feathers on the red carpet. As a child and adolescent clinical psychologist, I don’t need a crystal ball to predict where concerns will swell next.
Recent deaths by suicide and behavioral health crises have spotlighted American teens and their use of generic AI chatbots for mental health. To be clear, neither Tilly nor her designers are claiming that she exists to offer mental health support. But that is true of all generic AI chatbots: They were neither designed nor intended for mental health treatment, yet these technologies are being used for that very purpose.
Regulating this space and building a clinical evidence base for digital health technologies that are safe and effective for use in mental health treatment will take time, money, and a complex alignment of scientific, regulatory, legislative, consumer, and technology stakeholders. But as Tilly's debut reminds us, technology is sprinting ahead and will not wait. While the wheels of government grind on (or don't), adults who care for youth can and should be acting to reduce harm for teens.
They should start with understanding why teens are turning to AI for their mental health needs. Data on the risks of replacing a real therapist with an algorithm are widely circulating. Yet many teens slide under the caution tape and engage with chatbots looking for more than casual conversation. Data suggest several reasons for this that extend beyond chatbots' persuasive (and commercially driven) algorithmic design, including mental health system failures, concerns about privacy, and stigma.
According to data from the 2024 SAMHSA National Survey on Drug Use and Health, more than 70 percent of youth reported they did not receive mental health treatment when they needed it due to concerns about what people might think, and 65 percent worried that the information they shared with a health professional would not be kept private. Staggeringly, over 90 percent of teens did not receive care because they believed they should be able to handle their mental health and emotions on their own.
As we train teens to become digital citizens and critical consumers of social media and technology, we should guard against inadvertently perpetuating their feelings of shame around seeking mental health support. Normalize mental health as core to overall health and explain—and demonstrate—that asking for help is encouraged and even signals strength.
We can also encourage teens to lean into the power of human connection even outside the context of formal mental health services. In its national survey of teens ages 12 to 17, Common Sense Media reported that nearly 70 percent of teens found AI conversations less satisfying than human conversations.
Make yourself available. No mental health license is required for a well-intentioned adult to get curious with teens, invite them to ask questions, or share their thoughts about their digital experiences. These subtle interventions may not be as flashy as technology, but they can help teens develop reliable and trustworthy connections with adults. These connections may make a teen more willing to disclose a mental health concern while also allowing adults to monitor for any emerging challenges so they can determine if formal support is needed. Research has repeatedly demonstrated that supportive relationships with trusted adults are protective and promote youth well-being and adjustment. Connection can assure youth—even those who value their independence—that they do not have to manage their mental well-being alone.
It will take coordinated action to protect teens from the harmful consequences of using unregulated, generic AI chatbots in place of the evidence-based supports and interventions they need. Some of this work is already underway. September alone saw health care innovation leaders from the American Psychological Association testify before the U.S. House of Representatives, federal lawmakers introduce the Children Harmed by AI Technology Act, and the FTC launch an investigation into seven companies with consumer-facing AI chatbots to determine how, if at all, they are monitoring and mitigating the negative outcomes for children and adolescents using these technologies.
This is progress. But the time these efforts take can feel painfully slow when the well-being of children and teens you care for may be at stake. You don't have to wait. There are strategies within reach to reduce harm, and you can start today.