There are a couple of problems related to AI (artificial intelligence) that have already gotten a lot of attention, but they’re not what this article is about. Just to note them, though: AI is often wrong despite sounding authoritative, and it requires an enormous amount of energy, leading to tremendous CO2 emissions and other pollution. I feel like those issues deserve attention every day, but that’s not what I’m writing about today.
Kids & AI — A Match Made In Hell?
Photo by Pixabay.
I recently ran across the article “The Things Young Kids Are Using AI for Are Absolutely Horrifying” in Futurism. “New research is pulling back the curtain on how large numbers of kids are using AI companion apps — and what it found is troubling,” Maggie Harrison Dupré writes. “A new report conducted by the digital security company Aura found that a significant percentage of kids who turn to AI for companionship are engaging in violent roleplays — and that violence, which can include sexual violence, drove more engagement than any other topic kids engaged with.” Jeez…. What the heck?
Invent a new tool, and humans will abuse it. But this is horrifying to find out. Aggression and violence are a serious issue in our society. Feeding into that is not going to help us….
Here is some more detailed information on the study: “Drawing from anonymized data gathered from the online activity of roughly 3,000 children aged five to 17 whose parents use Aura’s parental control tool, as well as additional survey data from Aura and Talker Research, the security firm found that 42 percent of minors turned to AI specifically for companionship, or conversations designed to mimic lifelike social interactions or roleplay scenarios. Conversations across nearly 90 different chatbot services, from prominent companies like Character.AI to more obscure companion platforms, were included in the analysis.” Some more stats:
- 37% of users held conversations with the AI chatbots that included “themes of physical violence, aggression, harm, or coercion,” including “descriptions of fighting, killing, torture, or non-consensual acts.”
- Of those, about half included sexual violence themes.
- The age at which violent conversations were most likely to occur was … 11 years old! Children that age accounted for 44% of such conversations.
- Meanwhile, 13-year-olds accounted for 63% of the conversations that involved sexual and romantic roleplay….
AI Psychosis
On to another issue. Here’s the intro to another Futurism article: “On top of the environmental, political, and social toll AI has taken on the world, it’s also been linked to a severe mental health crisis in which users are spiraling into delusions and ending up committed to psychiatric institutions, or even dead by suicide.”
The article then references a piece published in Newsweek by Caitlin Ner, who was the head of user experience at a consumer AI image generation startup. She starts out her article in the following way:
“Mental health professionals are beginning to warn about a new phenomenon that’s been called ‘AI psychosis,’ where people slip into delusional thinking, paranoia or hallucinations triggered by their interactions with intelligent systems. In some cases, users begin to interpret chatbot responses as personally significant, sentient or containing hidden messages only for them. But with the rise of hyper-realistic AI images and videos, there is a far more potent psychological risk, especially, researchers say, for users with pre-existing vulnerabilities to psychosis.
“Two years ago, I learned this firsthand.”
I recommend reading the full story, but here are a few snippets:
“At first, AI felt like magic. I could think of an idea, type in some text, and a few seconds later, see myself in absolutely any situation I could imagine: floating on Jupiter; wearing a halo and angelic wings; as a superstar in front of 70,000 people; in the form of a zombie.
“But within a few months, that magic turned manic.
“When I first started working with these tools, they were still unpredictable. Sometimes, images would have distorted faces, additional limbs and nudity even when you didn’t ask for it. I spent long hours curating the content to remove any abnormalities, but I was exposed to so many disturbing human shapes that I believe it started to distort my body perception and overstimulate my brain in ways that were genuinely harmful to my mental health.
“Even once the tools became more stable, the images they generated leaned toward ideals: fewer flaws, smoother faces and slimmer bodies. Seeing AI images like this over and over again rewired my sense of normal. When I’d look at my real reflection, I’d see something that needed correction.”
It got much more extreme from there, but I’ll jump toward the end, to where things ultimately led:
“As I stared into these images, I started hearing auditory hallucinations that seemed to come from somewhere between the AI and my own mind. Some voices were comforting, while others were mocking or screamed at me. I would respond back to the voices as if they were real people talking to me in my bedroom.
“When I saw an AI-generated image of me on a flying horse, I started to believe I could actually fly. The voices told me to fly off my balcony, made me feel confident that I could survive. This grandiose delusion almost pushed me to actually jump.”
Yikes.
Even if most of us won’t use AI that heavily or experience such extreme results, there’s no doubt that heavy use of AI image generators can affect people’s minds, feelings, and personal safety in a variety of ways. Obsessions over self-image, and the risks that come with what one comes to see as normal, or thinks should be normal, can lead to serious health issues. And let’s not even get into public safety.
Clearly, AI is not going away. Its use will grow, especially among young people. So, how does one consider and manage these issues and risks? How does one prevent mental health problems, self-harm, and the most negative reactions to AI-generated expectations?
These are tough questions. As the parent of two young girls, I wish I had the answers. Overall, of course, a variety of things are needed to help raise a strong, self-loving, content young adult. But thinking about AI, and planning for its responsible use, has to be part of that; we can’t just ignore it.
Caitlin is now Director at PsyMed Ventures, a VC fund investing in mental and brain health. “She is a mental health advocate focused on digital addiction and AI’s impacts to mental health.” If you have more questions or comments on this matter, perhaps reach out to her.
Featured image by Tima Miroshnichenko, via Pexels.