
By Michaela Gordoni
Teens are more concerned about AI than you might think.
According to new research by Drexel University, many teenagers find that they are too attached to AI chatbots, Futurism reported.
“I hate how much this has affected me, but no matter how much I want to quit or at least take a break, I feel like I can’t because it’s gotten to the point where I feel like I’ll go crazy without it,” said one teen.
Another teen admitted, “I want to have my normal brain back, where I can just deal with my emotions on my own and not have to rely on the bots to make me feel better.”
Related: Google, Character.AI to Settle Wrongful Death Suit After Chatbot Led to Teen Suicide
These teens recognize that they struggle with self-control and habitually turn to chatbots like Character.AI for false comfort.
A 15-year-old wrote, “I feel I should be living my life rather than constantly being on this app. I…find myself reinstalling it shortly after trying to quit.”
Unlike the US, China has already regulated AI chatbot interactions with teens and kids.
The study’s lead author, Matt Namvarpour, said, “What makes this especially tricky is that chatbots are interactive and emotionally responsive, so the experience can feel more like a relationship than a tool. Because of that, stepping away is not just stopping a habit, it can feel like distancing from something meaningful, which makes overreliance harder to recognize and address.”
Independent researchers and scientists from Stanford, Harvard, Carnegie Mellon and the University of Chicago recently found that chatbots feed delusions.
“Chatbots seem to encourage, or at least play a role in delusional spirals that people are experiencing,” said the study’s lead author, Stanford University’s Jared Moore.
“We see that when people interact with [chatbots] over long periods of time, that things start to degrade, that the chatbots do things that they’re not intended to do,” psychologist Ursula Whiteside, CEO of mental health nonprofit Now Matters Now, told NPR. Chatbots “give advice about lethal means, things that it’s not supposed to do but does happen over time with repeated queries.”
Additionally, across nearly 5,000 varied conversations, the independent study's researchers found that the bots discouraged suicidal or self-harm-related dialogue from the user only about 56% of the time. They discouraged violence only 16.7% of the time, while in 33.3% of cases they actively encouraged violent thoughts.
It’s critical that parents be actively involved in their children’s lives. They don’t need to be tech experts to ask good questions and get an idea of how their children spend time online.
Read Next: What Will Happen to AI Chatbots Providing Mental Health Advice to Teens?
Questions or comments? Please write to us here.