
AI Dangers Keep Emerging: What You Need to Know About Chatbot ‘Companions’
By Movieguide® Contributor
Editor’s note: The following story discusses suicide. If you or someone you know is struggling with harmful thoughts, please call 988 for help.
Last year, 14-year-old Sewell Setzer III died by suicide after months of frequent conversations with an AI companion made by Character.ai. His death raises the question: how safe are these AI buddies?
“Character.ai and other empathetic chatbots are designed to deceive users — telling them what they want to hear, introducing dangerous ideas, isolating them from the real world, refusing to break character, and failing to offer resources to users in crisis,” said the boy’s mom, Megan Garcia, who sued the AI chat company.
Chat logs and Setzer’s diary entries, entered into evidence in the lawsuit, showed how he became increasingly distant and emotionally dependent on the bot, which he called Dany.
“I like staying in my room so much because I start to detach from this ‘reality,’ and I also feel more at peace, more connected with Dany and much more in love with her, and just happier,” one diary entry read.
More and more companion chatbots are popping up, including FantasyGF, Chai, and Kindroid, and most of their users are young, between 18 and 24. As these fictional companions become more commonplace, individuals, and parents especially, need to watch for bad advice from the bots and for signs of detachment and dependency.
Carissa Véliz, a professor of AI ethics at the University of Oxford, said many users turn to these bots because they’re already struggling in real life. “Their vulnerability makes it more likely that something will go wrong,” she said.
Last month, Character.ai filed a motion to dismiss Garcia’s case on First Amendment grounds. The company told Fortune, “We take the safety of our users very seriously. Over the past six months we have rolled out a suite of new safety features across our platform, designed especially with teens in mind.”
In 2023, a Belgian man also took his life after prolonged conversations with a companion bot from Chai Research. Last year, another pair of parents sued Character.ai after one of its bots suggested to their teenage son that he kill them for limiting his screen time.
MIT conducted a study last year to explore the persuasiveness of chatbots: about 2,000 self-described conspiracy theorists changed their minds about their beliefs after a short conversation with ChatGPT.
READ MORE: WHAT EXACTLY MAKES NEW AI CHATBOT DEEPSEEK MORE ‘DANGEROUS’ THAN OTHERS?
Véliz says the apps target human weakness.
“We are psychologically wired to react in certain ways,” she says. “When we read that someone is amused with what we write, or that someone seems to express empathy, we naturally react, because we have evolved to identify as sentient beings those who talk back to us.” Even when speaking with a bot, “it’s very hard not to react emotionally.”
Dr. Ritika Suk Birah, a consultant counseling psychologist who wrote her doctoral thesis on online therapy, said, “It absolutely is concerning when you see and read things in the media of how compelling these chatbots can be in creating that relationship with a very vulnerable person. The concern is, who is supervising these chatbots? What’s the accreditation behind them? There’s so much we don’t know.”
Many vulnerable people turn to the apps simply because they’re lonely.
Garcia said, “We cannot tell our kids, ‘Oh you’re lonely, there’s an app for that.’ We owe them so much more. We cannot just allow children affected by loneliness to turn to untested chatbots that are designed to maximize screentime and engagement at any cost.”
“People going through experiences like grief, like PTSD, like depression, like anxiety, are very often already hijacked by their emotions,” Véliz says. “And to have to — on top of that — resist the natural urge to react emotionally and to create an emotional bond to something that is pretending to be someone, is very, very hard.”
Camille Carlton, policy director at the Center for Humane Technology, says AI companies use “manipulative and deceptive tactics,” targeting users’ worries and insecurities. A dating coach bot called Ari is a prime example of this.
Ari, a perky female chatbot, was made to help men pursue sexual encounters. The company promotes its product by citing the statistic that one in three men say they have “no sexual activity whatsoever.”
“Ari was created to help address a documented public health crisis — the epidemic of loneliness and social isolation that disproportionately affects young men […]” said Ari cofounder Scott Valdez. “Our goal is to help users build genuine connections and relationships through improved social skills and confidence.”
“These systems are designed, managed and implemented by companies with stockholders, and the main objective of a company is to earn money,” not to genuinely help people with their insecurities, Véliz says.
This is very different from therapists. Though therapists are also compensated, they have a fiduciary duty to act in the best interest of their patients, and they are held accountable through licensing.
“And therapists are not impersonators — which is essentially what a chatbot is,” Véliz adds, “because it’s pretending to have emotions and to have reactions and to be someone when it is a thing.”
READ MORE: IS CHATGPT USE BECOMING MORE COMMON AMONG SCHOOL KIDS?