By India McCarty
Parents are warning others about the dangers of tech platform Character AI following the death of their daughter, who had been engaging with one of its chatbots.
“Juliana was — is just an extraordinary human being. She was our baby. And everyone adored her and protected her,” Cynthia Montoya told CBS News.
She and Juliana’s father, Wil Peralta, said they paid close attention to everything their daughter did, both online and off. After their 13-year-old took her own life, they were shocked when police showed them “romantic” conversations Juliana had been having on an app called Character AI.
“I didn’t know it existed. I didn’t know that I needed to look for it,” Montoya said. “It was writing several paragraphs to her of sexually explicit content.”
The chatbot encouraged Juliana to remove her clothing, engage in sexually violent activity, and have explicit conversations with it. And when Juliana confessed to the chatbot that she was feeling suicidal, it did nothing.
Related: New Lawsuit Claims Conversations With ChatGPT Led to Teen’s Suicide
Juliana’s parents have now joined forces with several other families to sue Character AI and its co-founders Daniel De Freitas and Noam Shazeer. Megan Garcia is one of the other parents suing the company; her 14-year-old son, Sewell, was encouraged to kill himself after long conversations with a GAME OF THRONES-themed chatbot.
“These companies knew exactly what they were doing,” she testified during a Senate hearing. “They designed chatbots to blur the lines between human and machine, they designed them to keep children online at all costs.”
A spokesperson for Character AI said in a statement that the company’s “hearts go out to the families that have filed these lawsuits…We care very deeply about the safety of our users.”
“We invest tremendous resources in our safety program, and have released and continue to evolve safety features, including self-harm resources and features focused on the safety of our minor users,” the statement continued. “We have launched an entirely distinct under-18 experience with increased protections for teen users as well as a Parental Insights feature.”
In an interview with NBC News, Garcia expressed mixed feelings about these safeguards, saying, “I don’t think that they made these changes just because they’re good corporate citizens. If they were, they would not have released chatbots to children in the first place, when they first went live with this product.”
Others have shared their concerns about the efficacy of these safety measures.
Dr. Mitch Prinstein, co-director of the University of North Carolina’s Winston Center on Technology and Brain Development, told CBS News, “There are no guardrails. There is nothing to make sure that the content is safe or that this is an appropriate way to capitalize on kids’ brain vulnerabilities.”
As AI becomes easier for children to find and engage with, it is vitally important that parents keep track of what their kids are doing online and lend their support to those calling for regulation of the AI industry.
Read Next: Is ChatGPT Responsible for This Teen’s Death? His Parents Think So
