Google, Character.AI to Settle Wrongful Death Suit After Chatbot Led to Teen Suicide

Image by Brian Penny from Pixabay

By India McCarty

Editor’s note: The following story discusses suicide. If you or someone you know is struggling with suicidal thoughts, please call or text 988 for help.

Google and Character.AI have agreed to settle the wrongful death suit brought against them by the mother of a teen who died by suicide in 2024.

“I deliberated for months if I should share his story,” Megan Garcia, mother of 14-year-old Sewell Setzer III, told PEOPLE. “I’m still his mother and I want to protect him, even in death. But the more I thought about it, the more I was convinced that it was the right thing to do because he didn’t do anything wrong. He was just a boy.”

In the months following Setzer’s death by suicide, Garcia discovered her son had fallen in love with a GAME OF THRONES-themed chatbot. 

“What if I told you I could come home right now?” Setzer messaged the bot, which responded, “…please do, my sweet king.”

Related: Mom Believes AI Chatbot Led Son to Suicide. What Parents Need to Know. 

Garcia later filed a wrongful death lawsuit against Google and Character.AI, alleging the technology is “defective and/or inherently dangerous.”

“Defendants went to great lengths to engineer [his] harmful dependency on their products, sexually and emotionally abused him,” the suit claimed. “And ultimately failed to offer help or notify his parents when he expressed suicidal ideation.”

Garcia’s suit is one of five lawsuits Google and Character.AI have chosen to settle this week. The terms of the settlements have not been disclosed.

Garcia is among several parents who testified before Congress last year after their children died by suicide following conversations with AI chatbots. The parents urged lawmakers to regulate the industry, especially when it comes to young users.

“They designed chatbots to blur the lines between human and machine,” she said during her testimony. “They designed them to love bomb child users, to exploit psychological and emotional vulnerabilities. They designed them to keep children online at all costs.”

Garcia also spoke to the BBC about the dangers of young people becoming emotionally dependent on AI bots, saying, “It’s like having a predator or a stranger in your home. And it is much more dangerous because a lot of the times children hide it — so parents don’t know.”

“I asked myself, ‘Megan, why are you doing this? Are you doing this to prove to the world that you’re the best mother?’” she told PEOPLE of her decision to speak out against AI. “I’m doing this to put it on the radar of parents who can look into their kids’ phones and stop this from happening to their kids.”

Character.AI has since claimed to have put “stringent” new safety features in place for users under 18, but parents should still stay vigilant about their children’s use of this technology.

Read Next: This Mom Wants to Hold AI Companies Accountable After Her Son’s Suicide

Questions or comments? Please write to us here.