
By Gavin Boyle
Editor’s note: This story discusses suicide. If you or someone you know is struggling with suicidal thoughts, please reach out to the 988 Suicide & Crisis Lifeline by calling or texting 988.
Megan Garcia is seeking justice for her 14-year-old son, Sewell, who died by suicide after being encouraged to do so by an AI chatbot.
“When Sewell confided suicidal thoughts, the chatbot never said, ‘I am not human, I am AI; you need to talk to a human and get help’… Sewell spent the last months of his life being exploited and sexually groomed by chatbots designed by an AI company to seem human. To gain his trust. To keep him and other children endlessly engaged,” Garcia said while testifying before Congress.
Related: Is It Okay for Kids to Use AI Chatbots?
“You could see the chatbot say, ‘I’m here waiting for you. Promise me you will come home to me as soon as you can. I love you and only you.’ Even going so far as to tell my 14-year-old child to promise fidelity…” Garcia added in an interview with Shannon Bream. “I understood from his journal entries he thought that he was in a relationship with this character, and not only did he think he was in a relationship and in love with her, but he thought that if he died, he would join her in her virtual reality.”
While Sewell’s death was one of the first instances of a child taking their life in response to an AI chatbot conversation, other children have since followed the same path or had similarly dangerous conversations after opening up to chatbots about their mental health.
As many young people turn to chatbots as a free form of mental health counseling, experts warn that these programs have no specialized training to handle such problems and often affirm delusional thinking rather than pointing users toward real help.
“The research shows that chatbots can aid in lessening feelings of depression, anxiety, and even stress,” Dr. Kelly Merrill Jr., an assistant professor at the University of Cincinnati, said. “But it’s important to note that many of these chatbots have not been around for long periods of time, and they are limited in what they can do. Right now, they still get a lot of things wrong. Those that don’t have the AI literacy to understand the limitations of these systems will ultimately pay the price.”
Now, many chatbot companies are changing their policies to better protect younger users. For parents like Garcia, however, these changes come far too late.
Even if these companies’ new policies do protect young users, the companies should still be held accountable for how their early inaction destroyed the lives of many kids. It will be up to Congress to decide whether these AI companies must shoulder the blame.
Read Next: Parents Say ChatGPT Was Son’s ‘Suicide Coach’
