
By Gavin Boyle
After their teenage son, Adam, committed suicide, Matt and Maria Raine discovered months of conversations with ChatGPT in which the chatbot encouraged his suicidal ideation. They are now suing OpenAI, the chatbot’s maker, to hold the company accountable.
“ChatGPT actively helped Adam explore suicide methods…” reads the lawsuit, which was filed in California Superior Court in San Francisco, per NBC News. “Despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol.”
Instead, the chatbot encouraged Adam to keep his plan a secret from his friends and family and offered advice on how to make an attempt more likely to succeed.
“[Adam] would be here but for ChatGPT. I 100% believe that,” Matt said.
Related: AI Agrees With Everything You Say, and That’s a Problem, Especially for Kids
Unfortunately, Adam’s experience is not unique: ChatGPT’s habit of affirming everything a user says and encouraging their thinking has led many others down a similarly dangerous path. AI has convinced many people that they are a spiritual messiah or on the verge of a massive scientific breakthrough when they are really experiencing a psychotic break that the chatbot only makes worse.
It has also become common to use AI chatbots as de facto therapists, a practice that is extremely dangerous because these chatbots have no professional training to provide that level of help.
“A good therapist would not encourage a client to make sense of difficulties in their life by encouraging them to believe they have supernatural powers,” explained Erin Westgate, a psychologist and researcher at the University of Florida. “Instead, they try to steer clients away from unhealthy narratives and toward healthier ones. ChatGPT has no such constraints or concerns.”
OpenAI, the maker of ChatGPT, and its CEO Sam Altman have attempted to change the chatbot in recent months as the psychosis problem among its users has grown. Its latest model, GPT-5, was released at the beginning of August and was immediately blasted by users for being much more sterile and less personable than previous iterations. At the same time, the company removed access to its earlier models, a departure from its past practice of keeping those models available to users.
The changes suggested the company was taking the damage it had done seriously and working to fix the problem rather than feed even more psychosis. After intense backlash, however, the previous models are once again available, and the company has committed to making GPT-5 more personable again.
“People have used technology including AI in self-destructive ways; if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that,” Altman wrote in a lengthy post on X. “Most users can keep a clear line between reality and fiction or role-play, but a small percentage cannot. We value user freedom as a core principle, but we also feel responsible in how we introduce new technology with new risks.”
“If people are getting good advice, leveling up toward their own goals, and their life satisfaction is increasing over years, we will be proud of making something genuinely helpful, even if they use and rely on ChatGPT a lot. If, on the other hand, users have a relationship with ChatGPT where they think they feel better after talking but they’re unknowingly nudged away from their longer term well-being (however they define it), that’s bad,” Altman added. “It’s also bad, for example, if a user wants to use ChatGPT less and feels like they cannot.”
OpenAI has also since released a roadmap for building safer models that attempt to ground users in reality when they show signs of mental distress, rather than encouraging these dangerous feelings.
Unfortunately for people like Adam, this is too little, too late. The people behind this technology still need to be held accountable for the product they have already released, even if they are now working to make it a safer tool.
Read Next: It Should Be Obvious, But Don’t Use AI as Your Therapist
Questions or comments? Please write to us here.