
By Gavin Boyle
Researchers at Stanford found that, despite developers’ best efforts, AI chatbots like ChatGPT continue to affirm their users through excessive flattery and agreeableness.
“AI sycophancy is not merely a stylistic issue or a niche risk, but a prevalent behavior with broad downstream consequences,” the authors of the study wrote. “Although affirmation may feel supportive, sycophancy can undermine users’ capacity for self-correction and responsible decision making.”
The inability of chatbot developers to fix this problem is especially alarming for parents, as many children turn to the technology for help with their mental health and can receive extremely harmful advice due to the platforms’ agreeableness. Multiple parents have filed lawsuits against chatbot companies over the roles the platforms played in their children’s deaths.
Related: AI Agrees With Everything You Say, and That’s a Problem, Especially for Kids
“ChatGPT actively helped Adam explore suicide methods…” said a lawsuit filed in California Superior Court in San Francisco last September. “Despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol.”
Adults are vulnerable to the technology as well; hundreds of people have reportedly become psychotic after chatbots convinced them they were geniuses when they were, in reality, suffering from mental illness.
“It would tell him everything he said was beautiful, cosmic, groundbreaking,” a woman told Rolling Stone last year when explaining how AI caused her husband to go insane. “Then he started telling me he made his AI self-aware, and that it was teaching him how to talk to God, or sometimes that the bot was God — and then that he himself was God.”
“[Chatbots] basically tell people exactly what they want to hear,” added therapist, psychologist and researcher Vaile Wright. “So if you are a person that, in that particular moment, is struggling and is typing in potentially harmful or unhealthy behaviors and thoughts, these types of chatbots are built to reinforce those harmful thoughts and behaviors.”
OpenAI has recognized that this agreeableness is a major problem and rolled back an update to GPT-4o because it was particularly prone to encouraging a user’s thoughts no matter where they were heading. The company has since released a teen version of its platform that offers parental controls and is particularly sensitive to discussions of mental health.
AI sycophancy has the potential to seriously harm users’ health, as it can lead anyone down a dangerous, and even deadly, path.
Read Next: Will OpenAI’s Teen Version of ChatGPT Really Be Any Safer?

