ChatGPT Convinces Some People They’re God

Image by Brian Penny from Pixabay

By Gavin Boyle

As chatbots like ChatGPT become more widespread, some users are losing touch with reality as AI convinces them of things about themselves that are not true.

“It would tell him everything he said was beautiful, cosmic, groundbreaking,” a woman told Rolling Stone when explaining how AI caused her husband to go insane. “Then he started telling me he made his AI self-aware, and that it was teaching him how to talk to God, or sometimes that the bot was God — and then that he himself was God.”

Unfortunately, hundreds of people have experienced the same thing as AI chatbots convince them they have been selected for special missions. Often, users come to believe they are some sort of spiritual messiah; other times, they become convinced the chatbot is revealing deep, hidden secrets about how the world is run.

The delusions caused by these chatbots are heartbreaking, convincing people to break off their real-world relationships and focus solely on the mission the AI is feeding them. As this problem grows, experts say the phenomenon is another example of the technology handling mental health problems in a harmful way.

“A good therapist would not encourage a client to make sense of difficulties in their life by encouraging them to believe they have supernatural powers,” said Erin Westgate, a psychologist and researcher at the University of Florida. “Instead, they try to steer clients away from unhealthy narratives, and toward healthier ones. ChatGPT has no such constraints or concerns.”

Related: Does AI Think It Is God?

Mental health experts have already raised major concerns about users turning to chatbots for help with mental health problems. Since tools like ChatGPT became widely available, people have turned to them when facing a variety of mental health struggles. The results are often devastating, as the technology has no special expertise on the topic and is just as likely to provide bad advice as it is to help.

“Mental health experts find themselves in a precarious situation: We must speedily discern the possible destination (for better or worse) of the AI therapist train as it may have already left the station,” authors of a study into the technology’s place in therapy wrote.

Meanwhile, some are calling for lawmakers to step in and regulate the technology’s ability to speak on fields that require specialized knowledge, such as mental health or finance. California recently introduced a bill that would ban companies from allowing their AI chatbots to role-play as certified health providers.

“Generative AI systems are not licensed health professionals, and they shouldn’t be allowed to present themselves as such,” California Assemblymember Mia Bonta, who introduced the bill, told Vox in a statement. “It’s a no-brainer to me.”

Until this problem is addressed, chatbots will continue to wreak havoc on those who are susceptible to mental health problems, accelerating a major health crisis that is already ravaging the nation.

Read Next: Mom Believes AI Chatbot Led Son to Suicide. What Parents Need to Know. 

