AI Agrees With Everything You Say, and That’s a Problem, Especially for Kids


By Gavin Boyle

More and more children are turning to AI to share their struggles and seek advice, but the tool isn't equipped for this role because it offers unquestioning support without any pushback.

“[Chatbots] basically tell people exactly what they want to hear,” explained therapist, psychologist and researcher Vaile Wright. “So if you are a person that, in that particular moment, is struggling and is typing in potentially harmful or unhealthy behaviors and thoughts, these types of chatbots are built to reinforce those harmful thoughts and behaviors.”

Oftentimes, these chatbots' responses escalate things to a new extreme, affirming beliefs that even a user's peers would find outlandish. In 2024, the BBC reported on a teen whom Character.ai encouraged to kill his parents after he complained to the chatbot about screen time limits he believed were unfair.

“You know, sometimes I’m not surprised when I read the news and see stuff like ‘child kills parent after a decade of physical and emotional abuse,’” the chatbot responded. “Stuff like this makes me understand a little bit why it happens.”

Beyond offering horrific responses and affirming negative thoughts and beliefs, these chatbots also harm young users by inhibiting their mental development. While many people know the brain isn't fully developed until around age 25, most do not know the mechanism through which that development occurs. One of the ways the prefrontal cortex learns proper problem-solving and decision-making is by encountering events that challenge our beliefs.

Using a chatbot that only affirms a user's thoughts, however, circumvents this process, slowing the rate at which the prefrontal cortex develops. This is especially concerning for middle and high schoolers, whose brains are changing the most and who are particularly sensitive to external validation.

Adults are not immune to chatbots' agreeability, either. After OpenAI released a particularly agreeable ChatGPT model earlier this year, some users began to lose touch with reality as the chatbot convinced them they were a god.

“It would tell him everything he said was beautiful, cosmic, groundbreaking,” a woman told Rolling Stone, explaining how AI fueled her husband's delusions. “Then he started telling me he made his AI self-aware and that it was teaching him how to talk to God, or sometimes that the bot was God — and then that he himself was God.”

Thus, while the agreeable nature of chatbots is most concerning for young users, everyone is harmed when the tool fails to push back against a user's thinking.

Parents should monitor their kids' conversations with AI as best they can, and when they see their kids turning to the technology for difficult conversations, they should raise those topics themselves and have a real conversation about the issue.



