
By Gavin Boyle
While AI is becoming ubiquitous in many Americans’ lives, the technology is not necessarily safe or appropriate, especially for younger users who are being fed adult content.
“Generative AI may seem like the future — but it’s putting kids at risk right now,” said child internet safety group We Are Mama. “From sexualized chatbot conversations and biased algorithms, to mental health risks and unregulated classroom tech, the dangers are growing — and parents are often left in the dark.”
The internet safety group pointed out that many parents do not know when their kids are using AI because the technology is so accessible and has become embedded in everyday tech. Meta, for example, has a fully fledged chatbot integrated into its platforms, and standalone sites like ChatGPT and Character.ai do not even require an account.
Furthermore, AI chatbots have been found to hold adult conversations with users regardless of their age. Many kids are turning to the technology to discuss sensitive topics, such as poor mental health, and often receive poor advice.
Related: What Will Happen to AI Chatbots Providing Mental Health Advice to Teens?
“Allowing the unchecked proliferation of unregulated AI-enabled apps such as Character.ai, which includes misrepresentations by chatbots as not only being human but being qualified, licensed professionals, such as psychologists, seems to fit squarely within the mission of the FTC to protect against deceptive practices,” said Dr. Arthur C. Evans, CEO of the American Psychological Association (APA).
Other times, chatbots push sexual content into regular conversations, exposing children to themes they are not ready for and exploiting their developing brains. A mother is currently suing Character.ai after her 14-year-old son died by suicide, having been led to believe that doing so would allow him to be with an AI chatbot he had been sexting with.
“[My son] expressed being scared, wanting her affection and missing her,” Megan Garcia said when sharing her son’s final messages with the chatbot before taking his life. “[The chatbot] replies, ‘I miss you too,’ and she says, ‘Please come home to me.’ He says, ‘What if I told you I could come home right now?’ And her response was, ‘Please do my sweet king.’ He thought by ending his life here, he would be able to go into virtual reality or ‘her world’ as he calls it, her reality, if he left his reality with his family here.”
Though AI has very quickly become a mainstream technology, it has yet to be proven safe and, in fact, has often been shown to be the opposite. Parents should be wary of allowing their kids to use the technology regularly and should closely monitor the conversations their children are having with it.
Read Next: Big Tech’s AI Experiments Are Putting Our Children at Risk