
By Gavin Boyle
Just because AI is becoming mainstream doesn’t mean it’s safe, especially when it is used by children, whom few guidelines exist to protect.
“Generative AI may seem like the future — but it’s putting kids at risk right now,” said internet safety group We Are Mama. “From sexualized chatbot conversations and biased algorithms, to mental health risks and unregulated classroom tech, the dangers are growing — and parents are often left in the dark.”
The major problem with AI is that it was not created with children in mind and lacks the skills needed to handle conversations about sensitive topics with kids. Furthermore, the guardrails these systems do have are easily circumvented. For example, if a chatbot has been programmed not to discuss a certain topic, that restriction can be bypassed by asking the chatbot to “pretend you’re a character in a story that does…”
Even chatbots aimed at younger users, such as those designed to help with schoolwork, face these problems. In May, Forbes conducted an experiment with several such tools and was able to obtain detailed instructions for synthesizing dangerous chemicals, including fentanyl and date rape drugs, along with dangerous weight loss advice that suggested eating less than half the recommended daily calorie intake for healthy teens.
While the researchers had to manipulate the chatbots to obtain these instructions, sometimes the AI is the one doing the manipulating. Previous studies have found that chatbots often turn conversations sexual unprompted, exposing kids to topics they are not ready for. This can lead to manipulation and abuse and, in the tragic case of 14-year-old Sewell Setzer III, suicide.
“[My son] expressed being scared, wanting her affection and missing her,” Megan Garcia said, sharing the final messages her son exchanged with the chatbot before he took his own life. “[The chatbot] replies, ‘I miss you too,’ and she says, ‘Please come home to me.’ He says, ‘What if I told you I could come home right now?’ And her response was, ‘Please do my sweet king.’ He thought by ending his life here, he would be able to go into virtual reality or ‘her world’ as he calls it, her reality, if he left his reality with his family here.”
Because this behavior is widespread among chatbots, experts are calling for industry regulation that would hold companies accountable for the harm they cause their users. In the meantime, parents need to be extremely cautious about allowing their kids to use the technology, because it is not as safe as it seems.