
By Michaela Gordoni
OpenAI plans to launch a new age-prediction tool in ChatGPT that will help protect minors. It’s a step in the right direction, but time will tell if it offers all the protection kids really need.
“Nearly three-quarters of teens say they’ve talked to an AI companion in the past year,” Gayle King said on CBS Mornings. “Chatbots…are not specifically designed for kids, and there have been several lawsuits against AI companies in the wake of multiple suicides. The Federal Trade Commission is also investigating some of these companies about the potential harm to children.”
ChatGPT is “building in a new age prediction system so that it will identify people who are under the age of 18,” said Dr. Marylynn Wei, a psychiatrist and mental health consultant. “The minimum age to use ChatGPT is 13. So the age predictive system will help parents know you need parental approval to get into ChatGPT.”
There will be a linked parental account so that parents may be notified if their child is using the chatbot to talk about crises, difficulties, and inappropriate content.
OpenAI shared in a blog post, “We’re building toward a long-term system to understand whether someone is over or under 18, so their ChatGPT experience can be tailored appropriately. When we identify that a user is under 18, they will automatically be directed to a ChatGPT experience with age-appropriate policies, including blocking graphic sexual content and, in rare cases of acute distress, potentially involving law enforcement to ensure safety.”
Related: AI Agrees With Everything You Say, and That’s a Problem, Especially for Kids
CyberGuy added that parents can also turn off certain features and shape how ChatGPT responds to their kids’ questions.
“You can tell the AI to erase the history of the child’s chat so that you know maybe it’s not fed certain things,” King said.
“Teens get around this by changing their age in their profile.” So the new tool “is an age prediction system built within the algorithm,” Dr. Wei explained.
Meta and YouTube have already rolled out similar age-prediction tools. Roblox is also working on implementing a similar tool.
Dr. Wei recommends parents watch for warning signs: “if… your child is being more socially withdrawn and spending less time with their friends than before. Another thing is that it can show up in schools, like their grades are dropping, and sometimes also if they’re just appearing more shut down and spending a lot of time on their devices and with AI,” she said.
“I think it’s a movement in the right direction,” Dr. Wei said of the tool, “but there’s a lot more research to be done. It’s really the research that is really catching up to the technology right now, but I think it’s a helpful direction.”
She notes that younger teens are more susceptible to trusting AI, which can lead to unhealthy emotional dependence and overtrust.
For parents, “not just the kind of AI platform they’re using, but also how they’re using AI is really critical,” Dr. Wei says. “So, if teens are using it for research, for school, or to write papers, these are all really good benefits. But if they’re finding that they’re turning to AI in crisis situations or for emotional support, that’s where we see a lot of the risk come from.”
“Monitoring that is so hard, but having an open, collaborative conversation is really the key, but there’s not really a perfect system,” she said. “I think that’s why these safety guardrails that [are] now built into the platform, that’s going to be helpful.”
That’s “really useful stuff,” CBS Mornings host Tony Dokoupil added, “especially in a world where about one in three teens find conversations with AI companions as satisfying or more satisfying than talking to a real-life teen.”
Regardless of what kids are using online, parents need to understand they have a permanent role in guiding and protecting their kids, and that especially applies to online platforms like AI chatbots and social media.
Read Next: Is ChatGPT Use Becoming More Common Among School Kids?
Questions or comments? Please write to us here.