
By Mallory Mattingly
Meta’s newest feature might be the answer for parents concerned about their teen’s AI use.
On Thursday last week, the tech giant announced that it had launched new ways to "support parents as they help their teens navigate AI." That includes "providing parents with insights into the topics their teen has been discussing with Meta’s AI assistant."
Any parents using “supervision on Facebook, Messenger or Instagram will now see a new Insights tab within supervision, both in-app and on web. From there, parents will be able to see the topics their teen has been asking Meta AI about in that specific app over the past week. Topics can range from School, Entertainment, and Lifestyle to Travel, Writing, and Health and Wellbeing, among others.”
Related: Man Dies After Going to ‘Meet’ AI Chatbot Created by Meta
This update comes as teens use AI chatbots ever more frequently. Recent research found that 64% of teens say AI chatbots are helpful, but there are major dangers, too. Another study found that 33% of teens “have relationships or friendships with AI companions.”
Tragically, these relationships have resulted in teen deaths as well, with Parents reporting that one teen died by suicide after a nearly year-long “virtual emotional and sexual relationship” with a Character.AI chatbot.
Meta took these concerns seriously in their update, saying that “for sensitive issues related to suicide and self-harm, we’re going further.”
“We recently announced that we’re developing new alerts to let parents know if their teen tries to engage in conversations related to suicide or self-harm with Meta AI — and we’ll have more to share on those alerts soon,” they said.
Parents and teens who are enrolled in supervision “will be notified that Instagram will start sending these new alerts to parents, based on their teens’ search activity. Attempted searches that would prompt the alert include phrases promoting suicide or self-harm, phrases that suggest a teen wants to harm themselves, and terms like ‘suicide’ or ‘self-harm.’”
The parent or guardian will receive an alert via email or text with a detailed “full-screen message” about what their teen has asked AI or tried to search on Instagram.
Hopefully this will decrease the risk of teens harming themselves through the use of or relationships with AI chatbots.
“When a young person searches about suicide or self-harm, empowering a parent to step in can be extremely important. The fact that Meta has now built this in is a meaningful step forward and is the kind of change that child safety experts have been pushing for,” Dr. Sameer Hinduja, Co-Director of the Cyberbullying Research Center, said in a Meta blog post.
“It’s vital that parents have the information they need to support their teens. This is a really important step that should help give parents greater peace of mind – if their teen is actively trying to look for this type of harmful content on Instagram, they’ll know about it,” Vicki Shotbolt, CEO of Parent Zone, added.
Meta seems to be taking teen safety seriously, but will it be enough? Getting parents more involved in their children’s AI use is certainly an important step in the right direction.
Read Next: Are AI Chatbot Companies Doing Enough to Protect Our Children?
Questions or comments? Please write to us here.