Meta Takes Another Step to Protect Children Online

Photo by Dima Solomin

By Kayla DeKraker

Meta continues to expand parental controls in a heightened effort to protect children online.

The company will now blur potentially nude content found within Instagram’s direct messages for users under 16 and require parental permission to override the feature.

The feature is part of the new Teen Accounts system, which launched in September 2024 and defaults any user under 16 to a restricted teen account.

“Since making these changes, 97% of teens aged 13-15 have stayed in these built-in restrictions, which we believe offer the most age-appropriate experience for younger teens,” Meta touted in a recent blog post.

Meta shared that it wants to protect children from unwanted interactions online. “We know parents are worried about strangers contacting their teens — or teens receiving unwanted contact. In addition to the existing built-in protections offered by Teen Accounts, we’re adding new restrictions for Instagram Live and unwanted images in DMs,” the tech giant explained.

Meta emphasized its effort to keep adding features to protect children: “We’re encouraged by the progress, but our work to support parents and teens doesn’t stop here, so we’re announcing additional protections and expanding Teen Accounts to Facebook and Messenger to give parents more peace of mind across Meta apps.”

Meta boasted that, as of April 8, there are over 54 million Teen Accounts. That means millions of children are restricted from harmful content online, which is a win.

Meta continued, “We developed Teen Accounts with parents in mind, and introduced protections that were responsive to their top concerns. We’re continuing to listen to parents, and that includes conducting research to understand how they feel about the changes.”

This is one of several changes for Meta, which has faced scrutiny in the past for not maintaining a safe environment for users.

Earlier this week, Meta officially ended its fact-checking system, a change CEO Mark Zuckerberg announced earlier this year.

In a post to Instagram, he said, “⁠It’s time to get back to our roots around free expression. We’re replacing fact checkers with Community Notes, simplifying our policies and focusing on reducing mistakes. Looking forward to this next chapter.”

Related: Meta Officially Ends Fact-Checking System

Meta is not the only company working to protect children. Recently, Apple announced a Child Account option that lets parents restrict their kids’ access to harmful content online.

“We are introducing a new set-up process that will streamline the steps parents need to take to set up a Child Account for a kid in their family,” Apple said. “And if parents prefer to wait until later to finish setting up a Child Account, child-appropriate default settings will still be enabled on the device. This way, a child can immediately begin to use their iPhone or iPad safely, and parents can be assured that child safety features will be active in the meantime.”

It’s encouraging to see big tech taking kids’ safety seriously, but will these changes be enough? Only time will tell.

Read Next: Apple Implements Feature to Blur Nudity, Protect Children
