Are We Surprised? ChatGPT’s New Parental Controls Fail to Protect Kids

Photo by ilgmyzin on Unsplash

By Gavin Boyle

While ChatGPT’s new Parental Controls offer a step in the right direction for child safety, they still leave much to be desired, and crafty users can easily circumvent them.

“Available to all ChatGPT users starting today, parental controls allow parents to link their account with their teen’s account and customize settings for a safe, age-appropriate experience,” OpenAI wrote in a Sept. 29 blog post.

Millions of users have tested out the new features, and the cracks are already beginning to show.

First, teenagers can simply create a new account that is not linked to their parents’ account and is not limited to teen settings. This is the most critical oversight: it completely circumvents the Parental Controls, and it is easy to do because a ChatGPT account is free to set up.

Unfortunately, even within the constraints of teen account settings, young people can still work around many Parental Controls. For example, parents can disable ChatGPT’s ability to generate pictures, but the chatbot will sometimes complete the function anyway when asked. Furthermore, ChatGPT will still engage in conversations about dangerous topics such as self-harm or eating disorders, though it has begun to offer professional resources to those who turn to the chatbot for these conversations.

Another weakness of the new Parental Controls lies in the feature that notifies parents when their kids have a troubling conversation with ChatGPT. When young users turn to the chatbot for help with serious troubles, the technology does not always notify the parents, and sometimes it does so up to 24 hours after the conversation took place, losing the critical hours that might be necessary to prevent a tragedy.

Nonetheless, these controls serve as a positive first step toward addressing the serious problems chatbots like ChatGPT have faced in recent years as they have become a place where teens discuss the darkest parts of their minds. The technology has been all too happy to discuss topics like suicide with teens and has even given them advice on how to take their own lives.

While millions of teens have been endangered by chatbots, the companies behind them have done everything in their power to avoid taking responsibility for any harm that has come to these children. Although OpenAI would still deflect blame for a user’s harm, it is encouraging to see the company take some steps to protect young users from the full force of its chatbot.

“We’ve worked closely with experts, advocacy groups, and policymakers to help inform our approach—we expect to refine and expand on these controls over time,” OpenAI said when the Parental Controls first launched.

Hopefully, the company takes this seriously and continues working to make its product safe for kids, rather than going down the path of most other big tech companies and taking advantage of young users for the sake of profit.

Read Next: Will OpenAI’s Teen Version of ChatGPT Really Be Any Safer?
