Will OpenAI’s Teen Version of ChatGPT Really Be Any Safer?

Photo by ilgmyzin on Unsplash

By India McCarty

In an effort to safeguard young people using AI, ChatGPT parent company OpenAI announced plans to create a teen-friendly version of the chatbot. 

“We prioritize safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection,” OpenAI CEO Sam Altman wrote in a blog post to the company’s website. 

He shared that they will separate users who are under 13 by using “an age-prediction system to estimate age based on how people use ChatGPT.”

“If there is doubt, we’ll play it safe and default to the under-18 experience,” he wrote. “In some cases or countries we may also ask for an ID; we know this is a privacy compromise for adults but believe it is a worthy tradeoff.”

Related: ChatGPT Creator Sam Altman Appears Before Congress, Urges Lawmakers to Regulate the Industry

Altman also explained that different rules will be put in place for chatbots that engage with young people. 

“ChatGPT will be trained not to do the above-mentioned flirtatious talk if asked, or engage in discussions about suicide or self-harm even in a creative writing setting,” he wrote. “And, if an under-18 user is having suicidal ideation, we will attempt to contact the user’s parents and if unable, will contact the authorities in case of imminent harm.”

Altman concluded, “These are difficult decisions, but after talking with experts, this is what we think is best and want to be transparent in our intentions.”

These new AI guidelines for teens come after the Federal Trade Commission launched an inquiry into several tech companies, including OpenAI, to look into how they affect young people. 

“Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy,” FTC Chairman Andrew Ferguson said in a statement.

A spokesperson for OpenAI responded, saying, “Our priority is making ChatGPT helpful and safe for everyone, and we know safety matters above all else when young people are involved. We recognize the FTC has open questions and concerns, and we’re committed to engaging constructively and responding to them directly.”

Several parents have also sued another AI company, Character Technologies, Inc., over its Character.AI bot, which they claim encouraged their children to attempt or die by suicide. 

“I had no idea the psychological harm that an AI chatbot could do until I saw it in my son, and I saw his light turn dark,” a mother, identified as “Jane Doe,” said during a hearing on Capitol Hill. 

While OpenAI is attempting to make its programs safer for young people, the technology still poses dangers for kids and adults alike. It’s important to be aware of what your teen is looking at online and to have conversations about how to stay safe on the internet. 

Read Next: Is ChatGPT Responsible for This Teen’s Death? His Parents Think So

