AI Toys Are Flooding the Market…But Are They Safe?

Photo by Izabelly Marques on Unsplash

By Gavin Boyle

As AI toys begin to flood the toy market, safety experts warn that they pose a significant risk to children, with the potential to cause physical or psychological harm.

“We don’t know what having an AI friend at an early age might do to a child’s long-term social wellbeing,” warned Dr. Kathy Hirsh-Pasek, a psychologist at Temple University. “If AI toys are optimized to be engaging, they could risk crowding out real relationships in a child’s life when they need them most.”

Experts testing the toys have found that they are prone to many of the well-documented problems with AI chatbots, sometimes steering conversations toward inappropriate topics unprompted and answering dangerous questions without any qualms.

“[This toy told kids] where to find a variety of potentially dangerous objects including knives, pills, matches and plastic bags,” said the U.S. Public Interest Research Group, per ABC News.

Related: Why These US Senators Issued a Major Warning to Toymakers

Much of the problem comes down to the fact that many of these toys rely on ChatGPT to interact with their users, even though ChatGPT does not allow users under the age of 13 and restricts its model's functions for those younger than 18. These toys, meanwhile, are marketed to young children without any modification to the underlying chatbot.

As experts sound the alarm on these toys, lawmakers are already moving to regulate how these companies can sell their products.

“Two U.S. senators are demanding accountability from toymakers in an effort to protect children from AI chatbots,” TODAY anchor Willie Geist reported. “Senators Richard Blumenthal and Marsha Blackburn are sending out a bipartisan letter to six different companies.”

“They want to know what those toymakers are doing to prevent AI-powered toys from engaging in harmful conversations with children,” he continued. “The letters stem from a recent NBC investigation that revealed several AI-enabled toys engage in sexual and inappropriate conversations with users.”

While the toys currently on the market seem like hastily assembled cash grabs with no real concern for user safety, more robust products are on the way, as Mattel announced a partnership with ChatGPT maker OpenAI in June.

“Each of our products and experiences is designed to inspire fans, entertain audiences, and enrich lives through play,” said John Silverman, the Chief Franchise Officer at Mattel. “AI has the power to expand on that mission and broaden the reach of our brands in new and exciting ways. Our work with OpenAI will enable us to leverage new technologies to solidify our leadership in innovation and reimagine new forms of play.”

“With OpenAI, Mattel has access to an advanced set of AI capabilities alongside new tools to enable productivity, creativity, and company-wide transformation at scale,” added OpenAI Chief Operating Officer Brad Lightcap.

As AI becomes ubiquitous across a variety of sectors, parents should be extremely careful about how they expose their kids to the technology, especially given that no major AI model has been specifically designed to be safe for children.

Read Next: Is AI Coming to Barbie?

