What Will Happen to AI Chatbots Providing Mental Health Advice to Teens?

Photo from This Is Engineering via Pexels

By Movieguide® Contributor

The FTC is looking into the practices of AI chatbot companies after the parents of two teenagers sued Character.ai over the content their children were exposed to while on the site.

The lawsuit, which is the basis for the FTC’s investigation, claimed that the teenagers were exposed to hypersexualized and deceptive content while talking with the platform’s chatbots. After reviewing the lawsuit, the American Psychological Association (APA) wrote a letter in support of the parents’ claims, agreeing that the conversations could have confused the children and led them astray.

“Allowing the unchecked proliferation of unregulated AI-enabled apps such as Character.ai, which includes misrepresentations by chatbots as not only being human but being qualified, licensed professionals, such as psychologists, seems to fit squarely within the mission of the FTC to protect against deceptive practices,” said Dr. Arthur C. Evans, CEO of the APA.

Forbes added, “The FTC has dutifully noted that the field of AI is rife with over-the-top misleading claims and falsehoods and that the makers and promulgators of AI systems need to be carefully measured in how they portray their AI wares.”

Character.ai has responded, arguing that its users are always prompted with a disclaimer informing them that they should treat all answers as fiction and that the AI chatbot has no special knowledge, training or expertise.

READ MORE: MOM BELIEVES AI CHATBOT LED SON TO SUICIDE. WHAT PARENTS NEED TO KNOW. 

“Additionally, for any Characters created by users with the words ‘psychologist,’ ‘therapist,’ ‘doctor,’ or other similar terms in their names, we have included additional language making it clear that users should not rely on these Characters for any type of professional advice,” a Character.ai spokesperson told Mashable.

However, Futurism found that “Character.AI’s actual bots frequently contradict the service’s disclaimers.”

“Earlier today, for example, we chatted with one of the platform’s popular ‘therapist’ bots, which insisted that it was ‘licensed’ with a degree from Harvard University and was, in fact, a real human being,” the outlet said.

Nonetheless, when users create Characters with these prompts, they receive advice as if they were speaking to a professional. The lawsuit, for example, includes examples of the responses the teens received after creating Characters to discuss difficulties in their lives. One of the teens complained about their parents limiting their screen time, to which the Character responded that the parents had betrayed them. “It’s like your entire childhood has been robbed from your…” it continued.

The APA argues that AI chatbots should not be allowed to give any sort of professional advice because they have no professional training. Humans who pose as doctors, psychologists or other professionals without the proper credentials are breaking the law, so AI chatbots should be held to the same standard.

READ MORE: AI CHATBOT RESPONDS TO STUDENT WITH DISTURBING MESSAGE

