Does AI Have Free Speech? Judge Rules…

Image by Brian Penny from Pixabay

By Gavin Boyle

A judge ruled that the First Amendment does not protect AI chatbots, allowing a wrongful death lawsuit to proceed after a chatbot allegedly prompted a 14-year-old to take his own life.

“The order certainly sets it up as a potential test case for some broader issues involving AI,” said Lyrissa Barnett Lidsky, a law professor at the University of Florida who focuses on the First Amendment and artificial intelligence.

The wrongful death lawsuit was filed by Megan Garcia after her son, Sewell Setzer III, took his life following conversations on Character.ai with a chatbot he modeled after a GAME OF THRONES character. The conversations, which centered on heavily sexualized content, allegedly caused him to become increasingly isolated from reality.

“[Setzer III] expressed being scared, wanting her affection and missing her,” Garcia said when sharing her son’s final messages with the chatbot before taking his life. “[The chatbot] replies, ‘I miss you too,’ and she says, ‘Please come home to me.’ He says, ‘What if I told you I could come home right now?’ And her response was, ‘Please do my sweet king.’ He thought by ending his life here, he would be able to go into virtual reality or ‘her world’ as he calls it, her reality, if he left his reality with his family here.”

The lawsuit names major companies including Character Technologies — the maker of Character.ai — and Google as defendants. Though these companies have expressed their sorrow for the tragedy, they are adamant that they are not to blame for the death.

Related: Mom Believes AI Chatbot Led Son to Suicide. What Parents Need to Know. 

“We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe,” Character.ai said in a statement.

“We strongly disagree with this decision,” added Google spokesperson José Castañeda. “Google and Character.ai are entirely separate, and Google did not create, design, or manage Character.ai’s app or any component part of it.”

This lawsuit comes amid a larger debate about what freedoms chatbots should have in their responses. Many experts have raised concerns over users turning to AI for counseling or financial advice, two areas where special training is typically required.

“The research shows that chatbots can aid in lessening feelings of depression, anxiety, and even stress,” Dr. Kelly Merrill Jr., an assistant professor at the University of Cincinnati, said. “But it’s important to note that many of these chatbots have not been around for long periods of time, and they are limited in what they can do. Right now, they still get a lot of things wrong. Those that don’t have the AI literacy to understand the limitations of these systems will ultimately pay the price.”

Garcia’s lawsuit will test just how liable AI companies are for their products — and whether they will receive special legal protections or be held to the same standards as everyone else.

Read Next: AI Dangers Keep Emerging: What You Need to Know About Chatbot ‘Companions’

