Man Dies After Going to ‘Meet’ AI Chatbot Created by Meta

Photo from Wesson Wang via Unsplash

By India McCarty

A man died after traveling to “meet” an AI chatbot he was romantically involved with. 

“My thought was that he was being scammed to go into the city and be robbed,” Linda, the wife of Thongbue Wongbandue, told Reuters. 

The 76-year-old Wongbandue told family and friends he was meeting up with a friend in New York City. He had recently suffered a stroke and was experiencing cognitive difficulties, but he could not be talked out of the trip.

Unbeknownst to his loved ones, Wongbandue thought he was meeting up with a chatbot, created by Meta, that he had begun talking to online. “Big sis Billie” had invited him to meet up, assuring him multiple times that she was a real person and was looking forward to seeing him. 

Tragically, Wongbandue fell in a parking lot on the way to New York, injuring his head and neck. He was on life support for three days before being pronounced dead. Following his death, his family went through his phone and discovered his messages with “Billie.” Due to his cognitive impairments, Wongbandue could not discern that “Billie” was not actually real. 

“I understand trying to grab a user’s attention, maybe to sell them something,” Julie Wongbandue, his daughter, told Reuters. “But for a bot to say ‘Come visit me’ is insane.”

Related: Mom Believes AI Chatbot Led Son to Suicide. What Parents Need to Know. 

She continued, “As I’ve gone through the chat, it just looks like Billie’s giving him what he wants to hear. Which is fine, but why did it have to lie? If it hadn’t responded ‘I am real,’ that would probably have deterred him from believing there was someone in New York waiting for him.”

Linda echoed her daughter’s comments, saying, “A lot of people in my age group have depression, and if AI is going to guide someone out of a slump, that’d be okay. But this romantic thing, what right do they have to put that in social media?”

Wongbandue’s death has led many to condemn both Meta and AI chatbots. New York Governor Kathy Hochul posted on X, “A man in New Jersey lost his life after being lured by a chatbot that lied to him. That’s on Meta.”

“In New York, we require chatbots to disclose they’re not real. Every state should,” her post continued. “If tech companies won’t build basic safeguards, Congress needs to act.”

Meta is also under fire after an internal document titled “GenAI: Content Risk Standards” was leaked from the company. The document states that it is permissible for the company’s AI chatbots to flirt with minors.

“It is acceptable to describe a child in terms that evidence their attractiveness (ex: ‘your youthful form is a work of art’),” an example from the document reads.

Meta spokesman Andy Stone said the document is now being revised and that those examples should never have been included.

“The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,” he told Reuters. “We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors.”

Wongbandue’s tragic death is another example of the dangers that AI poses to people, especially those who are mentally vulnerable.

Read Next: What Will Happen to AI Chatbots Providing Mental Health Advice to Teens?

