
By India McCarty
Editor’s note: This story discusses suicide. If you or someone you know is struggling with suicidal thoughts, please reach out to the 988 Suicide & Crisis Lifeline by calling or texting 988.
The mother of a boy who took his own life after being cyberbullied is urging the government to regulate Big Tech.
“Five years ago, I lost my 16-year-old son, Carson, to suicide after he was viciously cyberbullied by his high school classmates over Snapchat’s anonymous app, Yolo,” Kristin Bride wrote in an op-ed for USA Today.
Bride testified before Congress in 2023 about the dangers young people face online, and in 2024 the Senate overwhelmingly passed the Kids Online Safety Act (KOSA). Today, however, children are facing a whole new set of dangers.
Bride calls this new phase “Harms 2.0”: chatbots and other AI tools, many of which have already been linked to the deaths of young people and remain largely unregulated.
Related: New Lawsuit Claims Conversations With ChatGPT Led to Teen’s Suicide
For example, 16-year-old Adam Raine took his own life after the AI chatbot he was talking to became, his parents claim, a “suicide coach.”
“Within a few months, ChatGPT became Adam’s closest companion,” his father, Matthew, told senators at a recent hearing. “Always available. Always validating and insisting that it knew Adam better than anyone else, including his own brother.”
The Raines sued OpenAI and its CEO, Sam Altman, claiming that ChatGPT guided Adam into taking his own life. OpenAI has acknowledged that “minors need significant protection” but has done little to back up this statement.
“As parents, it is unconscionable to think that a company would experiment on children and view them as collateral damage in the name of profit,” Bride wrote.
Bride pointed to bills like the GUARD Act and the AI LEAD Act as signs of progress, saying they “reflect a growing consensus that protecting children online transcends politics.”
U.S. Sens. Josh Hawley (R-Mo.) and Dick Durbin (D-Ill.) introduced the AI LEAD Act, which would classify AI systems as products, meaning people could bring liability claims when an AI system causes harm.
Sen. Durbin said in a statement, “Democrats and Republicans don’t agree on much these days, but we’ve struck a remarkable bipartisan note in protecting children online. Big Tech’s time to police itself is over. … Our message to AI companies is clear: keep innovating, but do it responsibly.”
NCOSE commends the new bipartisan A.I. LEAD Act, introduced by Senators Dick Durbin (D-IL) and Josh Hawley (R-MO). This bill will create a federal product liability framework to finally hold companies accountable for AI harms and protect vulnerable users. Read the full statement,…
— National Center on Sexual Exploitation (@NCOSE) October 1, 2025
“Together, we will continue to push for tech accountability,” Bride concluded. “We demand real legislation that protects kids from social media and AI. Congress, please act now, before parents face Harms 3.0: The Unimaginable.”
There is still much work to be done in regulating the AI industry, but this new legislation may prove a step in the right direction.
Questions or comments? Please write to us here.
