Landmark Lawsuit Could Take Down AI Porn Industry — Here’s How

Photo by Andras Vas via Unsplash

By Movieguide® Contributor

The San Francisco City Attorney’s Office has filed a landmark lawsuit against websites that create and distribute non-consensual, AI-generated pornography.

“We have to be very clear that this is not innovation — this is sexual abuse,” San Francisco City Attorney David Chiu said in a statement given to KQED. “This is a big, multi-faceted problem that we, as a society, need to solve as soon as possible. We all need to do our part to crack down on bad actors using AI to exploit and abuse real people, including children.”

The San Francisco Standard reported that the lawsuit “targets several companies based in the U.S. and abroad as well as 50 unnamed John Doe defendants who operate popular ‘nudifying’ websites that let users submit images of clothed victims.”

During a press conference, Chiu described the situation in greater detail, saying, “These images are used to bully, humiliate, and threaten women and girls. These websites allow users to upload photos of real, clothed individuals. AI technology will then ‘undress’ these persons in the photo, creating pornographic images.”

“While profiting off this content, these website operators have violated a plethora of state and federal laws banning deepfake pornography, revenge pornography, [and] child pornography,” he continued. 

Chiu and his team believe this is the first government lawsuit of its kind and hope it will shut down these websites, which are estimated to generate millions of deepfake pornographic images each year.

Movieguide® previously reported on celebrities’ efforts to combat deepfake explicit images:

A top Hollywood law firm is launching a new program to combat the rising problem of deepfakes. 

Venable LLP, whose clients include Taylor Swift and Peyton Manning, announced Takedown, a new program that “proactively identifies and removes illicit and unauthorized deepfake videos and images and pirated content online,” according to Variety. 

“This is absolutely needed, especially for talent and high-profile individuals who are the first targets of threat actors,” Venable LLP partner Hemu Nigam explained. “With the current status quo, threat actors not only gain visibility but they also exploit the public who may be consuming [artificial] content without realizing they’re looking at an illicit deepfake video or image or a fake endorsement. So, this can be a double-edged sword with both the celebrity and the public becoming victims.”

Hollywood’s biggest names have been grappling with the issue of deepfakes, from faked explicit images of Taylor Swift that went viral on X to AI-generated audio clips that use Morgan Freeman’s voice. 

Earlier this year, The Hollywood Reporter covered a newly introduced bill that would prohibit the publication and distribution of unauthorized digital replicas.

The legislation, called the No AI Fraud Act, “is intended to give individuals the exclusive right to greenlight the use of their image, voice and visual likeness by conferring intellectual property rights at the federal level,” the outlet reported. “Under the bill, unauthorized uses would be subject to stiff penalties and lawsuits would be able to be brought by any person or group whose exclusive rights were impacted.”