Celebrity Law Firm Announces Specialized Program to Fight Deepfakes
By Movieguide® Contributor
A top Hollywood law firm is launching a new program to combat the rising problem of deepfakes.
Venable LLP, whose clients include Taylor Swift and Peyton Manning, announced Takedown, a new program that “proactively identifies and removes illicit and unauthorized deepfake videos and images and pirated content online,” according to Variety.
“This is absolutely needed, especially for talent and high-profile individuals who are the first targets of threat actors,” Venable LLP partner Hemu Nigam explained. “With the current status quo, threat actors not only gain visibility but they also exploit the public who may be consuming [artificial] content without realizing they’re looking at an illicit deepfake video or image or a fake endorsement. So, this can be a double-edged sword with both the celebrity and the public becoming victims.”
Hollywood’s biggest names have been grappling with the issue of deepfakes, from faked explicit images of Taylor Swift that went viral on X to AI-generated audio clips that use Morgan Freeman’s voice.
Movieguide® previously reported on Swift’s deepfakes:
Explicit deepfakes of pop star Taylor Swift have been circulating the internet, calling attention to the dangers of deepfake pornography.
The Guardian describes deepfakes as “the 21st century’s answer to Photoshopping,” explaining that they “use a form of artificial intelligence called deep learning to make images of fake events, hence the name deepfake.”
The false, sexually explicit images of Swift went viral on X, formerly Twitter, last week, garnering over 27 million views and 260,000 likes within a span of 19 hours, per NBC.
“Since last Sunday, searches for ‘Taylor Swift’ on X have returned the error message, ‘Oops, something went wrong,’” CBS News said. “X blocked the search term after pledging to remove the deepfake AI-generated images from the platform and take ‘appropriate actions’ against accounts that shared them.”
“This is not a new phenomenon: deepfakes have been around for years. However, the rise of generative AI has made it easier than ever to create deepfake pornography and sexually harass people using AI-generated images and videos,” MIT noted.
“Taylor Swift’s viral deepfakes have put new momentum behind efforts to clamp down on deepfake porn,” the source continued. “The White House said the incident was ‘alarming’ and urged Congress to take legislative action.”
Earlier this year, The Hollywood Reporter wrote about a newly introduced bill that would prohibit the publication and distribution of unauthorized digital replicas.
The legislation, called the No AI Fraud Act, “is intended to give individuals the exclusive right to greenlight the use of their image, voice and visual likeness by conferring intellectual property rights at the federal level,” the outlet reported. “Under the bill, unauthorized uses would be subject to stiff penalties and lawsuits would be able to be brought by any person or group whose exclusive rights were impacted.”
While lawmakers are still working on solidifying legislation that would ban deepfakes, some celebrity victims are already taking matters into their own hands.
Scarlett Johansson lawyered up when OpenAI launched a new AI personal assistant feature that seemingly used an AI-generated version of her voice, even though she had already told the company she would not provide her voice for the feature.
“I was shocked, angered and in disbelief that [OpenAI CEO Sam Altman] would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference,” she said in a statement about the situation.
Johansson called on Altman to answer her questions about how he created the AI voice that sounded so similar to her own.
“In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity,” the actress said.