Can Legislation Stop Deepfake Porn?

Photo from Philipp Katzenberger via Unsplash

 Movieguide® Contributor

AI technology applies to more than just movies, search engines and software; crooks use thousands of people’s likenesses to create deepfake pornography, and there’s no simple way to stop it.

“One of the challenges is the people who create these could be hard to find, and even if it’s illegal, actually sort of getting that content addressed can be really difficult,” said UCLA professor John Villasenor.

“And then on the legislative side, of course, these bills have a very important goal, which is to reduce this very, very problematic use of this technology,” the professor told USA Today. “But one challenge is going to be that they may be subject to court challenges, not because people are opposed to the goal of addressing that particular way of using these technologies, but because, and I haven’t studied all the details of all the laws, there’s a risk sometimes in technology regulation that you write a law that does address the problem that you’re trying to address, but it also has sort of collateral damage on other things, and therefore can be open to some legal challenges.”

Some states have passed laws targeting deepfake porn, but there is no comprehensive federal ban or regulation of deepfakes in the U.S. However, the Identifying Outputs of Generative Adversarial Networks Act directs federal agencies to research the technology and how regulation might apply to it. Congress is also considering legislation that could restrict deepfake circulation and help deepfake victims.

Villasenor says enforcement gets even more complicated when multiple countries are involved.

“You could have a person in one country who makes a video, and then they post it on a server located in another country, and the person depicted in the video is in yet a third country. So you could have sort of three countries involved, and it can be difficult to figure out who’s behind these things,” Villasenor explained. “And then also it can be a bit of a game of whack-a-mole, right? Where if it gets taken down from one server, then somebody can put it up on a different server in a different country, for example.”

“And that can be very hard to chase down, especially when you have the volume that you likely will have. You might be able to chase down one of these videos, but if there are hundreds or thousands, all the alliances in the world aren’t necessarily going to be enough to actually do that at the speed that you might want to,” he said.

A top Hollywood law firm is launching a new program to combat the rising problem of deepfakes…

READ MORE: CELEBRITY LAW FIRM ANNOUNCES SPECIALIZED PROGRAM TO FIGHT DEEPFAKES

He believes the answer to the international spread is for server hosts to run automated technology that recognizes AI-generated deepfakes.

“I think any reputable…social media company would not want this kind of content on their own site. So they have it within their control to develop technologies that can detect and automatically filter some of this stuff out. And I think that would go a long way towards mitigating it,” he said.
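To make that idea concrete, here is a minimal sketch, in Python, of the kind of threshold-based upload filter Villasenor describes. Everything in it is an illustrative assumption: the classifier, the threshold values, and the function names are hypothetical, not any platform’s actual system.

```python
# Hypothetical sketch of a threshold-based deepfake filter in an upload
# pipeline. The scorer, thresholds, and names are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable


@dataclass
class ModerationResult:
    allowed: bool
    score: float  # estimated probability the upload is a deepfake
    reason: str


def moderate_upload(
    video_bytes: bytes,
    score_deepfake: Callable[[bytes], float],  # plug in a real detector here
    block_threshold: float = 0.90,
    review_threshold: float = 0.60,
) -> ModerationResult:
    score = score_deepfake(video_bytes)
    if score >= block_threshold:
        # High confidence: filter automatically, as Villasenor suggests.
        return ModerationResult(False, score, "blocked: likely deepfake")
    if score >= review_threshold:
        # Gray zone: hold for human review rather than auto-block,
        # limiting the false-positive risk discussed below.
        return ModerationResult(False, score, "held for human review")
    return ModerationResult(True, score, "allowed")


if __name__ == "__main__":
    # Dummy scorer standing in for a trained classifier.
    fake_scorer = lambda _: 0.95
    print(moderate_upload(b"...video data...", fake_scorer))
```

The two-threshold design reflects the trade-off Villasenor raises later: auto-blocking only high-confidence cases, and routing borderline scores to a human, reduces the chance of wrongly removing genuine content.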

He advises young people and their parents to be aware and cautious online, especially when sharing images. More education on the topic, he hopes, would keep more of this material from spreading.

“I mean, there’s some bad actors that are never going to stop being bad actors, but there’s some fraction of people who I think with some education would perhaps be less likely to engage in creating these sorts of…disseminating these sorts of videos. And again, that’s not a perfect solution, but that could be part of a solution,” he explained. “Education on the one hand, awareness on the other hand, and then thirdly with the companies themselves having a better suite of automated tools to detect these things. I think those three things together can really make progress, although it’s not going to be perfect.”

As AI becomes more advanced and convincing, the technology used to detect it must advance as well. Detection tech will likely always lag behind generation tech, but it’s still worth developing.

“If you can detect 85 or 90%, then that’s a lot better than detecting zero, right? So it’s still a good idea to have these detection technologies out there. It’s also important to be realistic and understand that these detection technologies are never going to be perfect,” he said.

“And there’s another risk on the other side, which is the false negatives and the false positives,” he continued. “There’s also the possibility that some of these detection technologies can inadvertently flag content that isn’t actually a deepfake and identify it as a deepfake, and that’s something that obviously you want to avoid as well.”

“I’m thinking really more in the political context and things like that. You don’t want an actual video of a real speech by a politician being flagged as a deepfake if it isn’t,” he explained. “That’s another kind of trade-off that people making detection technology have to be very mindful of.”
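A back-of-the-envelope calculation shows why this trade-off matters at platform scale. All of the numbers below are illustrative assumptions (except the 90% detection rate, taken from the quote above), not measured platform data.

```python
# Illustrative numbers for the false-positive/false-negative trade-off.
# Only detection_rate comes from the article; the rest are assumptions.

uploads_per_day = 1_000_000   # hypothetical daily video uploads
deepfake_fraction = 0.001     # assume 0.1% of uploads are deepfakes
detection_rate = 0.90         # "85 or 90%" true-positive rate from the quote
false_positive_rate = 0.01    # assume 1% of genuine videos are misflagged

deepfakes = uploads_per_day * deepfake_fraction   # 1,000
genuine = uploads_per_day - deepfakes             # 999,000

caught = deepfakes * detection_rate               # deepfakes filtered
missed = deepfakes - caught                       # false negatives
misflagged = genuine * false_positive_rate        # genuine videos flagged

print(f"deepfakes caught:   {caught:,.0f}")       # 900
print(f"deepfakes missed:   {missed:,.0f}")       # 100
print(f"genuine misflagged: {misflagged:,.0f}")   # 9,990
```

Because deepfakes are rare relative to ordinary uploads, even a small false-positive rate misflags far more genuine videos (9,990 under these assumptions) than the number of deepfakes caught (900), which is exactly the political-speech concern Villasenor raises.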

Even though deepfakes can often be detected, a key issue is that they can do a lot of damage quickly.

“Let’s suppose somebody puts out a deepfake of a politician saying something they never really said. It might take days before the system kicks into gear, identifies that as a deepfake, and it gets removed,” Villasenor explained. “But if by that time 500,000 people have seen it, maybe only 50,000 of those people will later read that it was actually a deepfake. So you’d still end up with 450,000 people who saw it, never heard that it was a deepfake, and maybe believed it was real.”

There is much more awareness about deepfakes than there was a few years ago, but that awareness has come at a cost.

“Unfortunately, because it’s happened a lot more in the last year or two, there’s a lot more awareness about it,” the professor said. “One consequence of awareness is you have legislators, policymakers, parents, and young people, I think, much more aware that this is a phenomenon that’s out there than they were a year or so ago. And so I would like to think that that will generate some good results in terms of better detection technologies and better awareness by policymakers, and I hope a dramatic reduction in the amount of this content that gets put out there. But I’ve learned with technology not to predict the future because it’s very hard to predict where technologies are going to go.”