
Meta Needs to Fight Deepfake Porn. New Rules Outline How
By Movieguide® Contributor
AI is making it even easier to access and create pornography, as nude, AI-generated images of public figures proliferate across social media.
Meta’s Oversight Board recently updated its “rules on non-consensual deepfake intimate images” after explicit AI-generated images that “resemble” two female public figures — one from the United States and the other from India — were not immediately removed from the platform.
“Deepfake intimate images comprise synthetic media digitally manipulated to depict real people in a sexualized way,” the Meta Oversight Board explained. “It is becoming easier to create, with fewer pictures required to generate a realistic image. One report points to a 550% increase in online deepfake videos since 2019, the vast majority of which are sexualized depictions of real individuals and target women.”
Per The Hill, “Meta’s Oversight Board announced in April that it would review two cases about how Facebook and Instagram handled content containing AI-generated nude images of two famous women. The board ruled that both images violated Meta’s rule that bars ‘derogatory sexualized photoshop’ under its bullying and harassment policy.”
One of the images was taken down immediately, but the other remained on the platform far longer than it should have.
“The board said a user reported the image as pornography, but the report wasn’t reviewed within a 48-hour deadline, so it was automatically closed. The user filed an appeal to Meta, but that was also automatically closed,” AP News reported.
Meta’s Oversight Board stated that the “original decision to leave the content on Instagram was in error and the company removed the post for violating the Bullying and Harassment Community Standard.”
As a result, the Oversight Board recommended several steps Meta should take to keep this from happening to anyone else.
The steps include:
- Move the prohibition on “derogatory sexualized photoshop” into the Adult Sexual Exploitation Community Standard.
- Change the word “derogatory” in the prohibition on “derogatory sexualized photoshop” to “non-consensual.”
- Replace the word “photoshop” in the prohibition on “derogatory sexualized photoshop” with a more generalized term for manipulated media.
- Harmonize its policies on non-consensual content by adding a new signal for lack of consent in the Adult Sexual Exploitation policy: context that content is AI-generated or manipulated. For content with this specific context, the policy should also specify that it need not be “non-commercial or produced in a private setting” to be violating.
Movieguide® previously reported on the explicit deepfakes of pop star Taylor Swift that circulated on the internet earlier this year:
Explicit deepfakes of pop star Taylor Swift have been circulating on the internet, calling attention to the dangers of deepfake pornography.
The Guardian describes deepfakes as “the 21st century’s answer to Photoshopping,” explaining that they “use a form of artificial intelligence called deep learning to make images of fake events, hence the name deepfake.”
The false, sexually explicit images of Swift went viral on X, formerly Twitter, last week, garnering over 27 million views and 260,000 likes within a span of 19 hours, per NBC.
In response, X blocked searches for her name.