Disney Star Says This About Seeing ‘Terrifying’ Explicit AI Images of Herself
By Movieguide® Contributor
Actress Jenna Ortega is sharing her feelings about AI, which stem from an unpleasant discovery she made as a teenager.
Though she thinks artificial intelligence “could be used for incredible things,” Ortega told The Times, “I hate AI.”
“Did I like being 14 and making a Twitter account because I was supposed to and seeing dirty edited content of me as a child? No,” the Disney STUCK IN THE MIDDLE star said. “It’s terrifying. It’s corrupt. It’s wrong.”
Having social media at all was a problem, especially at her young age.
“The first direct message she ever opened on her Twitter account was an unsolicited picture of a man’s genitals,” NBC News reported.
“And that was just the beginning of what was to come,” she said. “I used to have that Twitter account, and I was told that, ‘Oh, you got to do it, you got to build your image.’”
The AI images and strange messages only got worse as she became more popular for her role in WEDNESDAY, so she decided to delete her Twitter (now X) account a few years ago.
“It was disgusting, and it made me feel bad. It made me feel uncomfortable,” she said. “Anyway, that’s why I deleted it, because I couldn’t say anything without seeing something like that. So one day I just woke up, and I thought, ‘Oh, I don’t need this anymore.’ So I dropped it.”
Meta previously ran ads for Perky AI, an app that lets users digitally dress or undress celebrities, including Ortega. One of the ads showed an image of her at age 16.
Meta suspended the app and took down the ads. Apple and Google also removed it from their app stores, but anyone who previously downloaded it can still access it.
Meta apologized for the ads and said in a statement, “Meta strictly prohibits child nudity, content that sexualises children, and services offering AI-generated non-consensual nude images.”
Ortega is just one of many deepfake and AI victims.
“Sophisticated apps and programs, which ‘undress,’ or ‘nudify,’ photos, and ‘face-swap’ tools that superimpose victims’ faces onto pornographic content have predominantly targeted women and girls,” NBC said.
“More nonconsensual sexually explicit deepfake videos were posted last year than in every other year combined, according to independent research from deepfake analyst Genevieve Oh and MyImageMyChoice, an advocacy group for deepfake victims. The research found that Ortega is among the 40 most-targeted celebrity women on the biggest deepfake website,” NBC reported.
Other actresses, along with members of the public, have been targeted as well. Xochitl Gomez, 17, found sexual deepfakes of herself on social media and has been unable to get them removed. Numerous deepfakes of Taylor Swift have also circulated.
Movieguide reported in January:
The false, sexually explicit images of Swift went viral on X, formerly Twitter, last week, garnering over 27 million views and 260,000 likes within a span of 19 hours, per NBC.
X blocked her name from its search engine.
“Since last Sunday, searches for ‘Taylor Swift’ on X have returned the error message, ‘Oops, something went wrong,’” CBS News said. “X blocked the search term after pledging to remove the deepfake AI-generated images from the platform and take ‘appropriate actions’ against accounts that shared them.”
The creators of such content are sometimes even children themselves.
NBC said, “A middle school in California expelled five students in March after they were accused of using generative AI to create and share fake nude images of their classmates — stoking fear among families within the school district. Nude AI-generated deepfakes of students at a New Jersey high school similarly sparked turmoil last year.”