Here’s What You Need to Know About Deepfake AI Videos

Photo by Alex Ware

By Gavin Boyle

As AI-generated videos become increasingly realistic, many people now struggle to tell what is real from what is not, a major problem given how easily videos spread across social media.

“We’ll find trusting people falling victim to all kinds of scams, big, powerful companies exerting coercive pressures and nefarious actors undermining democratic processes,” Liam Mayes, a lecturer at Rice University’s program in media studies, predicts as the technology gets even better at producing realistic photos and videos. “We might see trust in all sorts of media establishments and institutions erode.”

Since OpenAI released its Sora 2 video-generation tool, the most powerful AI video-generation service available for free, on Sept. 30, people have used it to generate scores of misleading videos of historical figures. Sora 2-generated videos shared across social media include Carrie Fisher balancing on a slackline and Marilyn Monroe teaching Vietnamese schoolchildren.

Related: AI Deepfakes Impersonate High-Ranking Official—Here’s What You Need to Know

While these examples are harmless, they point toward a more dangerous use case in which bad actors could employ the technology to mislead people on important topics. In the lead-up to the 2024 election, many safety experts worried that AI-generated content would play a large part in spreading election misinformation. Thankfully, the technology was not yet at a level where it could regularly convince the average person, but some misinformation campaigns still succeeded because of the emerging technology.

Less than a week before the New Hampshire primary, for example, 20,000 residents received an AI-generated phone call meant to sound like Joe Biden discouraging them from showing up at the polls.

More recently, an AI-generated imitation of Secretary of State Marco Rubio's voice was used in an attempt to call high-ranking government officials and extract sensitive information from them.

“The actor left voicemails on Signal for at least two targeted individuals and in one instance, sent a text message inviting the individual to communicate on Signal,” the State Department said, adding that multiple other government officials were impersonated as well through email.

While Congress works to create legislation around AI and how it can be used, it falls largely on the corporations building the technology to ensure it does not harm society. OpenAI has incorporated multiple safeguards to identify when a video was made with Sora 2, including visible watermarks and metadata that can expose a video as a deepfake. The company also plans a system that lets people opt out of having their likeness generated by the tool.

Nonetheless, this technology opens the door to extreme misuse with potentially dire consequences. Hopefully, OpenAI and other AI companies will take safety seriously while Congress works to pass laws holding accountable those who play fast and loose with AI generation tools.

Read Next: Can You Believe Your Eyes? What to Know About Detecting Deepfakes   

