OpenAI’s Sora Claims Deepfake Safety — But Can It Deliver?


By India McCarty

Sora, OpenAI’s new video-generation app, is adding to the growing problem of “deepfake” video clips.

“The AI industry seems to move really quickly, and first-to-market appears to be the currency of the day (certainly over a contemplative, ethics-minded approach),” Kristelia García, an intellectual property law professor at Georgetown Law, told NPR. 

She pointed to the recent complaint from Dr. Martin Luther King Jr.’s estate after deepfake videos of the civil rights leader began circulating, saying OpenAI has an “asking forgiveness, not permission” policy when it comes to using the real-life images and voices of others.

OpenAI released a statement saying they are working with Dr. King’s estate to “address how Dr. Martin Luther King Jr.’s likeness is represented in Sora generations,” adding, “at King, Inc.’s request, OpenAI has paused generations depicting Dr. King as it strengthens guardrails for historical figures.”

Related: Here’s What You Need to Know About Deepfake AI Videos


OpenAI promised Sora has anti-impersonation safeguards, but Reality Defender, a company that specializes in identifying deepfakes, told Time it was able to bypass those safeguards within 24 hours, and that “any smart 10th grader” could do the same. 

Reality Defender CEO Ben Colman said the promised safeguards give users a “plausible sense of security,” even though “anybody can use completely off-the-shelf tools” to pass falsified videos off as the real thing. 

“Platforms absolutely know that this is happening, and absolutely know that they could solve it if they wanted to,” Colman said. “But until regulations catch up — we’re seeing the same thing across all social media platforms — they’ll do nothing.”

An OpenAI spokesperson responded to Reality Defender’s claims, saying, “The researchers built a sophisticated deepfake system of CEOs and entertainers to try to bypass those protections, and we’re continually strengthening Sora to make it more resilient against this kind of misuse.” 

OpenAI has emphasized its commitment to fighting deepfakes and to “responding expeditiously to any complaints it may receive.”

“OpenAI is deeply committed to protecting performers from the misappropriation of their voice and likeness,” OpenAI CEO Sam Altman said in a statement. “We were an early supporter of the NO FAKES Act when it was introduced last year, and will always stand behind the rights of performers.”

While many AI users are excited to start using Sora, Reality Defender’s findings are a warning that the video-generation app isn’t as safe as OpenAI claims.

Read Next: Could This New Bill End AI’s Use of Copyrighted Material?

