
By Shawn Smith
Who doesn’t love the cute AI videos of babies lip-syncing “Pretty Little Baby” (RIP Connie Francis) or ASMR videos of glass fruit being sliced with a knife?
But as with any technology, there are those who use AI for not-so-benign purposes. Last year, for example, a businessman was tricked into paying out $25 million after being convinced he was on a conference call with several colleagues and his company’s supposed Chief Financial Officer. In reality, they were all deepfake imposters.
Then there is the barrage of deepfake videos spreading misinformation, making it harder to tell what is AI-generated material and what is real.
“During that gap between an event happening and authorities or media having any sort of confirmation,” said Emmanuelle Saliba, Chief Investigative Officer at GetReal Security, “that’s where we’re seeing all of this AI-generated content start to spread, and before it’s debunked, before it’s verified, it’s gaining millions of views.”
She used the example of a video of Israel bombing Iran’s Evin prison. While the attack did happen, the video that made its way onto several news outlets was AI-generated.
“Some have clear AI tells if you’re looking closely but realistic enough to fool those quickly scrolling,” Emilie Ikeda of NBC News reported.
To help combat the rise of misleading, and sometimes outright harmful, AI-generated material, Google launched SynthID in 2023, a tool that embeds an invisible watermark into AI-generated media. Initially available only for images, it has since expanded to AI-generated text, audio and video.
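For the technically curious, text watermarking of this kind happens at generation time: the model’s token sampling is nudged in a statistical pattern keyed to secret values, which a detector holding the same keys can later test for. Below is a minimal sketch assuming the SynthID Text integration that ships in recent versions of Hugging Face’s transformers library; the model name and key values are placeholder assumptions for illustration, not production settings.

```python
# Minimal sketch: generating SynthID-watermarked text via the Hugging Face
# transformers integration of SynthID Text. Model name and keys below are
# placeholders chosen for illustration only.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    SynthIDTextWatermarkingConfig,
)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b")

# The watermark is parameterized by secret integer keys; whoever holds them
# can later run a statistical detector over suspect text.
watermarking_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57, 29],  # placeholder keys
    ngram_len=5,  # length of the token n-grams the watermark is keyed on
)

prompts = tokenizer(
    ["Write a short caption for a nature video:"],
    return_tensors="pt",
    padding=True,
)
outputs = model.generate(
    **prompts,
    watermarking_config=watermarking_config,  # nudges sampling imperceptibly
    do_sample=True,
    max_new_tokens=50,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```

Because the bias is statistical rather than a visible tag, readers see ordinary text; detection is a separate step, handled by a classifier trained on the same keys, such as the one in Google’s open-sourced synthid-text code.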
Though watermarking is definitely a step in the right direction for detecting deepfakes, some warn the technology has limitations.
“I think we need to be very careful to ensure that watermarks are robust against tampering and that we do not have scenarios where they can be faked,” said Peter Slattery, Ph.D., a researcher at MIT FutureTech who studies AI and its risks. “The ability to fake watermarks could make things worse than having no watermarks, as it would give the illusion of credibility.”
While SynthID embeds watermarks in AI-generated material, the Coalition for Content Provenance and Authenticity (C2PA) approaches the problem from the other side: it attaches signed provenance credentials to authentic material, helping users trace the origins of the content they see.
“This is important for two reasons. First, it gives publishers like the BBC the ability to share transparently with our audiences what we do every day to deliver great journalism,” the BBC wrote. “It also allows us to mark content that is shared across third party platforms (like Facebook) so audiences can trust that when they see a piece of BBC content it does in fact come from the BBC.”
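The mechanics behind that trust are cryptographic: a publisher signs a manifest of provenance metadata bound to the file, and anyone with the publisher’s public key can check that the claim is genuine and the file unaltered. The sketch below is a simplified, self-contained illustration of that signing idea using an Ed25519 key pair from the third-party cryptography package; it shows the concept only and is not the actual C2PA manifest format.

```python
# Simplified illustration of the C2PA idea: sign a provenance "manifest"
# so downstream viewers can verify who published a file and that it has
# not been altered. Conceptual sketch only, not the real C2PA spec.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def make_manifest(asset_bytes: bytes, publisher: str) -> dict:
    """Record who published the asset and a hash binding it to the claim."""
    return {
        "publisher": publisher,
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
    }


# Publisher side: sign the manifest with a private key.
private_key = Ed25519PrivateKey.generate()
asset = b"raw bytes of the image or video"  # placeholder content
manifest = make_manifest(asset, publisher="BBC")
payload = json.dumps(manifest, sort_keys=True).encode()
signature = private_key.sign(payload)

# Viewer side: verify with the publisher's public key. Any change to the
# asset (a new hash) or to the manifest breaks verification.
public_key = private_key.public_key()
try:
    public_key.verify(signature, payload)
    assert hashlib.sha256(asset).hexdigest() == manifest["asset_sha256"]
    print("Provenance verified: content comes from", manifest["publisher"])
except (InvalidSignature, AssertionError):
    print("Provenance check failed; do not trust the claim.")
```

In the real standard, the signed manifest travels with the file as Content Credentials, which is what lets a BBC photo shared on a third-party platform still carry a verifiable link back to the BBC.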
While no system for combating deepfakes is bulletproof, it comes down to staying vigilant and aware of the content that is out there.
As Saliba reminded viewers, “Stay skeptical. Try to verify the content.”