Google Trains AI on YouTube Videos—What That Means for Creators

Image by chiplanay from Pixabay

By Gavin Boyle

A Google insider revealed that the company trains its AI video generator Veo on its catalog of 20 billion YouTube videos, something many creators don’t know.

“We’ve always used YouTube content to make our products better, and this hasn’t changed with the advent of AI,” a YouTube spokesperson told CNBC, confirming the company trains AI models on a subset of YouTube videos. “We also recognize the need for guardrails, which is why we’ve invested in robust protections that allow creators to protect their image and likeness in the AI era — something we’re committed to continuing.”

This news is alarming for many creators who — like many in the entertainment industry — worry about being replaced by AI. While some have embraced the technology and incorporate it into their videos, most believe it will lower the overall quality of content while also eating into their profits.

Nonetheless, Google has shown serious commitment to protecting its creators against AI; in April, it voiced support for the NO FAKES Act, a bill that would make the creation and spread of unauthorized AI deepfakes illegal.

“For nearly two decades, YouTube has been at the forefront of handling rights management at scale, and we understand the importance of collaborating with partners to tackle these issues proactively,” said Leslie Miller, the vice president of public policy at YouTube. “Now, we’re applying that expertise and dedication to partnership to ensure the responsible deployment of innovative AI tools.”

Related: This Big Tech Company Supports Ban of Unauthorized AI Deepfakes

“We thank Senators [Chris] Coons and [Marsha] Blackburn, and Representatives [Maria] Salazar and [Madeleine] Dean, for their leadership on the NO FAKES Act, which is consistent with our ongoing efforts to protect creators and viewers, and reflects our commitment to shaping a future where AI is used responsibly,” Miller continued.

Deepfakes have been a problem for celebrities for a long time, and AI has only exacerbated the issue. Stars like Scarlett Johansson, Taylor Swift and Katy Perry have all been targeted by them. First Lady Melania Trump has also voiced strong support for a similar bill, the Take It Down Act, which specifically targets deepfake pornography.

“In today’s AI-driven world, the threat of privacy breaches is alarmingly high,” Trump said. “As organizations harness the power of our data, the risk of unauthorized access and misuse of personal information escalates. We must prioritize robust security measures and uphold strict ethical standards to protect individual privacy.”

“This legislation is essential for addressing the growing concerns related to online safety, protecting individual rights, and promoting a healthier digital environment,” she added.

As Google furthers its AI capabilities, hopefully it will put its creators first and allow them to continue producing their high-quality independent content.

Read Next: Can Legislation Stop Deepfake Porn?

Questions or comments? Please write to us here.
