AI Is Training on Images of Your Children
By Movieguide® Contributor
Last month, Wired exposed an AI image tool that used images of children without their consent — a situation that highlights the growing need for stronger regulations on AI companies.
“So AI and all of these technologies could be the best things that happen to humanity, helping us solve all of these kinds of really difficult problems, and it could be the worst thing that happens to us. It could put our children at risk, it could put the world at risk…and now’s the time when we need to navigate toward those better outcomes,” AI expert Jamie Metzl told Fox News.
Wired’s article explained that over 170 images of children were “repurposed” and used to train AI, per a report from Human Rights Watch.
“Their privacy is violated in the first instance when their photo is scraped and swept into these datasets. And then these AI tools are trained on this data and therefore can create realistic imagery of children,” said Hye Jung Han, children’s rights and technology researcher at Human Rights Watch. “The technology is developed in such a way that any child who has any photo or video of themselves online is now at risk because any malicious actor could take that photo, and then use these tools to manipulate them however they want.”
Situations like this are only likely to increase as AI becomes even more prevalent.
“We have these big AI systems, these large learning models, and the way they are learning is by collecting all of the digital information they can grab, and the more information, the more data they can grab, the smarter they are going to become, and this means they are going to swallow up the internet,” Metzl added. “That’s why we need to have the right kind of governance and regulations to protect us; it’s not that we should just let these companies run wild. There need to be regulations, and those regulations are not in place.”
Thankfully, many experts in the AI field agree with Metzl. Last summer, leading AI figures, including OpenAI CEO Sam Altman, testified before Congress about the need for strong regulations on the industry.
In response, President Biden signed an executive order in October 2023 to protect Americans from the spread of AI-generated deepfakes and misinformation, while funding government agencies to develop a strong defense against the negative aspects of the technology.
While the executive order was a step in the right direction, it is clear that more work needs to be done to steward the development of the technology in a positive direction. This is especially true for its use in creating deepfakes, which are being used to blackmail victims.
Movieguide® previously reported:
AI is making it easier for child predators to threaten and abuse children while simultaneously making it harder for law enforcement to identify bad actors.
“The use of AI for child sexual abuse will make it harder for us to identify real children who need protecting and further normalize abuse,” Britain’s National Crime Agency (NCA) Director General Graeme Biggar said in a recent speech. “And that matters because we assess that the viewing of these images, whether real or AI-generated, materially increases the risk of offenders moving on to sexually abusing children themselves.”
“There is also no doubt that our work is being made harder as major technology companies develop their services, including rolling out end-to-end encryption, in a way that they know will make it more difficult for law enforcement to detect and investigate crime and protect children,” he continued.
Using generative AI, sexual predators can now create hyper-realistic images and videos of child sexual abuse, which can then be easily accessed and shared.