
By India McCarty
The National Center on Sexual Exploitation (NCOSE) called out X (formerly Twitter) over an AI feature that allows users to virtually undress women online.
“New year, same Big Tech playbook. X is further normalizing the sexual exploitation of women with Grok’s new feature allowing pictures of real women to be undressed without their consent,” Dani Pinter, Chief Legal Officer and Director of the Law Center at NCOSE, said in a statement. “This is an egregious violation of women’s privacy and safety.”
When users post photos of themselves online, others on X can ask the platform's AI chatbot, Grok, to virtually undress them, creating explicit images.
She continued, “Additionally, Grok’s meager ‘apology’ tweet for reportedly generating virtual child sexual abuse material is an abysmal reaction to a serious, criminal act.”
Pinter said this "predictable and avoidable atrocity" stemmed from X's failure to "rigorously" police the Grok feature and ban users who request this kind of content.
“X’s actions are another example of why we need safeguards for AI products,” she continued. “Big Tech cannot be trusted to curb serious child exploitation issues it knows about within its own products. We implore X to take these issues seriously and commit actual resources to stop Grok’s child exploitation problems, and to stop enabling the sexual exploitation of women.”
Pinter concluded, “Our country’s leaders and laws must prioritize protecting people over products. Our lawmakers must pass reasonable AI regulations to ensure these products are developed and implemented safely.”
In a statement, X said it was "urgently fixing" the problem, adding, "There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing, like the example you referenced. xAI has safeguards, but improvements are ongoing to block such requests entirely."
David Thiel, a trust and safety researcher, told CNBC, "There are a number of things companies could do to prevent their AI tools being used in this manner. The most important in this case would be to remove the ability to alter user-uploaded images. Allowing users to alter uploaded imagery is a recipe for NCII [non-consensual intimate imagery]. Nudification has historically been the primary use case of such mechanisms."
There is no word yet on what permanent measures X will take to prevent this from happening again, but NCOSE's statement shows that advocates are paying close attention to this serious issue.