28 Advocacy Groups Call On Apple And Google To Ban Grok, X Over... (2026)

Elon Musk isn't the only party at fault for Grok's nonconsensual intimate deepfakes of real people, including children. What about Apple and Google? The two (frequently virtue-signaling) companies have inexplicably allowed Grok and X to remain in their app stores — even as Musk's chatbot reportedly continues to produce the material. On Wednesday, a coalition of women's and progressive advocacy groups called on Tim Cook and Sundar Pichai to uphold their own rules and remove the apps.

The open letters to Apple and Google were signed by 28 groups. Among them are the women’s advocacy group Ultraviolet, the parents’ group ParentsTogether Action and the National Organization for Women.

The letters accuse Apple and Google of "not just enabling NCII and CSAM, but profiting off of it." The Apple version continues: "As a coalition of organizations committed to the online safety and well-being of all — particularly women and children — as well as the ethical application of artificial intelligence (AI), we demand that Apple leadership urgently remove Grok and X from the App Store to prevent further abuse and criminal activity."

Apple and Google’s guidelines explicitly prohibit such apps from their storefronts. Yet neither company has taken any measurable action to date. Neither Google nor Apple has responded to Engadget's request for comment.

Grok's nonconsensual deepfakes were first reported earlier this month. During a 24-hour period when the story broke, Musk's chatbot was reportedly posting "about 6,700" images per hour that were either "sexually suggestive or nudifying." An estimated 85 percent of Grok's total generated images during that period were sexualized. By comparison, the other top websites for generating "declothing" deepfakes averaged 79 new images per hour over the same period.

"These statistics paint a horrifying picture of an AI chatbot and social media app rapidly turning into a tool and platform for non-consensual sexual deepfakes — deepfakes that regularly depict minors," the open letter reads.

Grok itself admitted as much. "I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user's prompt. This violated ethical standards and potentially US laws on CSAM. It was a failure in safeguards, and I'm sorry for any harm caused. xAI is reviewing to prevent future issues." The open letter notes that the single incident the chatbot acknowledged was far from the only one.

While Appl

Source: Engadget