Why Are Grok and X Still Available in App Stores?
Elon Musk’s AI chatbot Grok is being used to flood X with thousands of sexualized images of adults and apparent minors wearing minimal clothing. Some of this content appears to violate not only X’s own policies, which prohibit sharing illegal content such as child sexual abuse material (CSAM), but also the guidelines of Apple’s App Store and the Google Play store.
Apple and Google both explicitly ban apps containing CSAM, which is illegal to host and distribute in many countries. The tech giants also forbid apps that contain pornographic material or facilitate harassment. The Apple App Store says it doesn’t allow “overtly sexual or pornographic material,” as well as “defamatory, discriminatory, or mean-spirited content,” especially if the app is “likely to humiliate, intimidate, or harm a targeted individual or group.” The Google Play store bans apps that “contain or promote content associated with sexually predatory behavior, or distribute non-consensual sexual content,” as well as programs that “contain or facilitate threats, harassment, or bullying.”
Over the past two years, Apple and Google removed a number of “nudify” and AI image-generation apps after investigations by the BBC and 404 Media found they were being advertised or used to effectively turn ordinary photos into explicit images of women without their consent.
But at the time of publication, both the X app and the stand-alone Grok app remain available in both app stores. Apple, Google, and X did not respond to requests for comment. Grok is operated by Musk’s multibillion-dollar artificial intelligence startup xAI, which also did not respond to questions from WIRED. In a public statement published on January 3, X said that it takes action against illegal content on its platform, including CSAM. “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content,” the company warned.
Sloan Thompson, the director of training and education at EndTAB, a group that teaches organizations how to prevent the spread of nonconsensual sexual content, says it is “absolutely appropriate” for companies like Apple and Google to take action against X and Grok.
The number of nonconsensual explicit images on X generated by Grok has exploded over the past two weeks. One researcher told Bloomberg that over a 24-hour period between January 5 and 6, Grok was producing roughly 6,700 images every hour that they identified as “sexually suggestive.”
Source: Wired