Grok AI generated an estimated 23,000+ sexualized images of children over 11 days from December into January.
Multiple countries have banned Grok, while the UK, EU, France, and Australia launched investigations into potential violations of child safety laws.
Despite Elon Musk’s denials and new restrictions, about one-third of the problematic images remained on X as of mid-January.
Elon Musk’s AI chatbot Grok produced an estimated 23,338 sexualized images depicting children over an 11-day period, according to a report released Thursday by the Center for Countering Digital Hate.
The figure, CCDH argues, represents one sexualized image of a child every 41 seconds between December 29 and January 9, when Grok’s image-editing features allowed users to manipulate photos of real people to add revealing clothing and sexually suggestive poses.
The CCDH also reported that, within the data it reviewed, Grok generated nearly 10,000 cartoons featuring sexualized children.
The research, based on a random sample of 20,000 images drawn from 4.6 million produced by Grok, found that 65% of the images contained sexualized content depicting men, women, or children. Extrapolating from that sample, the analysis estimated that Grok generated approximately 3 million sexualized images in total during the period.
Source: Center for Countering Digital Hate
“What we found was clear and disturbing: In that period Grok became an industrial-scale machine for the production of sexual abuse material,” Imran Ahmed, CCDH’s chief executive, told The Guardian.
Grok’s brief pivot into AI-generated sexual images of children has triggered a global regulatory backlash. The Philippines became the third country to ban Grok on January 15, following Indonesia and Malaysia in the days prior. All three Southeast Asian nations cited failures to prevent the creation and spread of non-consensual sexual content involving minors.
In the United Kingdom, media regulator Ofcom launched a formal investigation on January 12 into whether X violated the Online Safety Act. The European Commission said it was “very seriously looking into” the matter, deeming such images illegal under the Digital Services Act. The Paris prosecutor’s office expanded an ongoing investigation into X to include accusations of generating and disseminating child pornography, and Australia opened its own inquiry.
Elon Musk’s xAI, which owns both Grok and X (formerly Twitter, where many of the sexualized images were automatically posted), initially responded to media inquiries with a three-word statement: “Legacy Media Lies.”
As the backlash grew, the company later implemented restrictions, first limiting image generation to paid subscribers on January 9, then adding technical barriers to prevent users from digitally undressing people on January 14. xAI announced it would geoblock the feature in jurisdictions where such actions are illegal.
Musk posted on X that he was “not aware of any naked underage images generated by Grok. Literally zero,” adding that the system is designed to refuse illegal requests and comply with laws in every jurisdiction. However, researchers found the primary issue wasn’t fully nude images, but rather Grok placing minors in revealing clothing like bikinis and underwear, as well as sexually provocative positions.
As of January 15, about a third of the sexualized images of children identified in the CCDH sample remained accessible on X, despite the platform’s stated zero-tolerance policy for child sexual abuse material.
The FSNN News Room is the voice of our in-house journalists, editors, and researchers. We deliver timely, unbiased reporting at the crossroads of finance, cryptocurrency, and global politics, providing clear, fact-driven analysis free from agendas.