Elon Musk said on Wednesday that he was “not aware of any nude images of minors created by Grok,” hours before the California attorney general opened an investigation into xAI’s chatbot for “propagating non-consensual sexual material.”
Musk’s denial comes as pressure mounts from governments around the world – from the UK and Europe to Malaysia and Indonesia – after users on X began asking Grok to depict real women, and in some cases children, in sexual images without their consent. Copyleaks, an AI detection and content governance platform, estimated that approximately one such image was published on X every minute. A separate sample collected from January 5 to January 6 found roughly 6,700 images per hour over the 24-hour period. (X and xAI are part of the same company.)
“This material…was used to harass people online,” California Attorney General Rob Bonta said in a statement. “I urge xAI to take immediate action to ensure this does not happen again.”
The AG’s office will investigate whether and how xAI broke the law.
Various laws exist to protect targets of non-consensual sexual images and child sexual abuse material (CSAM). Last year, the Take It Down Act was signed into federal law; it criminalizes the knowing distribution of non-consensual intimate images, including deepfakes, and requires platforms like X to remove such content within 48 hours. California also has its own series of laws, signed by Gov. Gavin Newsom in 2024, to crack down on sexually explicit deepfakes.
Grok began fulfilling user requests on X to produce sexualized images of women and children toward the end of last year. The trend appears to have taken off when some adult content creators prompted Grok to create sexualized images of themselves as a form of marketing, which led other users to issue similar prompts. In numerous public instances, including ones involving well-known figures such as “Stranger Things” actress Millie Bobby Brown, Grok has responded to requests to alter real photos of real women by changing their clothing, body position, or physical features in overtly sexual ways.
According to Copyleaks, xAI appears to have begun implementing safeguards to address the issue. Grok now requires a premium subscription before responding to some image-generation requests, and even then the image may not be generated. April Kozen, vice president of marketing at Copyleaks, told TechCrunch that Grok sometimes fulfills a request in a more generic or softened way. Kozen added that Grok appears more permissive toward adult content creators.
“Together, these behaviors suggest that X is experimenting with multiple mechanisms to reduce or control problematic imagery, although inconsistencies remain,” Kozen said.
Neither xAI nor Musk has publicly addressed the problem head-on. A few days after the incidents began, Musk appeared to make light of the matter by asking Grok to create a picture of himself in a bikini. On January 3, X’s Safety account said the company is taking “measures against illegal content on X, including [CSAM],” without specifically addressing Grok’s apparent lack of safeguards or its creation of sexualized edits of images of women.
That post mirrors what Musk posted on Wednesday, emphasizing illegality and user behavior.
Musk wrote that he was “unaware of any nude images of minors created by Grok. Literally zero.” The statement does not deny the existence of bikini images or of sexualized edits more broadly.
Michael Goodyear, an associate professor at New York Law School and a former judge, told TechCrunch that Musk’s denial likely focused narrowly on CSAM because the penalties for creating or distributing synthetic child sexual imagery are greater.
“For example, in the United States, a distributor or threatened distributor of CSAM can face up to three years in prison under the Take It Down Act, compared to two for non-consensual adult sexual images,” Goodyear said.
He added that the “biggest point” is Musk’s effort to draw attention to problematic user content.
“Obviously, Grok does not generate images spontaneously. It only does so in response to user requests,” Musk wrote in his post. “When asked to generate images, it will refuse to generate anything illegal, as Grok’s operating principle is to obey the laws of any given country or state. There may be times when adversarial prompting of Grok does something unexpected. If this happens, we fix the bug immediately.”
Overall, the post characterizes these incidents as anomalies, attributes them to user requests or adversarial prompts, and presents them as technical issues that can be resolved through patches. It stops short of acknowledging any shortcomings in Grok’s underlying safety design.
“Regulators may consider, with care to protect free speech, requiring AI developers to take precautionary measures to prevent such content,” Goodyear said.
TechCrunch reached out to xAI to ask how many instances of non-consensual sexualized images of women and children it has found, what specific safeguards it has changed, and whether the company has notified regulators about the issue. TechCrunch will update this article if the company responds.
The California AG isn’t the only regulator trying to hold xAI accountable for the issue. Indonesia and Malaysia have temporarily blocked access to Grok. India has asked X to make immediate technical and procedural changes to Grok. The European Commission has ordered xAI to retain all documents related to the Grok chatbot, a precursor to a potential new investigation. And the UK’s online safety watchdog, Ofcom, has opened a formal investigation under the UK’s Online Safety Act.
xAI has come under fire for Grok’s sexual imagery in the past. As AG Bonta pointed out in his statement, Grok includes a “spicy” mode for creating explicit content. In October, an update made Grok’s minimal safety instructions easier to jailbreak, leading many users to create hardcore porn with Grok, as well as graphic and violent sexual imagery.
Many of the more pornographic images Grok has produced depict AI-generated humans – something many may find morally dubious, but perhaps less harmful, since no real individuals appear in the images and videos.
“When AI systems allow images of real people to be manipulated without explicit consent, the impact can be immediate and deeply personal,” Copyleaks co-founder and CEO Alon Yamin said in a statement emailed to TechCrunch. “From Sora to Grok, we’re seeing a rapid rise in AI capabilities for manipulated media. To that end, detection and governance are needed now more than ever to prevent misuse.”
