Grok is undressing children — can the law stop it?
The Verge
Published Jan 6, 2026

Grok began 2026 as it began 2025: under fire for its AI-generated images.

Elon Musk’s chatbot has spent the last week flooding X with nonconsensual, sexualized deepfakes of adults and minors. Circulating screenshots show Grok complying with requests to put real women in lingerie and make them spread their legs, and to put small children in bikinis. Reports of images that were later removed describe even more egregious content. One X user confirmed in a conversation with The Verge that they came across multiple images of minors with what the prompter dubbed “donut glaze” on their faces, which appear to have since been removed. At one point, Grok was generating about one nonconsensual sexualized image per minute, according to one estimate.

X’s terms of service prohibit “the sexualization or exploitation of children.” And on Saturday, the company stated the platform would “take action against illegal content on X, including Child Sexual Abuse Material (CSAM).” It appears to have taken down some of the worst offenses. But overall, it’s downplayed the incidents. Musk has said that “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content,” but he’s made it clear through public X posts that he doesn’t believe the general undressing prompts are a problem, and he’s responded to the broader topic with laughing and fire emojis on X. The company’s tepid response has alarmed experts who have spent years trying to address AI-powered sexual harassment and abuse. Multiple governments have said they’re scrutinizing X. But even amid an unprecedented push for online regulation, the path toward policing it or its chatbot’s creations isn’t clear.

xAI, creator of Grok, did not respond to a request for comment. Neither did Apple or Google when asked if the reports violated their app store policies.

Grok has always allowed, and Musk has openly encouraged, highly sexualized imagery. But over the past week, the ability to ask Grok to edit images — via a new button that allows changes without the original poster’s permission — has gone viral for undressing women and minors. Enforcement of guardrails has been haphazard at best, and most of the supposed responses from X come from Grok itself, which means they’re essentially thought up on the spot. The replies include a claim that some of its creations went “against our guidelines for fictional content only” and, at a user’s request, a widely reported apology — one xAI itself doesn’t appear to have issued.

One of the biggest questions here is whether the images violate laws against CSAM and nonconsensual intimate imagery (NCII) of adults, especially in the US, where X is headquartered. The US Department of Justice proscribes “digital or computer generated images indistinguishable from an actual minor” that include sexual activity or suggestive nudity. And the Take It Down Act, signed into law by President Donald Trump in May 2025, prohibits nonconsensual AI-generated “intimate visual depictions” and requires certain platforms to rapidly remove them.

Celebrities and influencers have described feeling violated by sexualized AI-generated images; according to screenshots, Grok has produced pictures of the singer Momo from TWICE, actress Millie Bobby Brown, actor Finn Wolfhard, and many more. Grok-generated images are also being used specifically to attack women with political power.

“It is a tool for expressing the underlying misogyny that pervades every corner of American society and most societies around the world,” Riana Pfefferkorn, a policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), told The Verge. “It is a privacy violation, it is a violation of consent and of boundaries, it is extremely intrusive, it is a form of gendered violence in its way.” Perhaps above all, explicit images of minors — including through dedicated “nudify” apps — have become a growing problem for law enforcement.

On Monday, the Consumer Federation of America (CFA), a group of hundreds of consumer-focused nonprofits, publicly called for both state and federal action against xAI for “creating and distributing Child Sexual Abuse Material (CSAM) and other non-consensual intimate imagery (NCII) with Generative AI,” sending a letter signed by a handful of organizations to the Federal Trade Commission and US attorneys general.

Yet the specifics of what’s prohibited by US law are “pretty murky,” Mary Anne Franks, a professor in intellectual property, technology, and civil rights law at the George Washington University Law School, said. “Part of what I’ve not been able to figure out either is ... whether this is actually crossing the line into actual nudity and sexual situations.”

Using AI to generate an image of an identifiable minor in a bikini (or potentially even naked) — though unequivocally unethical — may not be illegal under current CSAM laws in the US, experts told The Verge. That said, images like the ones that appear to include semen could violate both preexisting CSAM laws and the Take It Down Act — and Franks suspects these aren’t the worst offenses out there. “We can imagine that whatever’s hitting the mainstream media, there’s probably a million worse things that people are also generating … Every possible prompt you could think of is probably coming up,” Franks said.

But despite these federal laws and a plethora of state-level ones, experts say it’s difficult to enforce bans on AI-generated sexual imagery right now — and even harder to determine what responsibility platforms could have. “Ultimately there are conflicting laws, and there’s no legal precedent” for much of it, Shael Norris, founding executive director of SafeBAE, an organization working to end sexual violence, told The Verge.

John Langford, a visiting clinical associate professor of law at Yale Law School and counsel at Protect Democracy, said the patchwork of sexual deepfake bans remains sparsely tested in court. “All this is sort of new — we’re just now starting to develop case law on what falls where,” said Langford. But there’s some yardstick, at least: For the Grok creations that do depict identifiable minors, we do now have “precedent [that] any computer-generated image of a real child that is sexually explicit is illegal,” Drew Davis, SafeBAE’s director of strategic initiatives, said.

There are a handful of current federal prosecutions for creating or possessing AI-modified images of real children, and several dozen at the state level, said Pfefferkorn. “When it comes to whether the companies themselves are liable, that’s where we are, I think, in uncharted territory,” Pfefferkorn said.

Davis added that we’re “dealing with a complicated legal landscape when it comes to AI-generated images of minors.” That’s partly because the grace period for the “take it down” portion of the Take It Down Act, which requires platforms to respond to such content, runs until May.

Also, Section 230 has long shielded companies from liability for content other people posted. But as companies turn to bots like Grok to allow users to generate their own images, it’s unclear what liability they bear. “This is why I’m so interested to see if there’s going to be … creative prosecution here,” Franks said, adding, “It’s about whether or not, by virtue of creating these images, they have violated the criminal provision.”

The caveat, multiple experts told The Verge, is that virtually all of the criminal statutes dictate that the offender had to post the content with the knowledge that it was going to cause harm. Yale Law’s Langford said that part introduces “really hard questions on whether you could hold Grok or xAI liable.” But, others say, personhood is attributed to corporations in other situations — why not this one? Musk’s frequent, unfiltered posting also offers an unusual form of insight.

Pfefferkorn believes this will be a “pivotal year in terms of fighting this problem” and said that she wouldn’t be surprised if class-action lawsuits surfaced.

But to complicate things even further, Musk and X have close ties to the current administration — Musk’s ostensibly defunct Department of Government Efficiency (DOGE) was at one point working within the FTC itself, the agency tasked with enforcing the Take It Down Act. Beyond the US, the Trump administration has used trade talks to discourage other countries from regulating American internet platforms. Musk and Trump are publicly on good terms, and any country that attempts to punish X could potentially face the administration’s ire, on top of likely noncompliance from X itself.

Even so, an international backlash is building. Members of the French government said they would investigate the matter. The Indian IT ministry ordered xAI to submit a report about how it would prevent further material that’s “obscene, pornographic, vulgar, indecent, sexually explicit, pedophilic, or otherwise prohibited under law.” And the Malaysian government’s Communications and Multimedia Commission said it had “taken note with serious concern” of complaints about misuse of AI on X, particularly the “digital manipulation of images of women and minors to produce indecent, grossly offensive, or otherwise harmful content.”

Grok has persistently gone off the rails in sometimes bizarre and frequently sexual ways, from its antisemitic breakdown to allowing people to create partially nude images of Taylor Swift. Outside experts have expressed concerns about its slapdash safety efforts — after the July 2025 release of Grok 4, it took more than a month for the company to release a model card outlining things like safety features and test results, typically seen as a bare minimum in the industry.

Without outside pressure, Grok’s deepfakes problem seems unlikely to end anytime soon. Some of the most egregious images seem to be taken down after the fact. But the larger guardrails, which are detailed in Grok 4.1’s model card with a brief mention of CSAM, clearly aren’t working as well as planned. And Musk’s recent comments suggest he doesn’t see much wrong with the current state of Grok. One of the most puzzling things about the whole saga, Pfefferkorn said, isn’t that an AI platform can be induced to create potential CSAM — it’s that “we have not necessarily seen, so far, a lot of concern about whether they’re coming up right close to that line.”

