Could X lose legal immunity over Grok AI’s objectionable pictures of women?
The Indian Express | Published: Jan 5, 2026

When Grok AI puts out objectionable pictures of women at the prompt of a user hiding behind anonymity, the key question that emerges is this — who should face the consequences?

Should it be Elon Musk, the owner of X, which is hosting the content; Grok AI, which was created not only by a team of highly paid engineers but also by low-wage data annotators; or the users putting in such prompts?

Musk says that anyone using Grok to make illegal content will “suffer the same consequences as if they upload illegal content”. The Indian government, on the other hand, is asking X how Grok AI was allowing the creation of objectionable pictures of women without their consent, seemingly with no safety guardrails in place.

We explain the legal questions at play and why the episode could lead to action against the social media platform.

For tech companies such as X, legal exposure in India is largely determined by the safe harbour protections available to them under the country’s Information Technology Act, 2000. These protections treat social media platforms as mere conduits of the content that users post, and as a result grant them legal immunity for it, as long as they adhere to prescribed due diligence directives from the government, such as taking down flagged content within a specified timeframe.

But that understanding of what safe harbour means has been upended by the advent of AI services such as Grok, where it might be tricky to ascribe accountability.

Does Grok have the right to unfettered free speech? And what would Grok’s free speech even mean?

Its code essentially determines what the next word in a sentence, or the next pixel in a picture, should be, which is a function of the underlying dataset it has been trained on. Both the code and the content in the language model are created by humans.
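To make that mechanism concrete, here is a minimal, purely illustrative sketch of next-word prediction. It is not Grok’s or xAI’s actual code; the toy vocabulary and probabilities are assumptions made up for the example, standing in for statistics a real model learns from its training data.

```python
# Illustrative sketch of next-word prediction (not Grok's actual code).
# A language model assigns probabilities to possible next words based on
# the text so far; those probabilities are learned from its training data.
import random

# Toy "learned" statistics (hypothetical): for a given preceding word,
# how likely each candidate next word is.
toy_model = {
    "the": {"cat": 0.5, "dog": 0.3, "law": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.6, "sat": 0.4},
}

def next_word(previous: str) -> str:
    """Sample the next word from the model's learned distribution."""
    choices = toy_model.get(previous, {"[end]": 1.0})
    words = list(choices.keys())
    weights = list(choices.values())
    return random.choices(words, weights=weights, k=1)[0]

# Generate a short continuation, one word at a time.
sentence = ["the"]
for _ in range(3):
    word = next_word(sentence[-1])
    if word == "[end]":
        break
    sentence.append(word)

print(" ".join(sentence))  # e.g. "the cat sat"
```

The point of the sketch is simply that the model’s output is a statistical continuation of human-created training data, produced by human-written code, which is why the question of who answers for that output is contested.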

Many would say that X should be held accountable for Grok’s responses. The Indian government is certainly in that camp.

The Indian government expressed “grave concern” over Grok being abused to generate the objectionable pictures. On Friday, it sent a missive to the company saying that it was not adhering to the country’s laws, raising red flags over a “serious failure” of safeguard enforcement, and seeking technical details related to Grok’s actions.

The government said that the service was being used to target women who host or publish their images or videos, through prompts, image manipulation and synthetic outputs, adding that “such conduct reflects a serious failure of platform-level safeguards and enforcement mechanisms, and amounts to gross misuse of artificial intelligence technologies in violation of applicable laws”.

This is being viewed with “grave concern,” as it could violate the dignity, privacy and safety of women and children, “normalising sexual harassment and exploitation in digital spaces, and undermining the statutory due diligence framework applicable to intermediaries operating in India”, the government said.

The notice said that due to Grok’s actions, X was not adequately adhering to provisions of the Information Technology (IT) Rules, 2021, and the Bharatiya Nagarik Suraksha Sanhita, 2023, particularly in relation to obscene, indecent, vulgar, pornographic, paedophilic, or otherwise unlawful or harmful content that potentially violates current laws.

IT Minister Ashwini Vaishnaw recently said that social media platforms like X should take responsibility for the content they publish, and that a parliamentary standing committee has already recommended a tough law to fix the accountability of platforms. “The Parliamentary Committee has recommended a strong law for regulating social media. We are considering it,” he said.

Earlier, he had spoken about the potential need to relook at the safe harbour protections available to online platforms.

“Shouldn’t platforms operating in a context as complex as India adopt a different set of responsibilities? These pressing questions underline the need for a new framework that ensures accountability and safeguards the social fabric of the nation,” Vaishnaw had said during his National Press Day address in 2024. He added that globally, debates are intensifying over whether the safe harbour provisions are still appropriate, given their role in enabling the spread of misinformation, riots, and even acts of terrorism.

The government talked about reconsidering the safe harbour clause in 2023 during consultations on the Digital India Act, which, once implemented, will replace the decades-old IT Act, 2000. However, the government is yet to issue a draft of the Digital India Bill for public consultation.

Officials in the government indicated that after the Grok AI incident, they were ready to revoke X’s safe harbour protections, meaning the platform could become legally liable for Grok AI’s outputs.

In October, The Indian Express had reported that AI-generated clips and pictures of actors were widely available on platforms like Instagram and X, with the platforms seemingly failing in their responsibility to curb the spread of such content. This jeopardises the safety and dignity of women and could create barriers to their participation in digital spaces.

In December, the IT Ministry had issued an advisory to online platforms, including social media companies, asking them not to host obscene or vulgar content and calling for “greater rigour” in adhering to provisions of the law. The ministry also directed platforms to carry out an immediate review of their internal compliance frameworks.
