Grok’s creation of harmful content undermines X’s claim to safe harbour protection in India
The Indian Express

Published: Jan 7, 2026

The Union government’s January 2 notice to X (formerly Twitter) over its AI chatbot Grok generating harmful content about women represents far more than routine regulatory action. The Ministry of Electronics and Information Technology (MeitY) initially gave X 72 hours, later extended by 48 hours, to submit a detailed action-taken report and to implement technical safeguards restricting Grok from producing sexually explicit material, warning that failure to comply could jeopardise the platform’s “safe harbour” protections under Indian law. Examined against X’s documented history of contentious engagement with Indian authorities and the accelerating crisis of synthetic content, this directive marks a critical inflection point in India’s approach to AI governance.

X’s relationship with Indian regulatory authorities has been marked by recurring conflict. In May 2021, Indian police visited Twitter’s Delhi offices after the company marked tweets by BJP spokesman Sambit Patra as “manipulated media”. Earlier that year, during the farmers’ protests, the platform initially complied with government blocking orders, then reversed course, a pattern of partial compliance that would repeat itself. In 2023, former CEO Jack Dorsey accused the Indian government of threatening to shut down the company’s offices and of demanding the censorship of journalists.

The legal battles escalated in 2025, when X filed a case challenging the government’s Sahyog portal, a state-owned digital tool that automatically sends content removal notices to platforms. The Karnataka High Court held that, under Section 69A of the Information Technology Act, 2000, the government has the authority to block entire user accounts, not just specific tweets, rejecting X’s arguments about procedural fairness.

The Grok controversy, however, differs fundamentally from previous disputes. MP Priyanka Chaturvedi filed a formal complaint after users demonstrated that Grok could be prompted to alter images of women to make them appear in bikinis. More disturbingly, reports flagged instances in which the AI generated sexualised images involving minors, an issue X acknowledged resulted from lapses in its safeguards.

The critical distinction involves attribution of responsibility. When users post illegal content, platforms face difficult moderation decisions. But when Grok produces sexualised content depicting women without their consent, that output reflects X’s corporate choices about safety mechanisms and deployment standards. X defended Grok by stating that its outputs are based on information from sources that have not been publicly debunked, and that objectionable content resulted from users’ prompts. This sidesteps the essential question: why was an AI system released without guardrails sufficient to prevent foreseeable harms?

The government’s threat to revoke safe harbour protections recognises that AI-generated content doesn’t fit intermediary liability frameworks designed for human expression. When platforms deploy generative AI systems, they assume a more direct role in content creation, potentially forfeiting protections premised on mere hosting.

The Grok incident emerges against India’s accelerating struggle with synthetic media. Deepfake cases have surged 550 per cent since 2019, with losses projected at ₹70,000 crore in 2024 alone. In 2023, deepfake videos of celebrities like Rashmika Mandanna went viral, demonstrating how easily synthetic media violates dignity and spreads before moderation mechanisms respond.

India’s regulatory response has evolved rapidly. MeitY’s amendments to the IT Rules, effective November 2025, introduced “synthetically generated information” as a regulated category covering any content generated through algorithmic or AI processes. The amendments require platforms to ensure that users declare whether uploaded content is synthetically generated and to display clear disclaimers on such content. Crucially, removal of synthetic content no longer depends on court orders: platforms must now remove such content using reasonable efforts, a shift from reactive to proactive regulation.

Yet the framework has significant gaps. AI tools detect deepfakes with only 65-70 per cent accuracy, limiting identification at scale. Terms like “synthetic information” lack definitional clarity, which could lead to inconsistent enforcement. And the amendments contain no explicit exceptions for satire, news reporting, or artistic expression, forms of speech central to democratic participation.

The Grok case exposes gaps in liability frameworks designed for human-generated content. When AI systems produce illegal material, who bears responsibility: the platform, the developer, the users who craft harmful prompts, or all three? The IT Act’s provisions on obscene and sexually explicit content (Sections 67 and 67A) and the sexual-offence provisions of the Bharatiya Nyaya Sanhita, which replaced the IPC in 2024, provide grounds for action, but applying them to AI-generated content presents novel interpretive challenges.

The case also reveals how generative AI democratises harmful content creation at unprecedented scale. Unlike individual users producing objectionable material, AI systems can generate thousands of harmful outputs responding to malicious prompts within minutes. This asymmetry between creation and moderation capacity fundamentally alters platform governance dynamics.

The Grok incident should catalyse systematic reform. First, the government needs to establish clear legal frameworks making explicit that entities deploying generative AI systems bear primary responsibility for those systems’ outputs. Safe harbour protections premised on intermediary neutrality cannot extend to corporate actors making design choices about AI capabilities.

Second, mandatory safety testing before AI deployment is essential. The 2025 amendments represent one of the most comprehensive deepfake regulatory frameworks globally, integrating definitions, transparency requirements, and platform accountability. But comprehensive frameworks require enforcement capacity. India should establish technical standards for AI safety assessments, require independent audits before deployment, and impose significant penalties for releasing AI systems without adequate safeguards.

Third, the jurisdictional asymmetry between global platforms and national regulation must be corrected. When foreign companies deploy AI systems in India, they must meet India-specific safety standards reflecting constitutional values of dignity, consent, and privacy. The Karnataka High Court’s ruling established an important precedent on regulatory sovereignty over platforms operating within Indian jurisdiction.

The government’s notice to X represents more than enforcement: it signals India’s determination to assert regulatory authority over AI systems regardless of corporate resistance. As generative AI becomes infrastructure shaping information access and public discourse, the question of who controls these systems becomes existential for digital sovereignty. X’s adversarial relationship with Indian authorities, combined with its pattern of deploying inadequately safeguarded AI, demonstrates why corporate self-regulation has proved insufficient and why the regulatory challenge is so acute.

India’s response to Grok will establish precedents far beyond one problematic chatbot. It will determine whether platforms can deploy AI systems in India without adequate accountability, whether AI safety requirements will reflect Indian constitutional values, and whether regulatory authorities can enforce meaningful standards against resistant multinational corporations. The stakes extend to every woman whose dignity could be violated by synthetic content, every citizen exposed to AI-generated misinformation, and every democratic process threatened by manipulated media. Against this backdrop, establishing clear rules and enforcing them consistently is not merely regulatory housekeeping but a fundamental protection of citizens’ constitutional rights.
