The Ministry of Electronics and Information Technology (MeitY), in a letter dated January 2, 2026, directed X (formerly Twitter) to take urgent action against the proliferation of obscene, nude, indecent, and sexually explicit content, particularly content generated or circulated through the misuse of its AI services such as Grok and other xAI tools. In short, the ministry has ordered X to promptly remove such material, strengthen its content moderation and AI safeguards, and demonstrate compliance with India’s due diligence requirements for intermediaries.
This move follows a wave of activity on X over the past few days, during which users repeatedly prompted the platform’s AI chatbot, Grok, to generate graphic sexual content from photos of real women. Because the outputs appeared directly in replies, they often remained visible for extended periods and gained traction through engagement loops, with multiple users amplifying or iterating on the same prompts. The episode has raised fresh concerns about the platform’s controls over AI-assisted content creation and distribution.
Meanwhile, MeitY’s intervention builds on its earlier advisory on obscene and sexually explicit content, which reminded intermediaries of their obligations under the Information Technology Act and the IT Rules, 2021. The advisory stressed that platforms must prevent the hosting and dissemination of prohibited content, act swiftly upon gaining actual knowledge, and deploy adequate technological measures to curb the spread of unlawful material, particularly content that harms the dignity and privacy of women.
Further, when asked about the Grok incident in an interview with CNBC-TV18, MeitY Secretary S. Krishnan said that “they cannot escape their duty or responsibility simply by pleading safe harbour” and that “social media entities have to show responsibility”.
The notice directs X to take immediate and concrete steps to prevent the hosting, generation, publication, transmission, storage, or sharing of obscene, nude, indecent, sexually explicit, vulgar, paedophilic, or otherwise unlawful content, including content created or amplified through AI-enabled services such as Grok. It instructs the platform to desist from allowing such material in any form and to act strictly in line with statutory due diligence obligations under the IT Act and the IT Rules, 2021.
The notice further requires the platform to carry out a comprehensive technical, procedural, and governance-level review of the Grok application. This review must cover prompt processing, large language model–based output generation, image handling, and safety guardrails, and must ensure that the application does not generate, promote, or facilitate nudity, sexualisation, sexually explicit material, or any other unlawful content in any form.
In parallel, the platform must enforce its user terms of service, acceptable use policies, and AI usage restrictions, including by imposing strong deterrent measures such as suspension, termination, or other enforcement action against users and accounts found to be in violation.
Additionally, the platform must submit a detailed Action Taken Report (ATR) to the ministry within 72 hours of the issuance of the letter. This report must outline the technical and organisational measures adopted or proposed in relation to the Grok application; the role and oversight exercised by the Chief Compliance Officer; the actions taken against offending content, users, and accounts; and the mechanisms put in place to ensure compliance with mandatory offence-reporting requirements under criminal procedure law.
India’s deepfake regulation debate has largely centred on MeitY’s draft amendments to the IT Rules on synthetic information, which aim to push platforms and AI service providers towards labelling and provenance-style compliance, while also expanding the conversation beyond simple takedowns. At MediaNama’s Regulating for Deepfakes in India discussion, experts repeatedly questioned whether disclosure mandates alone can work at scale, and whether the government should assign responsibility differently across the ecosystem.
One participant said that “the problem statement of these particular rules, which seems to be deepfakes and harmful deepfakes” could be better resolved “through a value chain approach where you’re assigning appropriate responsibilities across various layers of the AI value chain”.
At the same time, speakers warned that the draft framework risks conflating AI governance with intermediary liability. One expert noted that “by conflating AI oversight with the liability regime, we are essentially creating outcomes that are not needed and a lot of confusion”, referring to how labelling and provenance requirements begin to resemble AI technology regulation rather than content distribution compliance.
Meanwhile, enforcement and detection emerged as central issues. One speaker pointed to existing deployments of detection systems, stating, “We are live with many banks and financial institutions”, and that the system is “handling lakhs of KYC verifications per day” with “millions of API hits every month” that are “scalable to tens of millions of detections per day”. However, they also asked, “Who will pay for it?”, highlighting the cost challenge.
This matters because India’s deepfake and AI misuse problem has moved beyond fringe tools and isolated platforms. Generative capabilities now sit directly within large, mainstream digital services, enabling harmful content to be created and circulated at scale. This episode shows that when platforms integrate such tools into social feeds, replies, and engagement-driven interfaces, misuse can spread rapidly and gain visibility faster than existing moderation controls can respond.
At the same time, the availability of face-swapping applications on the Google Play Store in India underscores that AI-enabled misuse spans multiple layers of the digital ecosystem. While regulators and platforms may intervene at the level of content distribution, the underlying tools that enable synthetic sexual content often remain easily accessible through app marketplaces. This creates an uneven regulatory landscape, where action against one platform does not necessarily reduce overall availability or use.
Against this backdrop, the focus on AI features within a major social platform highlights a broader policy challenge. Regulatory responses are increasingly confronting not just user behaviour, but the design and deployment of generative systems themselves. However, these interventions remain largely case-specific.
Consequently, this episode points to a growing gap between the pace of generative AI adoption and the coherence of regulatory oversight. As AI-driven content creation becomes more deeply woven into everyday platforms, the effectiveness of enforcement will depend on whether safeguards extend consistently across platforms, tools, and distribution channels—rather than operating in isolation.