Situated between the rigid regulations of the EU and the market-driven approach of the US, India is attempting a high-wire act. The world's most populous nation wants to be the "AI Garage of the World"—a testbed for scalable solutions in agriculture, health, and education—but is increasingly wary of the risks posed by Big Tech monopolies and deepfakes.
Starting in 2023, New Delhi's stance shifted noticeably. The initial declaration that "we will not regulate AI" has evolved into a demand for "safe and trusted AI," underpinned by a massive state investment in sovereign compute infrastructure.
## From "No Regulation" to "Harm Reduction"
The evolution of India's policy can be traced through Ministry statements and draft bills.
- Phase 1 (Early 2023): MeitY (Ministry of Electronics and IT) explicitly stated it would *not* regulate AI, to avoid stifling innovation.
- Phase 2 (Late 2023 - Deepfake Crisis): Following high-profile deepfakes of politicians and actors, the government issued forceful advisories to social media platforms, threatening loss of "safe harbor" status under the IT Act if they failed to tackle AI misinformation.
- Phase 3 (2024 - The Digital India Act): The upcoming Digital India Act (DIA) is expected to replace the decades-old IT Act, introducing specific guardrails for "high-risk" AI and algorithmic accountability.
## The "IndiaAI" Mission: Sovereign Capability
Unlike Western nations focusing on rules, India is focusing on assets. In March 2024, the Cabinet approved the IndiaAI Mission with a budget of over ₹10,300 Crore (~$1.25 Billion).
Key Pillars of the Mission:

| Pillar | Objective |
|---|---|
| Compute Capacity | Build a 10,000+ GPU supercomputing infrastructure available to domestic startups. |
| Innovation Centre | Development of indigenous Large Multimodal Models (LMMs). |
| Datasets Platform | A unified "India Datasets Platform" to provide non-personal data for training. |
| Future Skills | Subsidizing AI education in Tier-2/3 cities. |
## The "Advisory" Controversy of March 2024
India briefly shocked the global tech world in March 2024 when MeitY issued an advisory requiring platforms to seek explicit government permission before deploying "untested" or "unreliable" AI models.
After immediate industry pushback, the government clarified:
- Startups are Exempt: The permission requirement applies primarily to "major platforms" (Big Tech).
- Focus on Integrity: The core demand is that models must not generate unlawful content or threaten the integrity of the electoral process.
## Enterprise Action Plan: The DPI Advantage
For enterprises, India offers a unique "Digital Public Infrastructure" (DPI) stack—Identity (Aadhaar), Payments (UPI), and Data (DEPA)—which is now being extended to AI.
1. Leverage the Stack: Build AI solutions that integrate with the key layers of India's DPI stack. This is the government's preferred architectural pattern.
2. Localization is Key: "Sovereign AI" means training on Indian languages and context. Models trained only on Western data are increasingly viewed with skepticism by regulators.
3. Deepfake Zero-Tolerance: If you host user-generated content, your AI moderation systems must be aggressive. The "Safe Harbor" protection is thinning rapidly for AI-generated misinformation.
4. Watch the DIA: The Digital India Act will likely introduce "User Harm" as a legal metric. Audit your AI for safety, not just bias.
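The moderation and labelling obligations in points 3 and 4 can be sketched as a simple pre-publication gate. This is a minimal illustration, not a mandated implementation: the `synthetic_score` input (assumed to come from an upstream deepfake classifier), the thresholds, and the label wording are all hypothetical placeholders.

```python
from dataclasses import dataclass, field

# Illustrative label text -- MeitY advisories ask for labelling of
# AI-generated content but do not prescribe this exact wording.
AI_LABEL = "Generated or modified using AI"

@dataclass
class Upload:
    media_id: str
    synthetic_score: float        # 0.0-1.0, from an assumed upstream classifier
    labels: list = field(default_factory=list)
    blocked: bool = False

def moderate(upload: Upload,
             block_threshold: float = 0.9,
             label_threshold: float = 0.5) -> Upload:
    """Pre-publication gate: block near-certain deepfakes, label probable synthetic media."""
    if upload.synthetic_score >= block_threshold:
        upload.blocked = True               # aggressive stance preserves safe-harbor posture
    elif upload.synthetic_score >= label_threshold:
        upload.labels.append(AI_LABEL)      # transparency label instead of removal
    return upload
```

The two-tier design (block vs. label) reflects the policy distinction above: outright unlawful or deceptive content is removed, while merely synthetic content is disclosed to users.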
To see how India's approach contrasts with its neighbor, read China's Algorithmic Control. For a global comparison, see Global AI Governance.