India’s proposed smartphone security rules, including source-code access and system-level controls, have unsettled device makers and privacy advocates

While source code does not contain user data, access to operating system code can reveal how data is handled and secured

Industry experts argue the proposals are impractical and globally out of step, and warn they may create new cybersecurity risks

Over a month ago, the Indian government mandated that every new phone sold in the country come preloaded with Sanchar Saathi, a state-backed app to curb telecom fraud and trace stolen devices.

The move sent shockwaves through the Indian smartphone ecosystem. The industry pushed back swiftly, and the central government quietly scrapped the order.

As the new year began, another flashpoint emerged. According to a media report, the government is considering a sweeping set of 83 security standards for smartphone makers. The directives include mandatory source code disclosure, 12-month log retention, and government pre-approval for operating system updates.

While the Ministry of Electronics and Information Technology (MeitY) has denied any plan to forcibly access source code and says it prefers ‘collaborative engagement’ with the industry, the episode has once again unsettled device makers and privacy advocates.

With over 800 Mn smartphone users in the country, the smartphone has become the most intimate gateway to citizens’ digital lives — housing communications, financial data, location history, consumption patterns, and even political and social preferences.

But does access to source code amount to access to user data? On paper, the answer is no. Source code does not contain live user information. However, it does reveal the logic governing how data is collected, processed, encrypted, stored, and transmitted.

Access to operating system code can expose how permissions are enforced, how encryption keys are managed, where vulnerabilities may exist, and what triggers security controls. In other words, while no personal data may be directly viewed, such access provides indirect visibility into the architecture of users’ digital lives.
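To make that distinction concrete, consider the minimal Kotlin sketch below. It is entirely hypothetical: the telemetry routine, the field names, and the endpoint are invented for illustration and are not drawn from any real OEM's code. The point is what a reviewer learns from reading it: what is collected, which cipher and key size protect it, and where it goes, all without ever seeing a real user's data.

```kotlin
import javax.crypto.Cipher
import javax.crypto.KeyGenerator

// Hypothetical telemetry routine, illustrative only. A source-code reviewer
// learns the *logic* (what is collected, how it is encrypted, where it is
// sent) without access to any actual user's payload.
fun main() {
    val payload = "lat=12.97,lng=77.59".toByteArray() // stand-in, not real user data

    // The code reveals the algorithm and key size in use...
    val key = KeyGenerator.getInstance("AES").apply { init(256) }.generateKey()
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.ENCRYPT_MODE, key) // the provider generates a fresh IV

    // ...and the destination, but never the content of a live user's data.
    val encrypted = cipher.doFinal(payload)
    println("uploading ${encrypted.size} encrypted bytes to telemetry.example.com")
}
```

Reading such a routine tells an auditor, or an adversary, exactly which security properties the system relies on, which is why experts treat source-code access as architecturally sensitive even though it contains no personal data.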

“Source-code access can create indirect visibility into data-handling architectures, even if no live data is accessed. Whether this becomes a privacy concern depends entirely on how narrowly such access is scoped and governed,” said Sachin Dhawan, associate director at public policy think-tank The Dialogue.

From a constitutional standpoint, any such measure must meet the standards laid down by the Supreme Court in the Puttaswamy judgment: legality, necessity, and proportionality. That means access must be backed by clear statutory authority, serve a legitimate aim, and be narrowly tailored to that aim, he added.

Legal experts argue that any compliant framework must be backed by a clear law that limits access to source code strictly to vulnerability and security audits. It would also need to incorporate core data-protection principles such as purpose limitation, prohibit secondary uses like profiling or surveillance, and place oversight in the hands of an independent authority.

The technical safeguards would need to be just as stringent.

Source code, if reviewed at all, should only be examined in secure facilities, on a read-only basis, for a limited duration, with exhaustive audit trails recording who accessed what and when. Copying, storing, or reusing code would have to be strictly prohibited, and reviews would need to be restricted to specific security components — not entire operating systems.
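One common way to make such audit trails tamper-evident is hash-chaining, where each record commits to the one before it. The Kotlin sketch below illustrates the principle with hypothetical reviewer and component names; it is one possible design under those assumptions, not a prescribed mechanism.

```kotlin
import java.security.MessageDigest
import java.time.Instant

// Each entry commits to the hash of the previous one, so retroactively
// editing "who accessed what and when" breaks the chain and is detectable.
data class AuditEntry(
    val reviewer: String,   // hypothetical reviewer ID
    val component: String,  // hypothetical code component under review
    val at: Instant,
    val prevHash: String
) {
    val hash: String = MessageDigest.getInstance("SHA-256")
        .digest("$reviewer|$component|$at|$prevHash".toByteArray())
        .joinToString("") { "%02x".format(it) }
}

fun main() {
    val first = AuditEntry("auditor-01", "keystore-module", Instant.now(), "GENESIS")
    val second = AuditEntry("auditor-02", "permission-manager", Instant.now(), first.hash)
    for (e in listOf(first, second)) {
        println("${e.at}  ${e.reviewer} read ${e.component}  [${e.hash.take(12)}]")
    }
}
```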

Even then, experts warn of unintended consequences.

Centralising vast amounts of proprietary source code from multiple operating systems could create a cybersecurity paradox. Rather than strengthening security, it could turn government repositories into high-value targets for hackers.

Moreover, even if the government were granted access to source code, would it actually solve the problem it is trying to address?

The devil lies in the details. Smartphone software is not static. The version of an operating system submitted for certification or security testing is rarely identical to the one that eventually ships on millions of devices. Variations can arise from hardware configurations, regional customisations, last-mile patches, and carrier-level integrations. Post-sale software updates, now a routine and continuous process, can further alter device behaviour long after a phone reaches a user’s hands.
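A back-of-envelope calculation shows why exhaustive testing breaks down. The figures below are invented purely for illustration; the multiplication is the point.

```kotlin
// Illustrative arithmetic with hypothetical numbers: even modest variation
// across hardware, region, carrier, and patch level multiplies quickly.
fun main() {
    val hardwareVariants = 40   // SoC/memory/camera combinations (hypothetical)
    val regionalBuilds = 12     // regional customisations (hypothetical)
    val carrierBuilds = 8       // carrier-level integrations (hypothetical)
    val patchLevels = 24        // two years of monthly updates (hypothetical)

    val states = hardwareVariants * regionalBuilds * carrierBuilds * patchLevels
    println("Distinct device/software states to certify: $states") // 92160
}
```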

“In reality, no single entity, not even the original equipment manufacturer (OEM), can exhaustively test every software version across every device variant in circulation,” said Neil Shah, cofounder of Counterpoint Research.

The breadth of the proposed controls has alarmed industry bodies. According to a Reuters report, the Mobile and Electronics Association of India (MAIT) submitted a confidential response urging the government to drop the proposals altogether.

“This is not possible due to secrecy and privacy,” MAIT said, arguing that major markets across the EU, North America, Australia, and Africa do not mandate compulsory source-code access. Instead, cybersecurity assurance in these regions relies on independent certification, standards-based testing, and third-party audits.

The proposals also include automatic and periodic malware scanning on phones, alongside a requirement to inform the National Centre for Communication Security about major updates and patches before rollouts. While framed as safeguards, such measures raise fears of persistent system-level monitoring, particularly in the absence of strong legal firewalls.

“The proposals seek to micromanage how users interact with their own devices, effectively eroding the concept of ownership. This is reminiscent of a ‘nanny state’ where, for instance, the mandate for ‘tamper detection’ warnings against ‘rooting’ or ‘jailbreaking’ and ‘anti rollback protection’ criminalises advanced users who wish to customise their devices or extend the life of older hardware. This directly contradicts a user’s ‘right to repair’ and modify the smartphones owned by them,” the Internet Freedom Foundation said in a statement.

Phone makers, platform companies, and cloud service providers are unlikely to accept deep government scrutiny of their proprietary systems unless there are strong, legally binding safeguards in place.

Today, Big Tech wields enormous influence over people’s digital lives. Through algorithms and targeted content, these companies shape what we see, buy, and even think. The concern now is that the state may be seeking similar control, whether for security or to counterbalance that influence.

As a result, India’s push to secure smartphones falls into a grey area. While national security concerns are legitimate, the scale and depth of the proposed controls raise fears of overreach. Without clear laws, transparency, and independent oversight, the boundary between protecting citizens and monitoring them could become uncomfortably thin.
