Historically, global technology providers have not customised their products to the domestic requirements of individual countries. It is in this context that India has a significant opportunity, given the need for customisation for language, local services and local data context, which global Artificial Intelligence (AI) models often do not provide, said Susan Athey, Professor of Economics of Technology at the Stanford Graduate School of Business.
In an interaction with Aanchal Magazine on the sidelines of the Delhi School of Economics’ workshop on AI, jobs and growth, Athey also said that any law on the use of copyrighted material by AI applications needs to strike a balance, since AI startups don’t have much revenue but news organisations also need to be paid.
“…It has to balance the ability of startups to get started, when they don’t have a lot of revenue, to be able to use data, with the fact that things like the news industry need to get paid or they won’t produce news. News involves research and development every day. It involves innovation every day. You are doing reporting, that’s innovation. And so your innovation is important, just like the AI company’s innovation is also important, and those have to be balanced,” she said. Edited excerpts:

Discussions on AI across the world often feature the word disruption. For a country like India, which is an emerging economy and still at a nascent stage in the AI ecosystem, there are concerns about its potential to disrupt the labour market. The big concern is that it may result in the loss of jobs. Where do you see India featuring in the global AI ecosystem?
On this trip, I became more optimistic about India’s ability to participate. It’s a large country and we’ve seen historically that global technology providers don’t customise enough. And there’s so much opportunity and need for customisation in India for language, for local services, local data context, and the global models are often not providing that. So that creates an opportunity for India.
The second point is that AI is not just a consumer product, it’s not just ChatGPT and the web browser. It’s a product that serves the government, the defence sector, and businesses. With such a diversity of customers, there’s more room for a diversity of suppliers. And if the global companies are caught up in a race to the top on the general purpose thing and don’t have the time and resources to serve all of these customers in India, then that creates room for the local providers to come in. So, I think for many countries, there’s room for the application layer to be domestic because the application layer has to do implementation. AI is not useful unless it’s actually solving the problem to the finish line, and to solve the problem to the finish line, you need to work with the implementing partners. If you’re delivering it in healthcare, you have to work with the hospitals, you have to work with the hospital data. Some products you can just adopt off the web, and then maybe the US company serves that, but the beauty of AI products is that they can be customised and specialised directly for the customer, and that will be provided by local players.
In addition, a big part of the customer demand is government-led, which gives an opportunity to promote domestic companies. It’s so existential: you’re using AI in infrastructure, in payments and so on, so there’ll be a need for domestic providers. So, I think there’s a lot of opportunity for India to participate, and once it’s participating, you develop expertise and capability.
Do developing economies face the risk of becoming net importers of AI, and how do you balance this with growth and sovereignty?
I’m very concerned about that. I recently wrote some economics papers, where I argued that the right macroeconomic model should be one where your AI is an import and there may be markups on that. That’s a simple proxy for the fact that you might not get the quality you want. So, I think there are many issues. One concern is that if smaller countries impose complicated legislation or regulation, that might help them in the short term but they might not get as many technology providers. So, countries have to recognise how important it is to have multiple vendors because you don’t want to scare away the competition. In the long run, if the economy becomes dependent on one foreign company, you don’t have any leverage. I do think, as I mentioned earlier, that AI has no value unless it’s implemented.
To reach its potential, it needs to interact with payment systems, health and other things, where countries control the data and control access. So those are leverage points. It’s unlike just a service that you get in a web browser. The foreign vendors are going to realise that they need to cooperate because there are key complements to the technology that are controlled domestically.
From an individual country’s perspective, it’s natural to be concerned about sovereignty. If you are going to use it in mission-critical systems, it can’t be something that can just be turned off. So, if you don’t have some local control, then you can’t use it in payment systems or infrastructure or electricity or anyplace else. If you’re going to integrate it into essential services and infrastructure, then you need some local control, or else you’re taking a risk. That’s a really tough thing because some small countries won’t have options, but if a country is big enough, it makes sense for it to have a strategy for local control of something, whether it’s inference for open models or something else, where they know they can control it and it can’t be turned off, the price can’t be raised and they can’t be exploited.
Can stronger privacy rules coexist with innovation in AI and digital services?
I think they can; it just needs to be done carefully. We’ve had a hard time getting the regulation right in the past. And our regulators should be very careful about capture by powerful incumbents, because privacy regulation can favour incumbents, who already have the data internally.
India has come out with a draft working paper on copyright and AI data. It recommends a blanket licensing regime, and AI developers will have to pay royalties for using copyrighted works…
I’m not familiar with the details of the law, but I would say that it has to balance the ability of startups to get started, when they don’t have a lot of revenue, to be able to use data, with the fact that things like the news industry need to get paid or they won’t produce news. News involves research and development every day. It involves innovation every day. You are doing reporting, that’s innovation. And so your innovation is important, just like the AI company’s innovation is also important, and those have to be balanced.
How disruptive is China’s AI model? This year, we saw DeepSeek challenge the US dominance in AI.
One thing that’s really important about DeepSeek is that it shut down a kind of silly debate: the idea that it was possible to keep this technology secret in some way. As soon as DeepSeek came out, that debate ended, which is great. Open models are really important because you can download them and keep them forever, so nobody can raise the price on you later. That’s really useful, and they can be customised and fine-tuned with your local language and so on. I personally think that open source is really important from a competition perspective and from a sovereignty perspective.
Now, you don’t have to create the open model yourself, because once you have it, you can keep it. But if you want to ensure that in the future you’ll have the latest and greatest technology, that’s where the real question is. It doesn’t matter if it’s a Chinese model or a US model; you can take it, customise it, keep it forever and continue to use it. But you may not get the latest, the next thing; you’ll be dependent on them for that. There are many use cases that have simple needs, and the existing models are already good enough. They don’t need to be any better than they are today.
