Who owns offline arts like textile art or truck art? Imagine an AI system training on a picture taken by a photographer. The photographer might own the picture because he captured it, but does he also own the ‘art’ he photographed? More importantly, irrespective of the answer, can he claim royalties from the proposed AI-copyright body when AI systems use “his picture” to train their Large Language Models (LLMs)? This remains a legal grey area, especially when the cultural or traditional artwork he captured may not legally belong to him.
Exploring this idea, one of the speakers at the discussion organised by Software Freedom Law Centre, India (SFLC) at IIIT Hyderabad said, “First, how will you identify who’s free to work with you? Like, we’re talking about how royalties will be calculated, or it may come down to, like, Rs 2 per person or something. But you’re forgetting that creative work in India is [also] made by societies that do not have online representation.”
The speaker was referring to the royalties that original creators can claim from the Copyright Royalties Collective for AI Training (CRCAT), a body proposed in the Department for Promotion of Industry and Internal Trade’s (DPIIT) working paper to license data and distribute royalties as compensation for data that developers use for AI-related purposes, such as training LLMs.
On 31st January, IIIT Hyderabad hosted this discussion under the Chatham House Rule. This news report includes only the information shared during the discussion, without identifying the speakers or attendees.
How would they identify the ‘original’ creator?
Attribution is already difficult in traditional arts. To illustrate how the proposed body and process could leave out a significant portion of society, the speaker drew a parallel with popular artists who earn a living through royalties, such as Arijit Singh, who recently quit playback singing. The speaker argued that royalty systems currently favour well-known artists, noting, “If you are that famous, if you are in that position of power, the system works for you.”
For context, under the proposed CRCAT process, collective management organisations (CMOs) would be responsible for distributing royalties to the original rights holders whose works AI data trainers list as having been used for AI training.
“The creator – whose village or identity remains unknown – will never have access to this money. The AI has been trained on their work; they get no remuneration or representation. Where is all of this money? It sits in the CRCAT’s welfare fund, supposedly for 3 years until it is claimed. If it’s not claimed, then it goes into this government scheme,” she commented.
“So essentially, you’re first ignoring this large part of the Indian creative sector whilst also allowing AI companies to build up,” a speaker at the discussion said.
The Limits of Fair Dealing in AI Training
Returning to the photographer’s example, one could argue that the photograph constitutes a derivative work and is eligible for its own copyright. However, the question of whether its use in AI training falls under the fair dealing provision of the Copyright Act remains contested. In India, this provision allows limited use of copyrighted works without permission for purposes such as private study, criticism, review, or news reporting. Several AI companies invoke fair dealing as their defence, arguing that their models are merely “learning” from data rather than copying or reproducing it entirely.
Referring to such an argument, one of the speakers questioned, “When we really talk about fair usage, by whom? Are we saying the model is using it fairly through learning, or are we saying OpenAI or other companies are using it fairly? Is it an individual that we are applying fair usage to, or is it a company that we are applying fair usage to? Why are we treating AI as a human?”
“It has not been proven that AI is human; we are simply saying it is learning, and they are trying to escape paying out by simply saying that my model is ‘learning’. We have assumed that just because you know advertising dollars have gone into it,” he said.
Explaining further, he said, “These [data ingested by AI systems] are there in weights; enough research has gone into it and proves that they just store these weights; the knowledge is compressed into weights, and then it is regurgitated in a different format.” Weights in AI refer to the numerical values that determine the strength and direction of connections between neurons in artificial neural networks.
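To make the speaker’s point concrete, here is a minimal, hypothetical sketch (not drawn from any real LLM) of what a “weight” is: a model stores numerical values learned during training, not the training examples themselves, and those values shape every later output.

```python
# Toy illustration of "weights" in a neural network: the model does not
# store training examples verbatim; it stores numerical values that encode
# patterns extracted from the data. All numbers here are invented.

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs plus a bias."""
    return sum(x * w for x, w in zip(inputs, weights)) + bias

# Hypothetical weights "learned" during training; the sign and magnitude
# of each value set the strength and direction of each connection.
weights = [0.8, -0.5, 0.3]
bias = 0.1

# New input flows through the same stored weights -- the training data
# itself is never looked up, only this compressed numerical representation.
output = neuron([1.0, 2.0, 3.0], weights, bias)
print(round(output, 2))  # 1.0*0.8 + 2.0*(-0.5) + 3.0*0.3 + 0.1 = 0.8
```

In a real LLM the same idea plays out across billions of such weights, which is why the speakers describe training data as “compressed into weights” and later “regurgitated in a different format.”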
‘Small Publications Need a Central Body to Protect Them’
AI companies may argue fair dealing when original copyright owners contest their use, as in the OpenAI vs. ANI case in India and elsewhere. Speakers also discussed how small publications often lack the resources to challenge big tech.
“ANI can fight a case against OpenAI because it has the resources to do so. But we will not be able to fight such a battle with OpenAI, Perplexity, or any other platform, right? So, in that sense, I think it makes sense to have a central body that governs this for us and handles this part of the job. It’s a good idea to have the government take care of it at a central level,” said another speaker.
Additionally, without directly mentioning the partnership deals AI companies have with news publications, another speaker wondered whether AI companies are using news organisations as potential proxies for data collection.
He further cautioned media publication houses, saying, “I think there is a threat looming over us, and we have to find a way to monetise our content in a fair way that takes care of everyone, because large tech companies are just soaking up everything. They are vacuuming up everything we have created over the years.”
The Philosophical Dilemmas of FOSS and Forced Royalties
Addressing the Free and Open-Source Software (FOSS) philosophy, which licenses software under terms granting users the rights to “run the program for any purpose, to study it, to modify it, and to redistribute it in original or modified form”, she said the absence of an opt-out mechanism for AI training pushes the FOSS community into an existential crisis because it forces them to claim compensation.
“I [a FOSS developer] didn’t want to make money off this, but now you’re forcing me to claim my remuneration. Where does that leave us [the FOSS community], philosophically, as a community? It sort of negates the history of how we’ve come to this,” she said, highlighting the dilemma that FOSS developers face.
‘What Problem Is the Government Trying to Solve?’
“Is it [working paper] a solution to what? The problem has not been clearly defined by DPIIT or the committee. Copyright and AI are issues, but why are you [govt] balancing them? Why are you balancing these two things? Like, what is the connection between copyright law and AI regulation? It hasn’t been elucidated; there’s no evidence, and there is no data. The problem statement is missing,” pointed out another speaker.
Pointing to the likely disproportionate beneficiaries of the DPIIT’s draft working paper, i.e., AI companies, the speaker commented, “AI and tech growth is not necessarily equated to Gen AI growth, and vice versa.”
“I’ll be very blunt. I think the committee report is quite impractical and borderline nonsensical in terms of its, even if you take it all as it is, it’s not, you can’t implement it,” he concluded.
NOTE: The last date to submit the comments to the Ministry of Commerce and Industry is February 6th, 2026. You can email your comments to [email protected].
Curated by James Chen