Our AI futures cannot be things we merely inherit. They must be things we collectively make.
In the life cycle of all digital technologies, there comes a moment when it feels like the tech is everything, everywhere, all at once. It is an interesting moment to witness: One minute, it is still an emerging technology, a technology of the future, something we are still exploring and trying to understand. The next minute, it is in everything, everyone is using it, and we can’t remember a time when our lives were not completely bound and intertwined with this technology.
For those of us who have lived through digital emergences and emergencies, this should be déjà vu. We have been here before: With the rise of the World Wide Web, the spread of discussion boards and chat rooms, the proliferation of search engines, the flurry of social media platforms, the platformisation of our lives, the datafication of our everyday experiences, and now, here again, with Artificial Intelligence (AI). And for many of us in the Global South, we have also seen how these technologies are often imagined elsewhere and imported into our futures, scripted through decisions made in boardrooms and policy circles far away from where their impact will be felt most deeply.
It has been fun, in a poke-your-finger-in-a-socket kind of way, to look at everything around us and try to remember when it was not AI. It feels like a difficult exercise, because AI technologies have long been in the making. Automated decision-making, predictive systems, assistive technologies, and the large-scale data crunching that drives algorithmic curation of our online lives have been here for years. However, ever since Generative AI applications, which consolidate all of those information-processing and cognitive practices into one bundle, were unleashed upon the world, it feels like everything has always been AI. It feels like there is no future except the one charted for us by a handful of global tech companies who define freedom as the freedom to choose from what they have to sell.
As we march boldly into the new year, there is a certain resignation that Generative AI (and the relentless quest for General AI) will replace human labour, jobs, functions, creativity, connections, and conversations.
Twenty per cent of all YouTube videos are now AI-generated. Teachers complain that students don’t write, they prompt; students complain that teachers don’t grade, they just run assignments through LLMs.
Companies worry they don’t have enough skilled people to stay ahead of the AI curve; people worry their jobs will be replaced by machines.
Governments despair at what weaponised AI is doing to democratic systems and public trust; citizens are losing faith in state information and creating new structures of governance.
Even when spectacular breakthroughs in science, medicine, and research emerge from AI, the environmental costs, the weaponisation of misinformation, the erosion of care systems, and the degradation of planetary ecologies make it difficult to simply celebrate them. The bells that toll for AI toll for you and me. It feels like the world is heading into spiralling despair, where only a handful will thrive and the most vulnerable will bear the brunt.
At the symbolic birth of a new year, it is customary to offer hope. But digital hope — promises of egalitarianism, equality, freedom, care, equity, belonging, and coming together — has proven fickle in recent years. So instead, I offer something that might take us further than hope: A challenge.
Let us refuse to accept these scripted AI futures as inevitable. Let us remember, collectively, that AI technologies might be unsettling, but they are not settled. They are still in the making. Let us understand that the AI we are offered is not the “natural” state of the technology, but merely one expression of it, shaped by extractive, profit-driven oligarchies that control its development. Our task is to realise that other AIs are possible: technologies with different applications, intentions, and futures, especially if we decentralise ownership and build digital commons that privilege community data trusts, cooperative infrastructures, public-interest platforms, and spaces where marginalised communities have authority, not just access. We can invite new stakeholders to imagine and shape these technologies differently.
The writer is professor of Global Media at the Chinese University of Hong Kong and faculty associate at the Berkman Klein Centre for Internet and Society, Harvard University