
AI hype risks misguided policy choices for India


Overblown promises built on AI hype could push India into flawed policymaking and a waste of public resources.

The evolution of science and technology has reshaped human social history, and the impact of technological change on society has been profound. This has been central to the narratives around computers, genetic engineering, quantum technology and artificial intelligence (AI). Hype over emerging technologies is a way for the technology industry and the policy elite to manufacture reality for the people. It is an attempt to wish technological change into existence by declaring that we are already in its midst.

Narratives of technological inevitability, which present technological change as a foregone conclusion for the human future, constitute a power grab by the technology-industrial complex, one that attempts to preemptively close the debate. Such narratives, and the idea of technocracy, in which governance is seen as an obstacle to innovation, routinely dominate conversations on AI.


For instance, AI developers proclaim that the technology's ability to pass a bar examination is equivalent to AI replacing lawyers. Similar arguments are made for AI replacing healthcare professionals and teachers whenever the technology is shown to carry out a part of these professionals' jobs. Such analogies hype the promise of technology because these professions are much more than any single aspect of the job, least of all passing a standardised test, which is precisely what an AI tool is most likely to ace.

But the human aspects of these jobs, the relational components and the skills to navigate and improvise in complex social situations, are still far from being encoded as straightforward data patterns that can be used to train AI models. In other words, the complexities of human behaviour and interaction are hard to encode in AI-based technology. Hype-based technological narratives are a deliberate framing move to push policy actions and commit public resources.

AI policy in India: Hype and reality

Many of the policy decisions on AI in India seem to be driven by this narrative, with policymakers talking ever so optimistically about the promise of the technology and pledging resources without much proof of concept.

Such optimism about AI in India was corroborated by a recent survey of more than 48,000 people. It found that 76 percent of Indians trust AI, compared with 46 percent of their global counterparts. This reveals a postcolonial belief in technology (and science) as the key driver of development, even as 60 percent of people in emerging economies trust AI systems compared with only 40 percent in developed ones. On the other hand, hype fuels paranoia, such as the fear that AI poses an existential threat to humanity or has the potential to replace human labour completely.

A wide cross-section of academic voices, especially in the critical social sciences, supports this narrative. As a result, outright rejection of the technology is advocated. Ironically, this position also buys into AI's disproportionate promise by inflating its threats.

Since the release of ChatGPT, there has been an avalanche of generative AI models, even as the technology gets better at creating human-like text. However, it is still riddled with flaws such as AI 'hallucinations', when AI models make up facts such as academic references. A 2024 study found that various AI chatbots got academic references wrong between 30 and 90 percent of the time.

In courses that I teach on the interface of Science, Technology and Public Policy, I have witnessed students using generative AI tools to write papers with completely made-up references, citing authors or titles that do not exist, or pairing authors with titles they have never written.

But the bigger problem with generative AI for higher education, reflected in a June 2025 MIT study, is cognitive offloading – students become unable to think and write on their own, have a low threshold for reading, and struggle to demonstrate a grasp of the content.

There is a need for a more nuanced public and policy discourse on AI. What does AI even mean? Generative AI, a technology which generates text, images or videos as output, is very different from predictive AI, which predicts a future outcome based on data.

Both are very different from the more futuristic technology of Artificial General Intelligence, or AI with human-level consciousness. Regulations will have to differ and respond to the kind of AI one is talking about.

Job losses and local impact of AI adoption

Policymaking often suffers from technological presbyopia, which means overestimating the short-term impacts of technology and underestimating the long-term ones.

Instead of buying into the technological hype, there is a need for a much more realistic assessment of technology.  

Some AI applications are more harmful than others. For instance, facial recognition technology, especially in surveillance or criminal justice systems, carries a far heavier price in terms of civil rights violations than a relatively benign AI chatbot used for customer relations.

Building adaptive governance

Dispelling hype-based narratives of AI can help focus attention on the immediate threats it poses rather than on speculative, existential ones. These immediate threats include a systematic decline in students' reading and writing abilities in universities, job losses, and environmental and local threats to communities from the mushrooming of AI data centres globally.

Studies suggest that data centres could account for up to 12 percent of energy demand in the US alone by 2028. These data centres also put enormous pressure on local water supplies, especially in water-stressed regions. On job losses, a recent IIM-Ahmedabad survey found that 68 percent of Indian white-collar jobs are under threat of automation within five years.

As India needs to create about 8 million jobs a year, regulations need to be put in place requiring companies to draw a line on how far to automate, and to choose labour in the age of AI.

A logical conclusion of seeing beyond the utopian or dystopian visions of AI is to look at it as "normal technology", as recently argued by Princeton scholars Arvind Narayanan and Sayash Kapoor.

In this view, AI, like other transformative technologies such as electricity, computers or the internet, will take decades if not centuries to have a substantial impact across different industries. It will also surely encounter socio-technical frictions of political acceptability, economic viability and value acceptance.

This kind of gradual diffusion of innovation requires incremental governance – the gradual and adaptive development of policies in tandem with advancements in technology – and resilience-building in the economy.

This approach stands in contrast to dramatic interventions, such as the mission to develop foundational AI models in a matter of a few months, or big-budget announcements for large-scale adoption of AI across various sectors of the economy, based on speculative views of AI.

Sushant Kumar is Assistant Professor, Jindal School of Government and Public Policy, O.P. Jindal Global University, Sonipat, Haryana. Originally published under Creative Commons by 360info.
