India could lead a third way in AI governance

India faces complex choices in navigating between China's state-led control and the United States' chaotic, fragmented approach to AI regulation.

AI governance in India: In what has become a geo-technological rat race, every country with skilled human capital and resources is seeking to expand its potential in the fast-evolving landscape of Artificial Intelligence (AI). While this race comes with its own fault lines, such as ethical dilemmas, regulatory gaps, widening inequalities and the risk of misuse or unintended consequences, countries such as China, the US and India appear to have entered an unprecedented AI arms race.

The UN Secretary-General has warned that the threat from AI is on par with that of nuclear war, expressing concern about the technology's potential to decide humanity's fate. Such warnings underscore the need for a shared, ethical global approach, backed by international partners, to curb irresponsible uses of AI through multilateral cooperation.

The Trump administration has given high-tech companies unprecedented access to government data sets, while researchers across the world generate complex new AI models without thinking through their potential effects.


This is happening against the backdrop of an extremely fragmented regulatory environment in the US, which continues to rely on state laws and companies' internal policing, in contrast to China's tightly centralised, state-led regulatory model and the EU's emerging but still contested risk-based legal framework.

This fragmentation raises questions about how big tech companies will balance moral and ethical standards and manage risks amid pressing private competition.

Even as China and the US represent two polar opposites on AI governance, India’s position, which appears to be evolving gradually, could be based on competitive innovation and credible and lasting safeguards.

India has shown a tendency to impose reactive bans rather than systemic oversight, reflecting the absence of a consistent framework to regulate AI. While it has a choice between two approaches — China’s control and America’s chaos — it must pursue its own path.  


AI governance environment

China is a close global competitor of the US, with its open-source AI models accounting for 30 percent of global use. Amid this technology race, China has been steadily expanding its bureaucratic know-how and regulatory capacity, creating a model in which the state is the primary risk-taker.

While the US now aims to create a national policy framework for AI, China has already developed the administrative skill and infrastructural capacity required to implement such policy at the national level.

The US, on the other hand, may struggle to do the same as it staggers toward the regulatory shore amid discordant policies across its 50 states.

China has adopted a state-led, preventive, compliance-first approach, in contrast to the regulate-later approach the US had practised before its executive order on a national policy framework was announced.

In China, AI systems are subject to pre-deployment scrutiny, algorithmic registration, traceability requirements and clear lines of liability. The rules bar excessive price discrimination, require labelling of synthetically generated content and demand that outputs be true and accurate. Together, these measures embed AI governance directly into China's administrative priorities, ensuring that systems are controlled, attributable and aligned with state-defined objectives.

This is in line with China’s broader industrial policy, aiming to scale innovation while maintaining political stability and social control. 


EU as a norm-setter

The European Union has sought to position itself as a global norm-setter through a risk-based, rights-centric framework. The EU's AI Act does not seek blanket control but categorises AI applications by risk, imposing strict obligations only on high-risk uses such as biometric surveillance, credit scoring or welfare allocation.

This model reflects Europe’s emphasis on human rights, data protection and legal accountability, even at the cost of slower innovation. The US model, on the other hand, is based on a market-driven approach which relies on voluntary standards and prioritises innovation over preventive regulation.  

Complementing the EU AI Act, the AI Liability Directive in EU clarifies accountability for harm caused by AI, and the AI Continent Action Plan promotes innovation, ethics and cross-border collaboration.  

This framework seeks to balance innovation with regulation by embedding market development within a clear legal architecture rather than leaving it to unhindered competition. 

This ensures that only high-risk AI applications face stringent compliance, allowing low- and minimal-risk systems to innovate with limited regulatory burden.  

AI governance in Global South

The Global South is in a precarious position in the AI era, largely confined to being norm-takers rather than norm-makers. Lacking the resources to shape global standards, many developing nations such as Brazil, South Africa, Indonesia and Kenya rely on fragmented rules or imported regulatory templates.  

This often results in a patchwork of data protection laws and ethical guidelines, insufficient for the scale of modern AI. They risk becoming unregulated testing grounds, absorbing harms without shaping global standards.  

India continues to rely on the Information Technology (IT) Act of 2000, the Digital Personal Data Protection (DPDP) Act of 2023 and its 2025 Rules, as well as sectoral advisories. These instruments are insufficient because they focus narrowly on data privacy and the liability of online intermediaries for user-generated content.

They do not address core Generative AI risks like model safety, algorithmic bias, lack of explainability or the economic impact of autonomous decision-making systems. 

Framing India's options as a choice between China's control-heavy model and the US' deregulated, market-first approach is analytically flawed and strategically limiting.

China’s framework, while effective in enforcing compliance, risks excessive centralisation of power, potentially stifling independent research, dissent and private innovation.  

The US model, conversely, has enabled rapid technological advancement but at the cost of platform monopolisation, weak accountability and the global export of algorithmic harms, from misinformation to labour displacement. 

For India, a large, diverse democracy with uneven institutional capacity, neither extreme is sustainable.  

Instead, India must pursue a third path of regulated openness, combining innovation with credible safeguards. This would entail a risk-tiered regulatory framework, drawing from the EU model but adapted to Indian realities, where stringent ex-ante rules apply only to high-risk domains such as elections, biometric surveillance and credit or welfare allocation.  

Low-risk applications should be lightly regulated to encourage experimentation. Crucially, India should prioritise public transparency and explainability obligations, rather than content control, to safeguard democratic accountability without chilling innovation.  

India’s true strategic advantage lies in its Digital Public Infrastructure (DPI). Platforms such as Aadhaar, UPI and the overall India Stack do not merely digitise services; they provide a scaled, inclusive testbed for AI systems that is globally unparalleled.  

The commitment to #AIforAll, rooted in leveraging DPIs for public good, shifts the focus from purely commercial innovation or state control to population-scale, inclusive deployment. This unique capability allows India to champion a public good-oriented global AI framework, thereby demonstrating how AI can serve developmental priorities and attract support from the Global South. 

AI regulation is fundamentally about who controls digital power over data, algorithms, markets and citizens. If India delays regulation in the name of innovation, it risks ceding control to foreign platforms and external rule-makers.  

China's push for norm-setting leadership through initiatives like the World Artificial Intelligence Cooperation Organization (WAICO) places India at a strategic crossroads, forcing it to choose between remaining a passive rule-taker in emerging global AI standards or actively shaping norms as an increasingly ambitious technological leader. India should neither reflexively oppose such efforts nor uncritically align with them; instead, it must engage selectively, recognising that global AI governance is still fluid.

Participation in such forums allows India to influence standards on transparency, safety and accountability, especially for the Global South, where Chinese and Western AI systems are being rapidly deployed. The most consequential technology of our lifetime presents grave risks for humanity if left unregulated. There is now a pressing need to engage with, but not cede control to, global initiatives to regulate AI.

Deepanshu Mohan is Professor of Economics and Dean, IDEAS, Office of Inter-Disciplinary Studies, O.P. Jindal Global University, Sonipat, Haryana. Saksham Raj, an undergraduate student at the Jindal Global Law School, O.P. Jindal Global University, Sonipat, Haryana, contributed with research for this article. Originally published under Creative Commons by 360info™
