AI regulation: How do you regulate a technology that is evolving by the day? This is now a central policy problem. Artificial intelligence has moved well beyond experimental systems. It is embedded in economic activity, administration and everyday digital life. Regulation has not kept pace.
India has begun work on a framework. The government has set up the Artificial Intelligence Governance and Economic Group (AIGEG) to design a unified legal structure for firms operating in this space, including developers of language models and chatbot systems.
From fragmented rules to a unified framework
This marks a shift from the current patchwork of data protection rules, IT regulations and sectoral guidelines. The intent is to set rules before risks scale. Execution will be difficult.
The first obstacle is accountability. AI systems operate with autonomy and opacity. When an AI system generates harmful content, produces a flawed medical output or compromises cybersecurity, liability is unclear. The developer, deployer and user all sit in the chain. Fixing responsibility is not straightforward.
Countries in the OECD have relied on voluntary guidelines. Compliance depends on industry behaviour. Enforceability remains weak.
China has taken the opposite route. Its generative AI regulations mandate security reviews, algorithmic transparency and content controls. The state anchors governance. That model sits uneasily with India’s democratic and market structure.

Security risks and system-wide exposure
The urgency is rising. Advanced models can interact with legacy systems, identify vulnerabilities and execute tasks with limited human input. Risks are no longer theoretical. Economic disruption and national security concerns are real. These systems can probe digital defences at scale. This requires a coordinated, government-wide response.
The composition of AIGEG — policymakers, economists, technologists and industry representatives — could help break institutional silos. Whether it delivers usable policy remains an open question.
Regulatory architecture and state capacity
Designing rules is only one part of the problem. India has not settled the question of regulatory architecture. AI cuts across sectors already governed by bodies such as the Ministry of Electronics and Information Technology, the Data Protection Board of India, the Reserve Bank of India and the Securities and Exchange Board of India. These operate within defined mandates. AI systems do not.
The choice is among a centralised regulator, a networked model built on existing institutions, and a hybrid of the two. Each has costs. Fragmentation will dilute accountability. Centralisation will strain capacity. The binding constraint is capability. Effective oversight requires technical skills in model evaluation, algorithmic auditing and cybersecurity, and these are scarce within the state. Without institutional clarity and capacity, rules will remain notional.
Labour market impact and policy gaps
Regulation cannot ignore labour effects. In a labour-abundant economy, AI-driven automation will displace workers across sectors. Policy must address transition costs.
Advanced economies have linked AI strategy with social protection — unemployment insurance, reskilling systems and public investment in digital infrastructure. India cannot replicate this architecture at scale. It will need targeted interventions. Skilling programmes must align with sectors where AI adoption is accelerating — healthcare diagnostics, agriculture and adaptive learning.
Data governance as a core constraint
Data sits at the centre of AI systems. Questions of data quality, bias and privacy are unresolved. AI will strain India’s data protection regime.
Anonymisation does not eliminate bias if the underlying data reflects structural inequalities. The use of personal data for model training raises unresolved questions of consent and ownership. Aligning AI regulation with the data governance regime will be difficult.
No template, only trade-offs
There is no ready template. India will have to design its framework around its economic priorities and institutional capacity. The formation of AIGEG is a starting point, not a solution.
The test will be implementation. AI is already reshaping production, services and governance. Policy choices made now will have long-term effects. Regulation must adapt as the technology evolves. Static law will not suffice.