AI regulation could hurt India's innovation edge

India is once again debating whether emerging technologies need a new regulatory statute. The immediate trigger is the misuse of artificial intelligence tools to generate obscene, non-consensual images of women on social media platforms. The response from the Ministry of Electronics and Information Technology (MeitY), which has sought explanations from platforms and signalled the possibility of a new AI-specific law, reflects genuine concern. The harm is real. The outrage is justified.
But policy must begin with an accurate diagnosis. India already has a dense web of technology, criminal, and intermediary laws that address precisely such misuse. The problem lies not in legislative absence, but in uneven enforcement. A premature attempt to regulate AI as a technology risks constraining innovation in a country that is still building depth in advanced digital systems.
The Grok episode was a compliance failure
The recent notice sent by the IT ministry to X over the misuse of its Grok AI system makes this distinction clear. The ministry cited violations of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, and provisions of the Bharatiya Nyaya Sanhita, 2023, dealing with obscene and sexually explicit content. These are not legal blind spots. They are settled obligations.
What failed was not legal design but platform discipline. Guardrails existed. They were not applied with seriousness. Prompts were processed without restraint. Outputs were displayed without review. Complaint mechanisms moved too slowly for harm that spreads instantly. Treating this as evidence that “AI needs regulation” misreads the failure. The harder task is to compel compliance through audits, penalties, and the credible threat of losing safe-harbour protection.
Tech-specific regulation risks freezing innovation
Artificial intelligence is not a discrete sector that can be ring-fenced by statute. It is a general-purpose capability, embedded across healthcare diagnostics, climate modelling, financial risk assessment, logistics optimisation, and language translation. Regulating it through ex ante permissions or broad content controls would raise costs across the economy.
India’s AI ecosystem remains thin compared to global leaders. Domestic startups operate with limited capital, small legal teams, and short runways. Compliance-heavy regulation would not slow global technology firms. It would slow Indian ones. Large incumbents absorb friction. Early-stage firms do not survive it.
India’s software services boom offers a reminder. It was enabled not by early regulation but by clarity, low entry barriers, and policy restraint. AI sits at a similar inflection point. Over-regulation now would lock India into the role of downstream adopter rather than upstream innovator.
Global divergence in AI regulation offers a lesson
International approaches to AI regulation are pulling in different directions. The European Union’s AI Act reflects a precautionary instinct, with risk classifications, compliance obligations, and enforcement timelines that assume deep regulatory capacity. Several European startups have already warned that the framework favours scale over experimentation.
The United States has moved the other way. The Trump administration’s December 2025 executive order dismantled state-level AI rules in favour of federal primacy and post-hoc enforcement. The bet is explicit: punish harm when it occurs, but do not slow deployment in advance.
India should copy neither model. The EU approach presumes an enforcement depth India does not yet possess. The US approach relies on litigation capacity that India lacks at scale. What India does have is a functioning intermediary liability regime and a criminal law architecture that already covers most forms of AI misuse, if enforced with intent.
Indian laws already address AI misuse
The claim that AI operates in a legal vacuum does not survive scrutiny. Non-consensual deepfakes violate criminal law. Obscene content breaches statutory prohibitions. Platform liability is addressed under the IT Rules. Consumer harm is covered by the Consumer Protection Act. None of these provisions depends on whether the harm is caused by AI or by older software tools.
The IT Rules already require intermediaries to deploy reasonable safeguards, respond to complaints within defined timelines, and maintain traceability in serious cases. If these requirements are proving inadequate, the answer lies in tightening enforcement standards, not in drafting an entirely new AI statute that will take years to interpret and litigate.
India has seen this pattern before. Years spent debating comprehensive data protection legislation did little to improve day-to-day privacy enforcement. Legislative ambition ran ahead of administrative capacity.
Enforcement matters more than legislative ambition
AI regulation without enforcement is performance, not governance. India’s constraint is not legal authority but institutional bandwidth. Algorithmic audits, real-time content moderation oversight, and swift penalties require trained staff, technical tools, and coordination across agencies.
Targeted measures would deliver faster results. Mandatory third-party audits for high-risk AI deployments. Public disclosure of platform safety metrics. Escalation mechanisms for victims that operate in hours, not weeks. Financial penalties that bite. Withdrawal of safe-harbour protection for repeat failures. These levers already exist in law. They are simply not used consistently.
Over-regulation risks another unintended outcome. Compliance-heavy regimes often entrench large global firms while shrinking space for domestic challengers. If the goal is to discipline Big Tech, enforcement works better than new rulebooks.
Public anger over AI misuse is justified. Policy haste is not. India does not need to rush into technology-specific regulation that constrains innovation while leaving enforcement weak. It needs to apply existing law with seriousness, speed, and consequence.
The choice is not between control and chaos. It is between symbolic lawmaking and credible governance. For a country that wants to build, not merely consume, advanced technologies, the path forward is clear. Regulate harm. Enforce responsibility. Leave innovation room to breathe.

