EU’s AI Act shows the way; India must move to playbooks

EU AI Act
Europe’s phased AI Act ties duties to harm; India needs inventories, conformity checks and audit trails before algorithms decide loans, jobs or benefits.

Grok lavished praise on Hitler this month; Meta publicly refused to sign the EU’s new code of practice for general‑purpose AI; and Europe, worried about losing ground to the United States and China, rushed out detailed rules to force transparency and curb abuse. These incidents and responses frame a simple truth: the EU AI Act, for all its compromises, offers the clearest template yet for governing advanced models.

The Act does not outlaw AI; it classifies uses by risk. Prohibited systems — social scoring, emotion recognition in classrooms or workplaces, untargeted facial scraping — must disappear by February 2, 2025. General‑purpose AI (GPAI) obligations kick in on August 2, 2025. Most requirements for high‑risk systems arrive on August 2, 2026, with sectoral safety components getting an extra year, to August 2, 2027. This pacing reflects the EU’s preference for phased enforcement over shock therapy. 

EU AI Act — Transparency with teeth

Providers of GPAI models must disclose training data summaries, respect EU copyright law and share technical documentation with downstream developers. Those whose models cross a high‑impact threshold — defined partly by the compute used in training — must submit to model evaluations, red‑teaming, and cybersecurity reporting. Penalties scale with risk: up to 7% of global turnover for prohibited uses, 3% for failures in high‑risk compliance, and 1% for supplying misleading information to regulators. This makes ‘trustworthy AI’ more than a slogan: it is a cost line in every boardroom budget.

Generative AI

The Act follows the GDPR playbook: location is irrelevant if EU residents are affected. Any provider that puts a system on the EU market, and any deployer whose output is used in the EU, falls under its scope. Compliance obligations cascade through the value chain: developers, importers, distributors, and deployers all carry duties — from record‑keeping to human oversight. This is regulation as supply‑chain discipline, and it will radiate beyond Europe as companies retool global practices to avoid duplicating compliance regimes. 

Critics say Europe regulates first and thinks about innovation later. Yet the law mandates regulatory sandboxes — controlled environments where firms, especially SMEs, can test systems under supervision — by August 2, 2026. Real‑world testing is permitted with safeguards and sunset clauses. The newly created European AI Office will coordinate standards, approve codes of practice, and update systemic‑risk criteria as models evolve. The signal is clear: experiment, but under watchful eyes.

Rest of the world relies on a patchwork

The United States is retreating from a proposed moratorium on state AI laws and groping toward sectoral rules; the United Kingdom is courting pro‑innovation, light‑touch oversight; India has notified the Digital Personal Data Protection Act (DPDP) but key rules are still rolling out. Meanwhile, New Delhi has earmarked ₹10,300 crore under the IndiaAI Mission to build massive GPU capacity — hardware without a matching governance stack. A fragmented global regulatory field will hand de facto standard‑setting power to the most comprehensive regime — in this case, Brussels.

India’s instinct has been to stay light on AI, fearing regulation will throttle innovation. The EU model shows a better path: regulate use cases, not technology labels; calibrate obligations to risk; and give industry safe harbours (codes of practice) that lower compliance costs. India should adopt three immediate steps: (1) mandate public inventories of AI systems used by government and regulated sectors; (2) require risk‑based conformity assessments before deployment in finance, health, mobility and welfare; and (3) insist on audit trails and explainability for any algorithm that decides eligibility, pricing or policing. 

Sector regulators such as the RBI, IRDAI, SEBI, and TRAI must embed an “AI duty of care” into licensing conditions, just as prudential norms or consumer‑protection clauses are embedded today. Fundamental‑rights impact assessments should be standard for high‑risk deployments, and AI literacy should be mandatory for operators handling sensitive systems. Without such horizontal obligations, India will lurch from scandal to scandal, patching holes after the fact.

The EU’s Act will not solve every problem—deepfakes will slip through labels; systemic risks may outpace thresholds; voluntary codes can become fig leaves. But it turns airy principles into enforceable duties. India’s opportunity is to adapt the architecture: a national AI office that convenes regulators, publishes risk taxonomies, issues sandbox approvals, and updates standards with industry and civil society at the table. The cost of inaction is not just consumer harm—it is strategic irrelevance in shaping the rules of the Fourth Industrial Revolution.