There is an old rule in constitutional law. When the State acquires new powers, citizens must read the fine print. And then read the silences. The newly proposed amendments to the Information Technology Rules, now informally described as the IT Rules 2025, demand precisely that kind of close reading. They promise safety from deepfakes, harmful content, and misinformation. But they also expand the state’s discretion in ways that may outlast the technologies they seek to regulate.
The government’s justification for introducing changes under the IT Rules 2025 is understandable. Deepfakes have moved from parlour tricks to political weapons. False videos can destabilise elections, incite violence, or ruin reputations. No administration can ignore such risks. Yet the challenge is to design deepfake regulation in India without choking the democratic oxygen of online debate.
The amendments under IT Rules 2025 introduce three pillars: mandatory detection and labelling of AI-generated content, enhanced intermediary liability, and swift takedown obligations for harmful content. These are weighty interventions. They must therefore be measured against Articles 14, 19(1)(a), and 21 of the Constitution, and the Supreme Court’s repeated insistence on proportionality, clarity, and due process.
IT Rules 2025 shift the burden to platforms
The first question is simple. Does the State need these broad powers to fight deepfakes? Or does it merely want them?
Under the IT Rules 2025, platforms must deploy best-effort AI tools to detect synthetic media, label it, and remove misleading or harmful content within compressed timelines. What counts as best effort? What is misleading? The draft does not say. Vague mandates may be convenient for regulators, but they are a compliance minefield.
Intermediary liability was envisioned in Section 79 of the IT Act as a shield, not a trapdoor. Safe harbour survives only if platforms meet conditions set by the State. When the conditions are imprecise, the safe harbour becomes illusory. A platform advised by cautious lawyers will over-comply. A platform fearing criminal liability will take down more than necessary.
Is that the regulatory design we want? A system where the incentive is not to protect free speech but to avoid trouble?
The government seeks faster compliance to counter rapid viral spread. That is a legitimate aim. But speed without procedural safeguards is indistinguishable from coercion. The Supreme Court, in Shreya Singhal vs Union of India, struck down Section 66A because it criminalised speech using vague terms. The lesson from that judgment was unambiguous: when the consequence is censorship, ambiguity is unconstitutional.
The problem of state-determined truth
The rules also propose a new mechanism for flagging priority harms, enabling the government to direct platforms to downgrade or remove flagged content. In theory, this promotes safety. In practice, it permits the executive to decide what counts as harmful, misleading, or fake.
A constitutional democracy must distrust easy answers. Independent oversight is essential, but the amendments are silent on it. No appellate body. No judicial review built into the process. No transparency reporting mandate that reveals how many takedown orders were issued, against whom, and on what grounds.
Without these safeguards, online free speech becomes a privilege, not a right.
Deepfakes demand regulation, but what kind?
There is no doubt that deepfakes are a serious threat. Women have been disproportionately targeted through non-consensual synthetic videos. Political deepfakes can sway public opinion in minutes. The harms are real.
But is the solution a wide, discretionary net? A proportional regime would begin with clear definitions. It would differentiate parody from deceit. It would impose criminal liability only for malicious, damaging deepfakes—not for satire or political commentary. The EU and Singapore have taken this route. India’s draft rules, however, rely on executive determinations that may vary with political seasons.
If the test of a good law is that it protects the critic as much as the citizen, the amendments fall short.
IT Rules 2025: Privacy and surveillance risks
The obligation to detect deepfakes is technologically complex. Platforms may need to scan all user-generated content using automated tools. That means expanded surveillance of private communication. The draft rules do not clarify limits on such scanning, retention periods, or audit norms.
Does the Constitution permit mass content scanning in the name of harm prevention?
Article 21 protects privacy. Any intrusion must be proportionate, necessary, and backed by law. The draft rules meet none of these standards. Worse, they allow the government to recommend specific technologies for detection. If these tools are inaccurate or biased—as many AI detectors are—the user pays the price.
A federal question with global implications
There is also a federal dimension. Digital policing is effectively centralised. States have limited visibility into takedown orders that affect public discourse within their territory. The cooperative federalism that the Constitution envisages is absent here.
Then there is the global consequence. India is a digital market of a billion users. When it imposes strict compliance burdens, platforms respond uniformly worldwide. India’s regulatory decisions can therefore shape global norms on synthetic media, privacy, and free expression. A heavy-handed design may export unintended restrictions far beyond our borders.
Due process: The missing principle
Every coercive action under administrative law requires due process. The amendments rely on executive instructions, not adjudication. They are silent on notice to the user, right to be heard, reasons for takedown, or appeal procedures.
If a journalist’s post is removed because it is allegedly “harmful”, what remedy does she have? Must she litigate for months while the news cycle moves on?
Due process is not a luxury. It is the dividing line between regulation and arbitrariness.
Balancing innovation and rights
Deepfake harms must be addressed. Users’ safety is not optional. But constitutional rights are not optional either. The government must resist framing the problem as a false binary: safety versus liberty. A mature regulatory framework ensures both.
What would such a framework look like? Clear statutory definitions. Independent oversight. Sunset clauses for extraordinary powers. Transparency norms for both the State and platforms. Narrowly tailored obligations for deepfake detection. A safe harbour that is meaningful and predictable. User-rights protections, including notice, explanation, and appeal.
Most importantly, Parliament—not delegated legislation—should decide the contours of digital censorship. The IT Act, enacted in 2000, was not drafted for the age of generative AI. Delegating expansive censorship powers to the executive through rules risks converting a narrow gate of regulation into a wide net of control.
A democracy cannot fear its citizens. It must trust them. It must permit dissent, error, satire, exaggeration, and even foolishness. Those are integral to free speech. Deepfake harms should be punished with precision, not vague mandates. Platforms should be held accountable, but without converting them into private censors.
The Constitution offers a simple test: Is the restriction reasonable? The Supreme Court offers another: Is it proportionate? The proposed IT Rules 2025 will pass neither test unless they are narrowed, clarified, and subjected to independent oversight.
India needs a rights-respecting digital regime that confronts deepfake harms without compromising the foundational value of free expression. Protect citizens, yes. Protect the State from criticism, no. A regulatory framework that draws this line sharply will strengthen democracy—not weaken it.

