
IT Rules Amendment: 3-hour takedown rule tests platform moderation


The IT Rules Amendment's tighter takedown timelines may curb irreversible harms, but they also risk opaque censorship and higher entry barriers for smaller platforms.

IT Rules Amendment: The internet has always run faster than rulemaking. Generative AI has widened that gap into a regulatory chasm. Deepfakes, manipulated videos, and non-consensual intimate imagery now travel across platforms in minutes, while grievance systems move at the pace of paperwork. India’s latest amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 are an attempt to force platforms to match the speed of harm with the speed of response.

The Centre has cut the takedown window for unlawful content from 36 hours to three hours after a platform is notified. It has set an even tighter deadline for non-consensual intimate imagery: two hours. The amendments were notified on February 10, 2026, and take effect on February 20, 2026.


This is one of India’s most aggressive regulatory interventions in online content governance. The signal is unambiguous. A 36-hour compliance clock, once defensible, is close to meaningless when the harm is front-loaded. By the time a platform responds, the content has already been copied, re-uploaded, mirrored, and pushed through messaging groups. The reputational and personal damage is often irreversible.

Compressing timelines is also an implicit admission that deepfakes work as events, not posts. They generate a burst of attention, and the burst does the damage. Late takedowns are not remedies. They are paperwork after the injury.

Synthetic media labelling and traceability

The amendments also push the system towards what amounts to traceability governance in the AI era. Platforms are required to enable labelling of synthetically generated or AI-generated material. Where feasible, they must embed persistent metadata or identifiers to help trace origin. This is aimed at services that host, distribute, or generate AI-driven content.
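The rules do not prescribe a technical format for this metadata, so the following is a minimal sketch of what "embed persistent metadata or identifiers" could look like in practice: a synthetic-media label and a traceable identifier written into an image's PNG text chunks with Pillow. The field names (synthetic, generator, origin_id) are invented for illustration, not drawn from the rules.

```python
# Illustrative sketch only: the amendments do not prescribe a format.
# Embeds a synthetic-media label and a persistent identifier as PNG
# text metadata using Pillow. Field names are hypothetical.
import uuid

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def label_synthetic_image(src_path: str, dst_path: str, generator: str) -> str:
    """Attach a synthetic-media label and identifier; dst_path must be a .png."""
    origin_id = str(uuid.uuid4())  # persistent identifier for audit trails

    meta = PngInfo()
    meta.add_text("synthetic", "true")     # the compliance flag
    meta.add_text("generator", generator)  # which service produced the content
    meta.add_text("origin_id", origin_id)  # identifier to help trace origin

    Image.open(src_path).save(dst_path, pnginfo=meta)
    return origin_id


def read_label(path: str) -> dict:
    """Recover the embedded label, e.g. during an investigation."""
    return dict(Image.open(path).text)  # PNG text chunks of the file
```

Plain metadata like this is stripped by screenshots and re-encoding, which is partly why the rules hedge with "where feasible" and why robust, survivable provenance remains an open engineering problem.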

This matters because the regulatory problem is no longer only “harmful content”. It is epistemic confusion at scale. When synthetic audio or video convincingly mimics real people, users struggle to separate the authentic from the fabricated. Disclosure and technical provenance do not solve misinformation. But they can raise the cost of deception and create audit trails for investigation and enforcement.


Over the past year, political leaders, film actors, and ordinary citizens in India have been targeted by deepfake videos and manipulated images that travelled widely before being debunked. Detection tools remain at a disadvantage against rapidly improving generative systems.

IT Rules Amendment: Defining 'synthetically generated'

The amendments narrow the definition of synthetically generated information to cover content that is artificially created or altered in a manner that appears real or authentic. At the same time, they carve out space for good-faith editing and legitimate accessibility uses. The rules stop short of treating routine digital practices as suspect by default.

That restraint is not cosmetic. Earlier regulatory instincts around rigid watermarking and heavy compliance burdens have repeatedly triggered industry pushback. The notified approach appears to have softened some of those edges, including on how labelling must be displayed, while still putting platforms on notice that synthetic media is now a compliance category.

Speed versus accuracy in multilingual moderation

The hardest question is not intent. It is implementation.

Large platforms operate at a scale where millions of posts are uploaded every hour across languages, formats, and jurisdictions. Three-hour takedowns will require a mix of automated detection and expanded human moderation. Even advanced AI tools struggle to distinguish satire, parody, political commentary, and malicious manipulation. India’s multilingual reality amplifies that error rate.
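To make the timing pressure concrete, here is a minimal sketch of how a platform might triage incoming notices against the two legal clocks, assuming just two categories: non-consensual intimate imagery (two hours) and other notified unlawful content (three hours). The category names and queue design are illustrative, not taken from the rules; the hard part in reality is the classification step feeding such a queue, not the queue itself.

```python
# Hypothetical deadline-based triage for takedown notices.
import heapq
from datetime import datetime, timedelta, timezone

TAKEDOWN_WINDOWS = {
    "ncii": timedelta(hours=2),      # non-consensual intimate imagery
    "unlawful": timedelta(hours=3),  # other notified unlawful content
}


class TakedownQueue:
    """Orders reported content by compliance deadline, earliest first."""

    def __init__(self):
        self._heap = []

    def report(self, content_id: str, category: str) -> datetime:
        notified_at = datetime.now(timezone.utc)
        deadline = notified_at + TAKEDOWN_WINDOWS[category]
        heapq.heappush(self._heap, (deadline, content_id, category))
        return deadline

    def next_case(self):
        """Pop the report whose legal clock expires soonest."""
        return heapq.heappop(self._heap) if self._heap else None


queue = TakedownQueue()
queue.report("post-123", "unlawful")  # due in 3 hours
queue.report("video-456", "ncii")     # due in 2 hours; served first
print(queue.next_case())
```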

Speed compels caution. Caution can become over-removal. Over-removal can chill legitimate speech. The rules aim to prevent irreversible harm, but the same timelines can incentivise opaque moderation and defensive censorship, especially when “unlawful” content requires context and legal assessment.

Lessons from the Digital Services Act

This tension is not uniquely Indian. Europe’s Digital Services Act imposes strong obligations on platforms, but it is paired with transparency requirements, reporting, and structured oversight mechanisms designed to reduce arbitrary decision-making. India’s amendments will need similar guardrails if rapid takedowns are not to become a black box.

The credibility test will be procedural. Are users informed when content is removed? Is there a workable appeals path? Are orders and outcomes reported in a way that enables scrutiny? Without those disciplines, “speed” becomes a substitute for justification.
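Those procedural questions imply a minimum audit record behind every removal. The sketch below shows one possible shape for such a record: what was removed, under which clock, whether the uploader was told, and where any appeal stands. All field names are invented for illustration; aggregating records like these is what makes DSA-style transparency reporting possible.

```python
# Hypothetical minimum audit record per takedown; field names invented.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class TakedownRecord:
    content_id: str
    category: str                   # e.g. "ncii" or "unlawful"
    notified_at: datetime           # when the platform received notice
    removed_at: Optional[datetime]  # None if still pending
    user_informed: bool = False     # was the uploader notified?
    appeal_filed: bool = False
    appeal_outcome: Optional[str] = None
    legal_basis: str = ""           # the provision cited in the order

    def met_deadline(self, window_hours: int) -> Optional[bool]:
        """Whether removal happened inside the legal window."""
        if self.removed_at is None:
            return None
        elapsed = (self.removed_at - self.notified_at).total_seconds()
        return elapsed <= window_hours * 3600
```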


Compliance asymmetry and competition effects

There is also a market-structure problem hiding inside a speech-regulation debate. Global technology firms can deploy moderation infrastructure at scale. Smaller domestic intermediaries may not. Meeting a three-hour standard across categories will require engineering, staffing, and legal capacity that new entrants often lack.

If compliance costs become fixed costs, the rules can raise entry barriers and tilt the market towards incumbents. That is not a side issue in a country building a large digital ecosystem. Regulation should not unintentionally distort competition while trying to reduce harm.

Deepfakes create urgency beyond misinformation

Generative AI changes the risk profile. Deepfakes are not only misinformation. They enable fraud, extortion, identity abuse, and electoral manipulation. When synthetic content can impersonate real individuals with plausible voice and likeness, the harm spills into financial and political systems. Regulatory urgency is not hard to justify.

But urgency also tempts overreach. Accelerated takedowns can widen executive discretion over online speech. India is not alone in grappling with how to regulate generative AI without stifling innovation. The question is whether India's model can deliver speed without sacrificing accountability.

Outcomes will depend on the practical capacity of platforms, the quality of enforcement, and how courts interpret disputes that will inevitably arise once removals are measured in hours, not days.

