Deepfake regulation must not chill free speech

Deepfake regulation must distinguish fraud, harassment and satire instead of treating all synthetic content alike.

On 10 February 2026, the Ministry of Electronics and Information Technology notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, creating India’s first dedicated framework for synthetically generated audio-visual information, or SGI. In common usage, this means deepfakes.

A deepfake is image, audio or video content generated or altered by artificial intelligence to resemble real persons, objects, places, entities or events, and to make false content appear authentic. Unlike conventional manipulation such as image editing or altered footage, deepfakes use AI systems trained through deep learning. These systems can recognise patterns, generate text, images, audio and video, and improve over time. The result is realistic content that is difficult to identify as synthetic.


The risk is no longer speculative. In one study, participants believed an AI voice clone to be a human speaker in 80% of cases. Rob Greig, CIO of Arup, the UK-based firm that was recently hit by deepfake fraud, captured the problem well: “Audio and visual cues are very important to us as humans, and these technologies are playing on that.”

Deepfakes have existed since 2017. What changed after 2022 was the availability of low-cost and often free multimodal generative AI tools. The combination of scale, anonymity and weak accountability has increased the risk of fraud, harassment, impersonation and misinformation.

The 2026 IT Amendment Rules are therefore a necessary step. They shift India’s response from reactive takedown to ex-ante regulation. But the rules also leave important questions unresolved: the boundary between harmful deception and protected expression, the burden placed on platforms, and the technical limits of labelling and provenance.

Deepfake regulation and free speech

The rules exempt SGI that involves routine or good-faith creation or editing, provided it does not create false content or materially alter, distort or misrepresent the underlying information. This exception was added after stakeholder comments. It brings some nuance, but leaves too much to platform judgment and executive discretion.

The phrase “routine or good-faith” is subjective. The rules do not clearly define user intent, the threshold for material alteration, or the link between violation and harm. This matters most for satire, parody, political commentary and creative expression. In these areas, vague rules can produce a chilling effect on speech protected under Article 19(1)(a) of the Constitution.


The Supreme Court’s ruling in Shreya Singhal v Union of India remains relevant. Restrictions on speech cannot be vague, overbroad or dependent on excessive administrative discretion. They must also be tied to the grounds in Article 19(2), including sovereignty and integrity of India, security of the State, public order, decency or morality, defamation, contempt of court, or incitement to an offence.


The distinction between SGI and user-created information is also weak. The rules draw from the 254th report of the Parliamentary Standing Committee on Home Affairs. But SGI is not machine output alone. Human input shapes prompts, context, accuracy and creative direction. In that sense, SGI is also user-created. From a civil rights perspective, it should not be regulated more restrictively than conventionally edited audio-visual content merely because AI was used.

A better approach would be risk-based classification. Deepfakes that violate criminal law, such as offences under the Bharatiya Nyaya Sanhita, 2023, the Protection of Children from Sexual Offences Act, 2012, or the Scheduled Castes and Scheduled Tribes (Prevention of Atrocities) Act, 1989, require one level of response. Deepfakes used for financial fraud or market manipulation may require action under Reserve Bank of India or Securities and Exchange Board of India frameworks. Platform-level harms such as cyberbullying, harassment and misinformation require another standard. One regulatory bucket cannot cover all harms.

Platform liability and deepfake compliance

The rules require intermediaries to deploy technical measures to prevent users from creating deepfakes that violate existing law. Deepfake content that is not illegal must still be prominently labelled so that it can be immediately identified as SGI. It must also carry permanent metadata, to the extent technically feasible, to support traceability when content is re-shared or re-uploaded.
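To make the labelling and traceability obligation concrete, here is a minimal sketch of how a platform might attach a tamper-evident SGI label to content. The approach (an HMAC signature over the content hash and metadata), the key, and all field names are illustrative assumptions, not anything the 2026 Rules prescribe; real provenance systems would likely use public-key signatures and an open standard.

```python
import hashlib
import hmac
import json

# Hypothetical platform signing key -- key management is out of scope here.
PLATFORM_KEY = b"example-signing-key"

def label_sgi(content: bytes, metadata: dict) -> dict:
    """Attach a tamper-evident SGI label: hash the content, then sign
    the hash plus metadata so later edits become detectable."""
    record = {
        "sgi": True,  # prominent flag identifying the content as SGI
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "metadata": metadata,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_label(content: bytes, record: dict) -> bool:
    """Check both that the label is authentic and that the content
    it describes has not been modified since labelling."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(record.get("signature", ""), expected)
            and record["content_sha256"] == hashlib.sha256(content).hexdigest())

video = b"\x00fake-video-bytes"
label = label_sgi(video, {"generator": "example-model", "created": "2026-02-10"})
assert verify_label(video, label)             # intact content verifies
assert not verify_label(video + b"x", label)  # any alteration is detected
```

A label of this kind supports traceability only while the record travels with the content; the design choice of a shared secret (rather than, say, certificate-based signing) is purely for brevity.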

If an intermediary becomes aware of a labelling violation, it must act quickly. This may include disabling access to or removing the content, suspending or terminating the user’s account, identifying the user, disclosing that identity to the victim who complained, and reporting any offence to the authorities.

The rules correctly recognise that generative AI creates new compliance problems. But they also expand the regulatory role of private technology companies. Platforms are being asked not only to host or transmit content, but also to detect, classify, label, remove, trace and report it.

The scope of covered intermediaries is broad. It includes any service that enables, permits or facilitates the creation, generation, modification, alteration, publication, transmission, sharing or dissemination of SGI. This captures the full range of generative AI services. That may be useful, but the rules do not distinguish between passive intermediaries that merely host or transmit content and active intermediaries that enable generation or modification.

That distinction matters. A platform that generates a synthetic video has more visibility and control than a messaging service through which the video is forwarded. A graded allocation of responsibility would be more workable. It should be based on visibility, technical control and the role played in the SGI chain.

India will also need interoperable technical standards. These must account for privacy, intellectual property rights and the risk that provenance systems become surveillance tools. Traceability cannot be treated only as an enforcement convenience.

Deepfake takedown timelines and over-censorship

The operational burden on platforms is substantial. The timeline for complying with takedown orders for illegal content has been reduced from 36 hours to three hours. Non-compliance threatens safe harbour protection.

This creates incentives for over-removal. India’s online content is vast, multilingual and context-heavy. Three hours may be enough for obvious illegality. It is not enough for careful review of satire, parody, political criticism, manipulated news clips or contested claims. When the cost of error is loss of safe harbour, platforms will prefer deletion to judgment.

The rules came into effect within 10 days of notification. Yet the infrastructure needed for compliance will take longer: detection systems, reviewer training, escalation protocols, appeal mechanisms, and multilingual moderation capacity. These cannot be built by notification alone.

Labelling and provenance also face technical limits. Generative AI is evolving faster than deepfake detection benchmarks. Watermarks can be removed, metadata can be stripped, and content can migrate across platforms. Permanent labelling is therefore not a settled solution. It is a compliance aspiration that will need constant testing.
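The fragility described above is easy to demonstrate: metadata that merely travels as bytes alongside content can be cut out without touching the content itself. A toy illustration, in which the file format and tag name are invented for the example:

```python
# Assumed layout: a provenance tag embedded as a trailing block in the file.
tagged = b"FAKEIMG...pixels..." + b"<sgi-label:model=example>"

# Stripping the label is a one-line operation for anyone re-sharing the file.
stripped = tagged.split(b"<sgi-label:")[0]

assert b"sgi-label" not in stripped          # provenance is gone
assert stripped == b"FAKEIMG...pixels..."    # visible content is unchanged
```

Real watermarking schemes embed signals in the content itself rather than beside it, but those too degrade under re-encoding, cropping and cross-platform migration, which is why the article treats permanent labelling as an aspiration rather than a settled solution.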

MeitY should supplement the rules with sandbox initiatives. AIKosh can be used to support innovation in deepfake detection, watermarking and provenance tools. Regulation will work better if it is paired with public technical infrastructure rather than left entirely to private enforcement.


India’s deepfake rules need institutional support

Deepfakes can harm individuals, markets and democracy. They can target vulnerable groups, enable financial fraud, invade privacy, damage reputations, disrupt elections and weaken trust in evidence itself. The state is right to act.

But AI is now part of the digital economy. India needs predictable regulation if it wants investment, innovation and responsible deployment. The 2026 IT Amendment Rules are a start, not a complete framework.

They should be supported by amendments to related laws, including the Digital Personal Data Protection Act, the Consumer Protection Act, 2019, intellectual property laws, and provisions of the Information Technology Act, 2000 beyond intermediary liability. Deepfake harms do not fit neatly within platform regulation alone.

The government should also invest in public awareness. Citizens must know how to identify deepfakes, report them, preserve evidence and seek remedies. A compliance-heavy regime will not be enough. India’s deepfake response must combine legal clarity, technical capacity, platform accountability and an informed public.

Ananaya Agrawal is a lawyer at Cyril Amarchand Mangaldas.
