India’s cyber laws face deepfake reckoning

A parliamentary panel seeks tougher cyber laws on deepfakes and OTT platforms, along with greater social media accountability.

Late last year, a video of a well-known actor went viral before it was revealed to be a deepfake. By then, the reputational damage was done. Such incidents highlight the inadequacy of India’s cyber laws, still rooted in a two-decade-old framework, in confronting the disruptive power of artificial intelligence.

In August 2025, the Parliamentary Standing Committee on Home Affairs tabled its 254th Report on Cyber Crime. It flagged the rising risks of financial fraud, deepfakes, and cyber trafficking, urging sweeping reforms to India’s digital legal framework. The committee pressed for stronger laws, tighter enforcement, and new regulatory mechanisms to tackle threats magnified by AI and cryptocurrency.

Closing legal gaps

The report identified urgent gaps in current legislation: deepfakes, misuse of AI, and the largely self-regulated space of OTT platforms. Its proposals range from watermarking digital content to the creation of post-release review panels for streaming shows, along with tougher penalties for social media platforms that fail to curb unlawful content.

When the Information Technology Act was passed in 2000, neither AI-generated pornography nor unfiltered streaming platforms were on the radar. Today, both shape popular culture and political debate. Deepfakes spread misinformation, threaten reputations, and fuel cybercrime. OTT platforms influence public discourse but also raise concerns about content accessible to minors. The committee’s call is a belated attempt to catch up with technological change.

The regulatory mix

The committee’s suggestions combine technical and regulatory measures. Watermarking could create a digital trail to distinguish genuine content from manipulated media. A review panel for OTT platforms would scrutinise shows deemed inappropriate for minors. Social media platforms could face periodic reviews of their “safe harbour” immunity and penalties for failing to act against unlawful content, including fines or suspension.
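
To make the watermarking idea concrete, here is a minimal sketch of the kind of digital trail the committee envisages, written in Python using only the standard library. Everything in it is a hypothetical stand-in rather than any mandated scheme: real provenance systems (C2PA-style manifests, for example) use public-key signatures and embed marks inside the media itself, whereas this sketch simply attaches a keyed tag derived from the content’s hash.

```python
# Minimal sketch: a publisher attaches a keyed provenance tag to content,
# and anyone holding the key can later check whether the content still
# matches what was originally published. Key and data are hypothetical.
import hashlib
import hmac

PUBLISHER_KEY = b"hypothetical-publisher-key"  # assumption: held by the publisher


def tag_content(media_bytes: bytes) -> str:
    """Derive a provenance tag: an HMAC over the SHA-256 hash of the media."""
    digest = hashlib.sha256(media_bytes).digest()
    return hmac.new(PUBLISHER_KEY, digest, hashlib.sha256).hexdigest()


def verify_content(media_bytes: bytes, tag: str) -> bool:
    """Recompute the tag; any alteration to the media invalidates it."""
    return hmac.compare_digest(tag_content(media_bytes), tag)


if __name__ == "__main__":
    original = b"frame data of a genuine video"
    tag = tag_content(original)
    print(verify_content(original, tag))                    # True
    print(verify_content(b"manipulated frame data", tag))   # False
```

Any edit to the media changes its hash, so the tag no longer verifies; that broken trail is what would distinguish genuine content from manipulated copies.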

For content creators, these proposals cut both ways. Watermarking may build audience trust, but review panels evoke memories of the censor board. What a filmmaker sees as satire could be labelled offensive by a committee. The thin line between protecting minors and policing ideas risks being crossed.

Industry concerns

Compliance will be costly. Watermarking systems, age verification, and stricter content moderation demand significant investment. Smaller OTT players may be priced out, reducing competition in a sector that has thrived on openness. Larger platforms will absorb the costs, potentially consolidating their dominance.

India’s regulatory framework is already crowded. The IT Rules of 2021 mandate grievance redressal and content classification. The Digital Personal Data Protection Act of 2023 imposes new obligations for data handling. Adding review panels risks overlapping jurisdictions and confusing creators, platforms, and enforcers alike. Vague rules could also open the door to selective enforcement, curbing dissent in the name of cultural protection.

Despite these concerns, the committee’s warnings cannot be dismissed. India is among the world’s most targeted countries for cybercrime. Deepfakes have been used in sextortion, impersonation, and financial fraud. False content has sparked communal tensions and endangered lives. Protecting minors from explicit material and citizens from deception is a legitimate goal for any law.

Towards a balanced framework

Countries such as the United States and members of the European Union are experimenting with forward-looking frameworks that India could adapt. The EU’s Artificial Intelligence Act, for instance, uses a risk-based approach, ranking technologies by potential harm and tailoring oversight accordingly.

The US, meanwhile, has leaned on voluntary codes of conduct developed jointly with tech companies, focusing on rapid incident reporting and transparency standards. Singapore has built regulatory sandboxes to test emerging technologies under supervision before wider rollout. These models demonstrate that flexible, layered approaches can build accountability without stifling innovation.

The challenge lies in striking the right balance. Watermarking must be feasible and interoperable across platforms. Review panels should work with clear criteria, transparency, and judicial oversight. Compliance costs must not stifle innovation. Equally important, the state must invest in digital literacy so that citizens can better spot manipulation.

The twin risks are evident: stronger protections against cyber harm may narrow space for artistic experimentation, while unchecked freedom exposes citizens to abuse. The task before lawmakers is to find the middle ground—guarding against deepfakes without weakening free expression. Ultimately, the legitimacy of new cyber laws will depend not on their severity, but on whether they protect society without silencing its storytellers.
