Generative AI puts India’s outdated copyright and liability laws to test

With lawsuits mounting, India must urgently reform its copyright and liability laws to regulate generative AI responsibly.

That generative AI is no longer a novelty is widely accepted. But what is yet to fully register is the extent to which it is reshaping how societies create, consume, and contest content. The implications are profound—not only for technology but for law, culture, and sovereignty.

India is now at a critical juncture. As AI systems scrape, synthesise, and generate content at industrial scale, legal uncertainties around authorship, ownership, and liability are piling up in courtrooms, boardrooms, and newsrooms alike. The foundational question—who owns what in the age of machine-generated content—remains unresolved.


Outdated laws, urgent stakes

A government-appointed panel is reviewing the Copyright Act of 1957—legislation drafted long before the internet, let alone AI. The urgency for reform is underscored by a flurry of legal actions from media houses such as NDTV, The Indian Express, and Hindustan Times, accusing OpenAI of using their copyrighted material to train its models without consent. India’s current legal framework was never designed to address such complexities.

Unlike the UK’s Copyright, Designs and Patents Act of 1988—which attributes authorship of computer-generated works to the entity that made the necessary arrangements—India’s copyright regime offers no clarity. This legislative vacuum leaves creators, developers, and platforms in a legal grey zone. Ambiguity, in this context, breeds exploitation and erodes public trust.

Global action in generative AI regulation

Globally, countries are adapting. The European Union’s Artificial Intelligence Act mandates transparency in training datasets and requires labelling of AI-generated content. In the US, lawsuits are testing the boundaries of ‘fair use’ in AI training. Canada and Australia are exploring liability regimes, while China has introduced rules requiring traceability and source disclosure for synthetic media.

India, by contrast, continues to rely on legacy copyright laws and a reactive takedown model under the Information Technology Rules. This approach is no longer tenable. The stakes go beyond regulatory compliance—they touch on digital sovereignty. OpenAI’s recent contention that Indian courts lack jurisdiction due to its US-based infrastructure signals a deeper problem. When foreign AI services operate freely in Indian markets, legal ambiguity becomes a strategic liability.

The cost of inaction

Regulatory silence carries a heavy price. Without clear norms on ownership, liability, and licensing, India risks becoming a net exporter of training data—while losing out on the economic and creative value of its own content. The power asymmetry favours large foreign firms, reducing Indian creators to mere inputs in a global algorithmic economy.

The cultural costs are no less severe. India’s vast linguistic and narrative diversity—from Bhojpuri songs to Malayalam cinema, Assamese folklore to Tamil publishing—is either underrepresented or misrepresented in global AI models. Without mandates for inclusive datasets and attribution norms, generative AI risks flattening India’s pluralism into a homogenised, Anglophone norm.

Ethical hazards also loom large. Caste and gender biases, misinformation, and synthetic defamation are no longer hypothetical—they are real and recurring. India’s diversity makes it especially vulnerable to algorithmic harm. The European Union’s risk-based oversight model offers one possible path. India needs a similar framework tailored to its complex social fabric.

Moreover, the indiscriminate scraping of public data by AI developers runs counter to the spirit of India’s Digital Personal Data Protection Act. Without audit protocols and consent mechanisms, the promise of user-centric governance rings hollow.

A four-pillar framework for AI governance

India needs a clear, structured response. The current piecemeal approach must give way to a comprehensive regime rooted in accountability and legal certainty. A four-pillar framework can guide this transformation:

Copyright and IP reform: Amend the Copyright Act to define authorship in AI contexts, clarify ‘fair use’, and mandate licensing arrangements between developers and content owners. Explicit provisions are needed for derivative works, especially in cases where human input is minimal.

AI liability legislation: Introduce a dedicated statute to allocate responsibility across the AI content chain—from developers to platforms to users. This law must include statutory presumptions, safe harbour rules, and due diligence obligations, especially in high-risk sectors like finance, media, and education.

Institutional infrastructure: Create a specialised AI regulatory body with legal and technical expertise. This entity should oversee transparency in training data and the watermarking of AI-generated content, and conduct public audits. A "National AI Dataset Commons" can help build indigenous capacity, reduce dependence on foreign models, and ensure ethical data sourcing.

Global norm-shaping: India must actively participate in shaping international AI governance. As a country that sits between Silicon Valley’s libertarianism and Beijing’s algorithmic control, India can offer a democratic model grounded in dignity, equity, and oversight. Platforms such as the G20, BRICS, and WIPO must be leveraged to promote this alternative vision.

Building legal and institutional capacity

Laws alone won’t suffice. The judiciary must be equipped to handle AI-related litigation with expertise and speed. AI-literate judicial benches and technical panels will be crucial. Without building institutional competence, even the best-drafted statutes risk poor implementation.

India must also confront a deeper challenge—its regulatory institutions were built to interpret law, not technology. Most are led by generalists, not technologists. Intellectual property has long been treated with policy indifference. Now, as data becomes the currency of power, that attitude must change.

A leadership moment

India has repeatedly demonstrated its ability to leapfrog with innovations like Aadhaar, UPI, and the Digital Personal Data Protection Act. But generative AI presents a more dynamic, unpredictable challenge. What is needed is not just intent, but agility. Legacy laws must evolve into modular frameworks that can keep pace with technological shifts.

The battle for AI leadership is not just about algorithms or infrastructure—it is about who controls narrative, knowledge, and norms. India must protect its creative economy, empower its innovators, and build a rulebook for the AI century that reflects its democratic ethos.

This is not a call for overregulation, but for smart, adaptive governance. The world is watching. Whether India can lead responsibly in the age of AI will depend not on its code, but on its laws.

Srinath Sridharan

Srinath Sridharan is a strategic counsel with 25 years of experience with leading corporates across diverse sectors, including automobiles, e-commerce, advertising, and financial services. He writes and ideates on the intersection of finance, digital technology, contextual finance, consumer behaviour, mobility, urban transformation, and ESG, and is actively engaged in growth policy and public policy conversations.