AI in elections: ECI may struggle to tackle synthetic campaigns

Without legal and technical reform, the ECI risks being outpaced by the widespread use of AI in elections.

Elections are the heartbeat of a democracy; they are also its moment of greatest vulnerability. When technology begins to shape that moment, the line between persuasion and manipulation can vanish in an instant. Artificial intelligence, once a futuristic abstraction, has entered Indian politics through the back door — in the form of cloned voices, fabricated speeches, and videos that appear more real than reality itself. What was once an ethical question has become an administrative crisis.

The Election Commission of India has sounded the alarm ahead of the upcoming Bihar assembly elections. It has directed political parties to label AI-generated material and to remove deepfakes within three hours of detection. These pronouncements, though well-meaning, betray a deeper anxiety: that India’s electoral machinery, designed for posters, rallies, and manifestos, is now confronting algorithms, bots, and generative models. The Commission can warn, but it cannot yet police; it can advise, but it cannot yet prevent. The challenge before it is not one of intent, but of capacity to safeguard democracy in an age when truth itself can be manufactured.

The use of AI in electioneering

The 2024 general election marked a turning point. For the first time, Indian voters encountered large-scale use of AI in elections — deepfakes, voice clones, and micro-targeted campaign material. Several studies documented the explosion of synthetic content that blurred the line between authentic and fabricated political messaging. Investigations revealed how voice-cloning tools were used to reproduce politicians’ voices in regional languages, allowing personalised phone calls to reach millions of voters.

AI’s reach went far beyond campaign strategy. It invaded the information ecosystem itself — spreading manipulated images, deep-faked speeches, and misleading videos faster than fact-checkers could respond. The influence of generative AI in elections extended beyond persuasion to distortion, eroding public trust in the democratic process.

To its credit, the ECI reacted swiftly. It issued advisories urging political parties to label AI-generated content and remove deepfakes within three hours of detection. It expanded its social-media monitoring units and sought cooperation from major platforms. Yet these measures, while well-intentioned, remain woefully inadequate for the scale and sophistication of the threat.

Why institutional capacity is under strain

At the heart of the problem lies the mismatch between 20th-century regulatory mechanisms and 21st-century technologies. The ECI’s powers, defined by the Representation of the People Act and a set of broad constitutional mandates, were designed for an era of print pamphlets and televised rallies — not algorithmic persuasion or AI-generated avatars.

The Commission’s current guidelines, though commendable, operate on a reactive model. They require platforms and parties to remove objectionable or fake content once notified. But identifying synthetic content, tracing its origin, and acting within a three-hour window is a near-impossible task: deepfakes can be generated in seconds, disseminated instantly, and replicated endlessly. Even if the most extreme fears about the use of AI in elections are exaggerated, the risks are undeniably real—and the existing safeguards insufficient.

Equally troubling is the absence of a coherent legal framework. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 provide limited guidance on online misinformation, but they do not address AI-enabled campaigning. The ECI has no statutory authority to audit campaign software or demand disclosures on the use of AI in elections. It cannot compel social-media companies to share data on micro-targeting algorithms or ad placements. The consequence is an uneven playing field, where those with access to advanced AI systems can manipulate the electorate under the radar.

The weakness is not only legal but also technical. Detecting synthetic content at scale requires advanced data infrastructure, machine-learning detection tools, and coordination across multiple digital platforms, capacity the ECI has yet to build. Despite the Commission’s labelling advisories, most AI-generated campaign material on Meta’s platforms reportedly carried no disclaimer. Monitoring remains patchy, reactive, and heavily dependent on public complaints rather than real-time detection.

Global warning signs

Other democracies offer sobering lessons. The gravest danger lies not in the technology itself but in institutional failure to keep pace. Across Europe, concern over the use of AI in elections stems less from its sophistication than from the inability of regulators to detect and deter its misuse. The OECD’s 2025 review reached a similar conclusion: democratic vulnerability increases when citizens face a flood of manipulated content without institutional safeguards.

India’s context is even more complex. With over 800 million internet users, 22 official languages, and a rapidly growing vernacular social-media space, the scale of content generation and dissemination is unprecedented. The ECI is not just combating misinformation; it is attempting to monitor an entire digital universe that regenerates every minute.

The ECI’s multi-front challenge

The Election Commission today operates across three overlapping domains: campaign law, digital regulation, and technology governance. It has decades of experience in enforcing electoral codes, policing campaign expenditure, and curbing hate speech. It has made early strides in digital oversight through its partnerships with Google and its “Myth vs Reality” initiatives against misinformation. But the frontier of AI governance is different. It demands forensic expertise, algorithmic audits, and cross-platform cooperation at a level that few regulators possess.

The imbalance is glaring. When it emerged that Meta had approved AI-manipulated political ads containing hate speech in multiple Indian languages, the ECI could do little more than issue a notice. It cannot directly suspend such content or penalise platforms without prolonged coordination. Meanwhile, voice-cloned bots and synthetic videos continue to circulate freely. The asymmetry is complete: campaigns wield advanced AI to shape perception, while the regulator struggles to verify authenticity.

Towards a stronger electoral safeguard

To restore balance, India needs structural reform. The ECI must be armed with explicit statutory authority to oversee the use of AI in elections. Election laws should mandate disclosure of AI tool usage, campaign automation, and targeted-ad data. Without legal clarity, the Commission will remain dependent on voluntary compliance.

Equally essential is investment in detection and technical capacity. A permanent digital-forensics unit—staffed with AI specialists and data scientists—should be established within the ECI. This team must collaborate with social-media companies, cybersecurity agencies, and independent watchdogs to identify and neutralise synthetic content in real time. Transparency from technology platforms is vital; India’s proposal to label deepfakes under new IT rules is a step forward, but compliance must be mandatory and verifiable.

Finally, voter awareness is the most effective long-term defence. Citizens must learn to question the authenticity of what they see and hear. Election campaigns should not merely urge people to vote—they must also teach them to verify. The credibility of democracy depends not only on casting the vote but also on ensuring that the choice is informed, not manipulated.

The Election Commission of India stands at a crossroads. It has shown resolve in tackling conventional challenges—money power, muscle power, and misinformation. But the arrival of artificial intelligence represents a new frontier altogether. Deepfakes, cloned voices, and algorithmic persuasion can undermine democracy from within, eroding trust before a single vote is cast.

The ECI’s response so far—a patchwork of guidelines, advisories, and platform cooperation—is a beginning, not a solution. The task now is to fortify the institution with law, technology, and legitimacy. If the Commission cannot evolve as swiftly as the technologies arrayed against it, India risks witnessing elections where the loudest voice is not human, but machine.