Democracies risk sliding into surveillance states

Democracies worldwide are embracing AI and data tools that threaten to erode privacy and liberty in the name of security.

George Orwell’s 1984 once read like dystopian fiction. Today, with artificial intelligence, predictive policing, and ubiquitous social media platforms, democracies risk becoming surveillance states not through overt authoritarian decrees but through invisible digital architectures. What was once the work of secret police and wiretaps is now executed by algorithms that monitor behaviour, track movements, and predict intent. The paradox is stark: societies that celebrate liberty and privacy are constructing vast systems of control that echo Orwell’s nightmare, only with machine precision.

The controversy surrounding Palantir in the United States is emblematic of a larger global trend. Reports alleged that the company’s Foundry platform was being used by federal agencies to consolidate data on citizens, charges the firm denied. Whatever the truth of those allegations, the debate highlighted the uneasy fusion of private technology with state power.


This pattern is not unique to America. In the United Kingdom, police have deployed facial recognition at protests, prompting civil liberties concerns. In France, counter-terrorism laws have enabled extensive digital monitoring, while in Australia, metadata retention laws compel telecom providers to store citizens’ communications records for law enforcement. In each case, liberal democracies are adopting tools once associated with authoritarian states, justified in the name of efficiency and security.

AI-driven policing: Promise and peril

India provides a cautionary example. The Delhi government has installed nearly 280,000 cameras, with thousands more planned. States like Andhra Pradesh are preparing to deploy artificial intelligence in every police station. Supporters hail these steps as a triumph of smart governance and crime prevention. Yet studies, including those by the Internet Freedom Foundation, show that such tools are riddled with bias and rarely audited for fairness.

Facial recognition technologies in particular show troubling disparities: in benchmark studies, error rates were below one per cent for light-skinned men but exceeded 30 per cent for dark-skinned women. Predictive policing algorithms, trained on flawed historical data, risk reinforcing systemic biases and unfairly targeting marginalised communities. Social media monitoring, meanwhile, allows authorities to label citizens as potential offenders without due process.

The result is a creeping normalisation of suspicion — an environment where liberty is subordinated to efficiency, and where the citizen is no longer presumed innocent but algorithmically profiled as a risk.

Constitutional dilemmas and legal vacuums

The Supreme Court of India, in the landmark Puttaswamy judgment (2017), affirmed privacy as intrinsic to the right to life and liberty. It laid down a three-part test for state surveillance: legality, legitimate aim, and proportionality. Yet the rapid deployment of AI-driven policing routinely bypasses these safeguards.

Automated facial recognition systems operate without statutory authority. Social media monitoring has no framework for judicial oversight. Predictive policing tools function in secrecy, shielded from independent audits. India’s Digital Personal Data Protection Act (2023) is still not operational, leaving citizens exposed to unchecked data harvesting.

Other democracies grapple with similar challenges. In the United States, cities such as San Francisco and Boston have banned police use of facial recognition over concerns of racial bias. The European Union’s AI Act (2024) goes further, treating real-time facial recognition in public spaces as an unacceptable risk and prohibiting it except in narrowly defined law-enforcement cases. India, by contrast, continues to expand surveillance without comparable regulatory guardrails.

Social media: The invisible panopticon

While state surveillance often attracts attention, an equally insidious form is built into social media platforms. Companies like Meta, X, and TikTok harvest vast troves of behavioural data, ostensibly for targeted advertising. Yet the same data is increasingly shared with governments for law enforcement and national security.

Algorithms that curate feeds also monitor dissent, amplify polarisation, and enable the mapping of social networks at unprecedented scale. The Cambridge Analytica scandal showed how microtargeted manipulation can distort elections. In Brazil, WhatsApp forwards have been used to track and influence voter behaviour. In the Philippines, state agencies have worked with platforms to monitor activists and journalists. The convergence of corporate data collection with state surveillance ambitions creates a digital panopticon, where every click, like, and post may become evidence in a future investigation.

What was once a tool for free expression is now a reservoir of surveillance, feeding both corporate profits and government control.

Security without liberty is hollow

The defence of these technologies rests on a familiar refrain: they enhance safety. Proponents argue that predictive policing deters crime, that facial recognition captures fugitives, and that social media monitoring prevents terrorism. But such claims ignore the constitutional truth: rights exist precisely to limit the excesses of state power, even in the pursuit of security.

Mass surveillance chills dissent, undermines free speech, and corrodes the trust between citizen and state. A democracy that accepts permanent surveillance in exchange for ephemeral safety risks hollowing out its core values. Security without liberty is indistinguishable from authoritarian rule.

The path forward requires urgent reform. India—and other democracies—must:

Legislate clear limits: Surveillance must be backed by explicit laws subject to parliamentary debate, not executive orders or agency contracts.

Ensure judicial oversight: Any intrusive surveillance should require prior approval from independent authorities, as mandated for telephone tapping in PUCL v. Union of India (1997).

Mandate algorithmic transparency: Governments must disclose the logic, training data, and error rates of AI systems used in policing.

Enforce accountability: Independent audits, data minimisation, and time-bound destruction of non-relevant data should be mandatory.

Adopt best practices: India should learn from the EU AI Act’s precautionary approach, enacting moratoriums on high-risk technologies until safeguards are in place.

Without such reforms, democracies will drift into becoming surveillance states by stealth, not decree. Orwell’s telescreens have been replaced by cameras, sensors, and algorithms. The danger is no less real. The challenge for our times is to preserve liberty in the face of efficiency, and to ensure that technology serves democracy rather than subverts it.