Equitable AI needs democratic governance, not declarations

Pledges at the New Delhi summit sound reassuring, but equitable AI will fail without democratic oversight, accountability, and contestable public systems.

When eighty-eight countries recently pledged at the global AI Impact Summit in New Delhi to pursue “equitable artificial intelligence,” the declaration sounded reassuring. But reassurance is not readiness. Behind the language of consensus sits a harder question: is AI, as it is being built and deployed, structurally inequitable and quietly destabilising democratic governance?

This is no longer theoretical. AI is not inherently unjust or authoritarian. But it is being developed in a world shaped by unequal power, capital, knowledge, and institutional capacity. It does not simply reflect those conditions. It amplifies them. Left unguided, it can become a powerful mechanism for concentrating wealth, authority, and decision-making beyond public reach.

For now, AI appears benign. It drafts text, assists clinicians, optimises logistics, and automates routine tasks. But transformative technologies rarely disclose their full consequences at adoption. The deeper disruptions surface later, once dependency sets in and institutional choices harden.

Artificial intelligence will reshape economies, labour markets, governance systems, trade, supply chains, and inter-country dependencies. The effects will be uneven. For technologically advanced economies, AI may accelerate productivity and wealth creation. For poorer countries and vulnerable populations, it may bring job displacement, policy dependency, and a loss of agency over decisions that shape daily life. Equity is not a moral add-on to AI governance. It is a political and economic requirement.

AI knowledge bias and digital exclusion

At the cultural level, AI raises a basic question: whose knowledge does it learn from?

Most large AI systems are trained predominantly on English-language, Western, urban, and digitally visible data. Vast domains of human experience — oral traditions, indigenous knowledge systems, local governance practices, and non-digitised social realities — remain under-represented.

Over time, this creates a hierarchy of relevance: what is digitally documented appears universal; what is not drops out of policy imagination. An AI ecosystem that marginalises large segments of humanity cannot credibly claim equity.

AI governance and state capacity gaps

AI and data-driven automation are already shifting the balance of power between states and markets. A small number of technology corporations and research ecosystems control large parts of the AI stack. Many governments, especially in the developing world, still lack the technical capacity to independently audit, interpret, or regulate the systems they increasingly rely on.

Algorithm-assisted tools are now used to identify welfare beneficiaries, flag tax risks, allocate policing resources, and prioritise public services. Evidence across jurisdictions shows that these systems can produce exclusion and error, often with limited avenues for explanation or appeal, because decision logic is opaque and responsibility is diffuse.

In India, efficiency claims around Direct Benefit Transfer systems must be read alongside documented correction cycles and grievance redressal requirements in schemes such as PM-KISAN, where beneficiary identification and verification have required repeated revisions. The point is larger than any one programme. When administrative decisions become more automated but accountability mechanisms do not keep pace, efficiency gains can come at the cost of legitimacy and public trust.

AI inequality and labour market disruption

The economic implications are equally stark. AI’s productivity promise masks a familiar pattern: technological revolutions reward capital faster than labour unless policy intervenes.

Unlike earlier automation waves, AI targets not only manual work but also clerical, logistical, diagnostic, and analytical roles. Large segments of service-sector employment in developing economies face substantial automation risk without matching reskilling capacity. Without deliberate intervention, AI will widen inequality within countries and deepen disparities between them, creating a world where intelligence is exported by a few and dependency imported by many.

Algorithmic harm and social vulnerability

The social effects follow the same lines. Algorithmic systems already shape access to credit, employment, healthcare, and public benefits, and determine who is surveilled. Empirical evidence shows that errors and biases in these systems disproportionately affect marginalised communities, whether defined by income, caste, race, gender, or geography.

Algorithmic harm is not random. It tracks existing vulnerability. Equity therefore requires more than neutral code. It requires active protection where risk is highest.

Democratic governance in the age of AI

The deepest challenge lies in governance itself. AI is no longer only a policy tool. It is becoming a governing instrument.

Predictive systems influence police deployment. Automated eligibility tools shape access to welfare. Algorithmic risk scores affect regulatory scrutiny and judicial outcomes. These systems change how state power is exercised, often without corresponding democratic debate.

Democracy is slow, deliberative, and argumentative. AI privileges speed, optimisation, and probabilistic outcomes. Introduced without safeguards, it can push democratic governance toward something efficient but unaccountable. The risk is not overt authoritarianism alone. It is silent depoliticisation.

Decisions once justified through public reasoning begin to be validated by algorithmic authority. Responsibility dissolves into technical complexity. Citizens are governed by systems they cannot meaningfully question, understand, or vote out.

In authoritarian settings, AI multiplies surveillance and anticipatory control. In democracies, it can normalise governance without consent, where participation is replaced by prediction and accountability by technical deference. Democratic AI governance is therefore not a normative preference. It is a constitutional necessity.

Ethical AI is not enough without accountability

Much of the global conversation still centres on ethical AI. Ethics matter, but they are insufficient. Ethics without democratic control can legitimise concentrated power.

AI governance must be democratic in both substance and process. If algorithms shape public outcomes, then the rules governing their use must be subject to public consent. Decisions affecting rights and dignity must be explainable, contestable, and reversible, with meaningful human oversight rather than symbolic supervision.

Declarations will not determine outcomes. Design choices will. So will regulatory courage and democratic vigilance. Equitable AI will not emerge spontaneously from markets or innovation hubs. It must be built intentionally, with the recognition that AI is not only a technological shift but a civilisational one.

AI will transform societies. What remains undecided is whether it will concentrate power and hollow out democracy, or be shaped to strengthen inclusion, accountability, and shared prosperity. The pledge by eighty-eight countries is a beginning. But equity without democracy is fragile, and efficiency without accountability is dangerous. In the age of intelligent machines, the most necessary idea may be the simplest: the governance of AI must remain human, participatory, and democratic.

Suresh Kumar is a former Additional Chief Secretary, Government of Punjab. He is a Cambridge Fellow and an LSE alumnus.
