Digital divide affecting public service delivery: India’s push for digital IDs and algorithms is creating a hidden layer of inequality. The Aadhaar system, which forms the basis for effective and transparent delivery of several government welfare schemes, has become “more a barrier than an enabler”, especially for women in the informal sector.
About 36 percent of 200 migrant women workers interviewed for a study said they faced biometric authentication failures during pregnancy-related hospital visits. What will be the human cost if the future of welfare schemes is direct benefit transfers enabled by authentication mechanisms, biometrics and artificial intelligence?
Algorithmic bias is not new. Several years ago, a celebrated book focused on the severe gender and racial biases embedded in Google’s autosuggestions. Most illustrations in the book remain valid today. For instance, a Google Images search for the term “beautiful” returns not paintings or landscapes but hundreds of women’s faces. The women are young, light-skinned and slim.
A second example is how ChatGPT generates different letters of recommendation for men and women students with identical scholarly achievements. Men are described as “ambitious”, “driven” and “leaders” while women are “compassionate”, “supportive” and “team players”. Similarly, Amazon scrapped an AI hiring tool that downgraded applicants with the word “women’s” (“women’s chess club”) in their resumes.
Even digital platforms such as Google Maps and Wikipedia reflect stark geographic inequalities, with significant under-representation of the Global South. Despite high population densities, regions such as South Asia and Africa remain digitally marginalised.
The examples cited above are technology-specific but hit home when digital governance is used as the primary mechanism for welfare delivery.
These issues point to a larger problem – how to measure inequality. Economists have long done so through income shares, wage ratios and asset distributions. These tools help understand how the top 1 percent in India accumulated huge wealth, how caste and gender impact hiring decisions and how inflation hits the poor and the rich differently.
Digital divide’s other side
But there is another kind of inequality – harder to measure, easier to miss. It is inequality that emerges when systems fail to recognise people or when digital divides shut out millions.
Take a worker who shows up in a government database but still does not get paid. This is not income or consumption inequality. It is the inequality of legibility, akin to a lack of participation parity (in American philosopher Nancy Fraser’s terms).
India’s flagship rural employment guarantee scheme, MGNREGA, is one of the world’s largest social safety nets, at least on paper. It guarantees 100 days of paid work to rural households.
However, even when workers finish their tasks and their attendance is marked, their wages do not arrive for weeks or months. The problem sometimes lies in a chain of digitised processes: Aadhaar-based attendance, app-based worksite monitoring, centralised fund releases. These digital components were devised to improve transparency. Yet, in practice, they often create new layers of opacity.
A small mismatch in Aadhaar data, a failed biometric scan, a delay in updating a job card or a payment stuck in a technical queue do not get captured in the Gini coefficient, a statistical measure of income or wealth inequality. Nor do they register in monthly employment statistics, yet they shape lives just as profoundly.
This is the kind of inequality that standard models or tools – those that focus almost exclusively on quantifiable gaps in income or consumption – are not built to capture: not a gap in earnings but a gap in recognition. It was reported in 2018 that “out of 12 cases of starvation deaths identified by us, seven are related to Aadhaar in one way or another”. This shows that biometric failures have had deadly consequences, be it for MGNREGA or the public distribution system (PDS).
Increasingly, social welfare schemes, work and access are mediated by digital systems. Whether PM Kisan payments, crop insurance enrolments or e-Shram registrations, the assumption is that digital systems will reduce inefficiencies and corruption. Rural workers – indigenous people, socially disadvantaged groups, women and other minorities – are expected to have uninterrupted internet, consistent biometric access, familiarity with apps and the ability to contest an error logged somewhere far away.
The algorithms embedded in these technologies either screen individuals for eligibility or create a layer of intermediaries who charge fees for making these technologies accessible.
Digital divide and exclusion
The digital divide in India is already high. A 2022 Oxfam report shows that only 31 percent of the rural population used the internet as compared to 67 percent of the urban population. These figures underscore how many are effectively cut off from digital services. Future research could look at how differential access to technology creates divides across different geographies.
In the race to reduce inefficiencies and optimise major government schemes, the move towards direct benefit transfers is making things worse for those the system does not recognise. Transferring money directly into a bank account works well for the digitally literate, who can claim what is legally theirs; for those left behind, it is far less feasible.
The move towards cash transfers also does not insulate people from the risks of price shocks. This has been demonstrated in the context of the PDS: with in-kind grain rations, households do not have to worry about rising food prices cutting into their meals. Cash transfers, by contrast, leave them vulnerable when prices climb.
Quantitative precision and inequality
Quantitative precision is not enough when studying inequality. Beyond recognising patterns in data narratives, greater emphasis is needed on how adjectives are gendered (men described as “assertive” or “decisive” versus women as “empathetic” or “cordial”), what constitutes “training data” and what the model assumes as default or normal.
Efforts must be made to interview those who are purposely left out of digitised platforms, denied benefits, misclassified or are invisible in the system. India’s recent push toward data-driven governance (such as Aadhaar-linked benefits, digital health and welfare apps, and biometrics in PDS and MGNREGA) impacts marginalised populations severely.
As for revisiting the quantitative metrics, the use of the Gini coefficient is standard, but the invisible or unaccounted – the homeless, transgender persons, migrant workers, domestic help, and others – must also be captured in data on inequality.
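To see why this matters, consider a minimal sketch (with hypothetical income figures, not data from the article) of how the Gini coefficient is computed. Because it summarises dispersion only among incomes that are recorded, anyone absent from the underlying data simply does not affect the measure:

```python
def gini(incomes):
    """Gini coefficient via mean absolute difference: 0 = perfect equality,
    values closer to 1 = greater inequality among the *recorded* incomes."""
    n = len(incomes)
    if n == 0:
        return 0.0
    mean = sum(incomes) / n
    if mean == 0:
        return 0.0
    # Sum of absolute differences over all ordered pairs of incomes.
    diff_sum = sum(abs(x - y) for x in incomes for y in incomes)
    return diff_sum / (2 * n * n * mean)

# Hypothetical figures for illustration only.
recorded = [100, 120, 130, 150, 500]   # households visible in the survey
excluded = [0, 0, 10]                  # people the system never registers

print(round(gini(recorded), 3))             # 0.332 -- inequality among the visible
print(round(gini(recorded + excluded), 3))  # 0.573 -- higher once the excluded count
```

The point of the sketch is that the headline statistic changes substantially once the unregistered are included; a Gini computed only on legible households systematically understates inequality.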
The challenge is not just about measuring inequality. It is that inequality is being learned by machines and normalised through code. The modes of measuring inequality must be reconsidered. When inequality is written in algorithms, it must be resisted powerfully, precisely and provocatively in all forms.
Prachi Bansal is Assistant Professor at the Jindal School of Government and Public Policy, O.P. Jindal Global University. Ritika Agrawal, an MBA student at the Indian School of Business, also contributed to this article. Originally published under Creative Commons by 360info