
AI job loss hype: For over a decade, influential voices have warned of a coming wave of job destruction driven by artificial intelligence. A widely cited 2013 paper by Frey and Osborne claimed that 47% of US jobs were at high risk of being automated "in a decade or two". Applied to a projected US workforce of 180 million in 2033, that statistic suggests nearly 85 million jobs could disappear. More recently, Dario Amodei, CEO of Anthropic, forecast that AI could wipe out half of all entry-level white-collar jobs and push unemployment to 10–20% in just five years.
So far, these dire predictions have not materialised. There have been disruptions, but wholesale job losses attributable to AI remain elusive. Still, these projections have influenced corporate strategies and government policy conversations. Will the deluge come later? This article argues that the dystopian future projected by AI job-loss doomsayers is unlikely in the next decade. The reasons: artificial intelligence systems remain relatively primitive, are costly to maintain, and are hitting performance ceilings. The much-feared job displacement may eventually arrive, but not before the 2030s.
Why tech adoption rarely moves fast
The primary reason jobs haven’t been lost en masse is that organisations rarely change their processes overnight. Shifts in business operations typically occur when there is a clear and measurable impact on key performance indicators—cost, revenue, speed, quality, compliance, or customer satisfaction. Even when these conditions are met, adoption of new technologies is gradual.
Consider outsourcing. Between 1979 and 2016, the US lost roughly 30 million manufacturing and service jobs to lower-wage countries like China and India—about 20% of its workforce over 37 years. This pace reflects the natural inertia of large economies. If outsourcing of this scale took decades, it is implausible to expect AI job losses of similar magnitude to occur in five or even ten years.
Several factors explain why organisations take so long to absorb new technologies. Most innovations require significant upfront investment in infrastructure, often forcing companies to delay adoption until they have recouped their investment in existing systems. The absence of a pressing need to overhaul functioning operations also slows things down: if the system isn't broken, firms are reluctant to fix it.
In many cases, upgrading to new technology requires reimagining entire business models, which entrenched incumbents resist. Even when the technology is compelling, risk aversion and fear of failure act as deterrents. Retraining workers, adjusting to new government regulations, and bringing consumers up to speed all add to the delay. Internally, many firms are hampered by disjointed data systems that hinder integration and harmonisation—meaning even promising AI applications often run into organisational bottlenecks.
High TCO and the 92% accuracy wall
AI systems, like all software, need constant maintenance. But unlike conventional software, they demand a more complex form of upkeep. While DevOps for traditional software includes regular code refactoring, debugging, and interface improvements, AI systems also require ongoing model retraining, data re-labelling, and updates to stay relevant. This involves improving data quality, fine-tuning models to specific domains, and integrating external data sources. These activities push annual maintenance costs for AI systems to as much as 60% of the initial build cost—far higher than the 10–20% seen for traditional software.
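To see what that maintenance gap means in practice, here is a minimal back-of-the-envelope sketch in Python. The $1 million build cost is a hypothetical figure chosen for illustration; only the 60% and 10–20% maintenance rates come from the discussion above.

```python
# Five-year total cost of ownership (TCO) under the maintenance rates
# discussed above. The $1M build cost is a hypothetical illustration.

def five_year_tco(build_cost: float, annual_maintenance_rate: float, years: int = 5) -> float:
    """Initial build cost plus flat annual maintenance."""
    return build_cost + build_cost * annual_maintenance_rate * years

BUILD_COST = 1_000_000  # hypothetical, in dollars

ai_tco = five_year_tco(BUILD_COST, 0.60)           # AI system: ~60%/year upkeep
traditional_tco = five_year_tco(BUILD_COST, 0.15)  # traditional software: midpoint of 10-20%

print(f"AI system, 5-year TCO:   ${ai_tco:,.0f}")           # $4,000,000
print(f"Traditional, 5-year TCO: ${traditional_tco:,.0f}")  # $1,750,000
print(f"Cost ratio:              {ai_tco / traditional_tco:.1f}x")  # ~2.3x
```

On these assumptions, an AI system costs well over twice as much to own over five years as a conventional system with the same build cost.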
Moreover, contemporary AI systems, particularly deep learning networks and large language models, appear to have plateaued at around 92% accuracy on broad datasets. While many vendors tout 99%+ performance, those claims usually apply to narrowly defined datasets, not real-world, edge-case-laden environments. AI models also fail to indicate where they are likely to be wrong, forcing users to manually verify every output. Confidence scores offer little relief: as a widely cited 2015 study showed, neural networks can confidently misclassify meaningless images as familiar objects. Combined with the high costs of retraining and maintenance, this accuracy ceiling significantly impedes the deployment of AI at scale.
Where 92% just doesn’t cut it
Some use cases demand near-perfect performance, and AI’s current limitations rule it out entirely. Intelligent Document Processing (IDP) is one such domain. Consider the task of digitising invoices. Prior to modern AI systems, companies used basic OCR software to convert scanned invoices to structured data, followed by manual verification and correction. This process involved around 1,500 hours of labour and cost $24,000 for every 100,000 invoices processed.
With current AI systems pushing accuracy to 92%, the number of corrections required may drop, but the need to identify errors remains: because the system cannot say which 8% of its outputs are wrong, every invoice must still be checked. The total time and cost savings are therefore marginal, and the additional cost of using AI, approximately $10,000 per 100,000 invoices, often results in a net loss. This pattern repeats across other document-processing use cases. AI systems may reduce the volume of corrections, but their inability to highlight errors renders the exercise inefficient.
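A rough model, using only the figures above plus one hypothetical assumption (that baseline labour splits evenly between checking records and fixing bad ones), shows why the savings evaporate:

```python
# Back-of-the-envelope invoice-digitisation economics, per 100,000 invoices.
# Baseline figures come from the article; the 50/50 split of labour between
# verification and correction is a hypothetical assumption.

BASELINE_HOURS = 1_500
HOURLY_RATE = 24_000 / BASELINE_HOURS   # $16/hour, implied by the figures above
VERIFY_SHARE, CORRECT_SHARE = 0.5, 0.5  # hypothetical split

AI_ACCURACY = 0.92
AI_COST = 10_000  # additional AI processing cost cited above

# The AI cannot flag its own errors, so every invoice must still be
# verified; only the correction workload shrinks with the error rate.
baseline_cost = BASELINE_HOURS * HOURLY_RATE
ai_hours = BASELINE_HOURS * (VERIFY_SHARE + CORRECT_SHARE * (1 - AI_ACCURACY))
ai_total = ai_hours * HOURLY_RATE + AI_COST

print(f"Manual baseline: ${baseline_cost:,.0f}")  # $24,000
print(f"With AI:         ${ai_total:,.0f}")       # $22,960
```

Under this split, AI saves barely $1,000 per 100,000 invoices; if verification takes any more than half the baseline effort, the AI option is a net loss.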
Similarly, in knowledge management systems—where precision is critical—AI systems falter. Most enterprise documents contain visual elements like tables, graphs, and checkboxes, which fall outside the training scope of most LLMs. As a result, even seemingly simple queries require full manual validation of source data, negating the benefits of automation.
Cascading failure in AI agents
The problem becomes more severe when AI agents are chained together to complete complex tasks. With each agent operating at 92% accuracy, a series of ten agents yields an overall success rate of just 43% (0.92^10 ≈ 0.43), assuming errors compound independently. A Carnegie Mellon simulation of a virtual software company staffed entirely by AI agents confirmed this.
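The compounding is easy to verify; a short loop (assuming independent, equally accurate steps) shows how quickly chained accuracy decays:

```python
# Overall success rate of a chain of n agents, each with the same
# per-step accuracy, assuming failures are independent.
for accuracy in (0.92, 0.99):
    for steps in (5, 10, 20):
        print(f"accuracy={accuracy:.2f}, steps={steps:2d}: {accuracy ** steps:.1%}")
# 0.92 over 10 steps -> ~43%; over 20 steps -> ~19%.
# Even 0.99 per step yields only ~82% over 20 steps.
```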
The best-performing system, Anthropic’s Claude 3.5 Sonnet, completed just 24% of its assigned tasks. Google’s Gemini model achieved 11.4%, while Amazon’s Nova Pro clocked in at a dismal 1.7%. These agents not only failed tasks but occasionally hallucinated, invented users, and fabricated data. The experiment served as a cautionary tale: AI agents are not yet ready for complex workflows.
When human consequences raise the bar
The 92% ceiling also renders AI inadequate in domains where outcomes directly affect human lives. In healthcare, AI tools must operate at near-perfect accuracy to be trusted by patients and insurers alike. Product safety requires similarly high standards, as do systems deployed in the criminal justice system—where a flawed risk assessment can influence sentencing. In recruitment, even small inaccuracies can lead to reputational harm or litigation.
The risks multiply in defence applications, where the use of lethal autonomous weapons without human oversight can have catastrophic consequences. In all such areas, AI must match or exceed human performance not just in speed but in judgement—a tall order at present.
Where 92% is good enough
Despite these limitations, there are areas where 92% accuracy suffices—particularly in augmenting human productivity. Many white-collar professionals now rely on generative AI to draft emails, write appraisals, summarise meeting notes, and translate documents. In software development, AI tools assist with low-level coding and bug fixes. In marketing, they generate blog posts and tailor messages to specific audiences. In research, they help with data gathering and preliminary analysis.
Customer service offers a particularly useful application. Some firms have experimented with replacing all human agents with AI, only to see the experiment fail. But hybrid models, where AI suggests responses and humans approve them, have led to significant efficiency gains. In such setups, companies have reduced their reliance on customer service staff by as much as 60% without sacrificing quality. These gains, while modest in scope, are economically meaningful.
Across the US, roughly 7.6 million workers, or 4.7% of the workforce, are in roles where such hybrid systems are viable. Even if half of these jobs were automated, the resulting loss would amount to roughly 2.4% of the total workforce. Spread over five years, that is under 0.5% annually, nowhere near the 10–20% unemployment spike Amodei warned of.
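The arithmetic behind those figures, using only the numbers above:

```python
# Bounding job losses from hybrid customer-service automation,
# using the workforce figures cited above.
hybrid_viable = 7_600_000          # workers in hybrid-viable roles
workforce = hybrid_viable / 0.047  # ~162 million total workforce

displaced = hybrid_viable * 0.5    # assume half of these roles are automated
share = displaced / workforce

print(f"Share of workforce displaced: {share:.1%}")      # ~2.4%
print(f"Annualised over five years:   {share / 5:.2%}")  # ~0.47%/year
```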
The quiet success of decision support systems
AI has found more consistent success in decision support systems (DSS), where its limitations are less damaging. DSS tools don’t replace humans but help them make better decisions by analysing large volumes of data. One practical example is travel reimbursement processing. Many organisations manually audit only a fraction of claims due to cost constraints. AI systems can flag potential violations across the entire dataset, enabling compliance teams to focus their efforts and avoid regulatory pitfalls.
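A minimal sketch of the pattern in Python: the hand-written rules and dollar thresholds below are hypothetical stand-ins for a trained model, but the workflow is the point. Software scores every claim, and auditors review only the flagged fraction instead of sampling at random.

```python
# Decision-support sketch: flag travel claims for human review.
# Rules and thresholds are hypothetical; a trained anomaly model
# would replace them, but the human-in-the-loop workflow is the same.
from dataclasses import dataclass

@dataclass
class Claim:
    employee: str
    category: str    # e.g. "hotel", "meals", "airfare"
    amount: float
    has_receipt: bool

POLICY_LIMITS = {"hotel": 250.0, "meals": 75.0, "airfare": 1200.0}  # hypothetical

def review_reasons(claim: Claim) -> list[str]:
    """Return reasons this claim needs human review (empty list if none)."""
    reasons = []
    limit = POLICY_LIMITS.get(claim.category)
    if limit is not None and claim.amount > limit:
        reasons.append(f"exceeds {claim.category} limit of ${limit:,.0f}")
    if claim.amount >= 25.0 and not claim.has_receipt:
        reasons.append("missing receipt")
    return reasons

claims = [
    Claim("a.kumar", "hotel", 310.0, True),
    Claim("b.lee", "meals", 42.0, True),
    Claim("c.diaz", "airfare", 980.0, False),
]
for c in claims:
    if reasons := review_reasons(c):
        print(f"REVIEW {c.employee}: {'; '.join(reasons)}")
```

Note that flagged claims are routed to humans rather than auto-rejected, which is why imperfect accuracy is tolerable here: a false flag costs a few minutes of review, not a wrong decision.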
The same logic applies to thousands of back-office processes involving compliance, optimisation, and cost control. Here, even with imperfect accuracy, AI systems enhance productivity without eliminating jobs. In fact, by improving KPIs like speed and reliability, they often lead to new hires in adjacent functions.
AI job losses: A more balanced future
In the long run, everything could change. If current trends in R&D investment continue, it is likely that AI systems will break the 92% ceiling within 10–15 years. Once they do, the pace of job displacement could accelerate. By 2050, the global workforce is expected to expand by 700 million, driven by population growth and economic development.
Even accounting for AI job losses, with an estimated 395 million workers displaced by artificial intelligence and automation, there will be an offsetting gain of over 1.3 billion jobs across sectors like healthcare, climate adaptation, infrastructure, and emerging technologies. This suggests that AI will not so much displace labour as redirect it.
The future may not be one of mass unemployment but of higher output and more complex work. If human society embraces climate action, infrastructure renewal, and new industries, the demand for labour could exceed supply. Contrary to popular fears, we may end up working more, not less.
The notion that AI will render human labour obsolete in the next few years is both exaggerated and misplaced. The technology is advancing, but its economic and technical limitations are real. The systems in use today are too expensive, too error-prone, and too limited in scope to replace a significant share of human jobs. Where AI augments rather than replaces human input, it is delivering tangible value. Where it attempts to displace humans entirely, it is largely failing. The path to a truly AI-powered economy will be gradual—not a cliff, but a slope. For now, fear of widespread job loss is not just premature—it is unsubstantiated.
Dr Alok Aggarwal is Chairman and Chief Data Scientist, Scry Analytics, California, USA.