• By Dr Ajay Kumar
  • Wed, 11 Feb 2026 11:13 AM (IST)
  • Source: JNM

This article tries to peer 10 to 20 years into the future. For most of human history, civilisation has been shaped by scarcity: scarcity of food, land, labour, capital, and later, knowledge. Our economic theories and political institutions evolved to manage shortage. Artificial Intelligence (AI) threatens to overturn this foundational condition. If the twentieth century was about mastering scarcity, the coming decades may be defined by something far more destabilising: surplus. The central challenge may not be producing enough, but finding meaning, legitimacy, and stability in a world where too much comes too easily. Erik Brynjolfsson has observed that technology is racing ahead while our institutions lag far behind. What kind of transformation would policy-making require to cope with the upheaval AI is likely to cause?

Classical economics rests on four factors of production: labour, capital, land, and technology. Historically, technology complemented labour. Each major technological shift raised productivity, increased wages, expanded demand, and created new forms of employment. Disruption was painful, but temporary. AI breaks this pattern.


For the first time, technology substitutes for labour directly: not only manual labour, but intellectual, professional, and creative work as well. With AI, productivity continues to rise, but job creation does not follow. Without jobs, productivity gains will not translate into shared prosperity; instead, AI will accelerate inequality.

AI is capital- and data-intensive in a way no previous technology was. Once a foundational model is trained, marginal costs approach zero. There is little economic logic for wealth to flow back to labour; capital captures nearly all the upside. This creates natural monopolies, and it explains the hyper-concentration whose early signs are already visible: a handful of firms, largely in a few countries, dominate foundational models.

AI also collapses time. Decisions that once took days or weeks can occur in milliseconds. Markets react faster than institutions can respond. This temporal change has huge implications for regulation, democracy, and even warfare.

Scenario 1: Many experts foresee a scenario that could be deeply disruptive, even outright destabilising, for human society. This does not assume runaway machine control, rogue AI development, or catastrophic misuse of AI, which is a separate doomsday scenario. On the contrary, it rests on the benign assumptions of a strong regulatory framework and of AI systems aligned with societal development goals.


In this world, productivity and efficiency surge. Goods become cheaper, services improve, and material living standards rise. Countries that successfully deploy AI experience rapid economic growth. Healthcare improves dramatically, extending life expectancy and reducing disease burdens. Fewer people die, and fewer babies are born. As longevity increases and work decreases, the demographic dividend loses relevance. Countries or societies dependent on remittances face decline. The very idea of employment as the basis of dignity and earnings begins to erode.

Another destabilising effect would be the concentration of wealth. Since AI is likely to drive hyper-concentration, directed policy measures will become necessary to redistribute abundance. Universal Basic Income (UBI) becomes unavoidable. Without active public policy intervention in favour of redistribution, mass economic exclusion may result.

A subtler risk lies in cognitive atrophy. As machines take over judgment and decision-making, humans risk losing the ability to exercise them. Historian David Rochlin once described how pilots over-reliant on automation failed to fly when the systems broke down. As has been observed, the real threat is not that machines will behave like humans, but that humans will behave like machines.

The information ecosystem deteriorates further. AI creates a surplus of content but a deficit of truth. Deepfakes and synthetic narratives overwhelm human cognition. Consent, legitimacy, and trust erode. Democracies struggle as citizens lose the ability to distinguish reality from fabrication.

Power, meanwhile, concentrates further. AI firms grow so large that they rival or exceed states in economic influence. New oligarchies emerge. Geopolitically, AI deepens inequality, and countries without access to advanced AI fall irreversibly behind. The AI divide dwarfs the digital divide. Control increasingly rests not with governments, but with AI systems embedded within them. Consequently, sovereignty becomes mediated by algorithms trained elsewhere. This scenario is orderly on the surface, but destabilising at the core.

Scenario 2: There is, however, another scenario: a more positive outcome that could emerge with the right policy frameworks in place. In this future, policy recognises AI not merely as an economic tool but as a civilisational force. AI is used to amplify human capability rather than replace human purpose. AI would enable the solving of hard problems that require superhuman effort: opening new frontiers in outer space, under the sea, or beneath the earth's surface. Multiplanetary human existence becomes a reality. We understand and manage nature and climate better. And so on.

Further, certain human qualities that even Artificial General Intelligence (AGI) cannot replicate, such as empathy, care, love, moral responsibility, and the instinct to preserve life, will become central. AI may cure patients, but the love and care of those near to us would remain indispensable, and even more valuable. AI would also be used to actively defend truth, through authentication and rapid feedback mechanisms. For this, governance must become global, anticipatory, and technologically informed. AI safety is not a competitive advantage but a collective responsibility. The idea of Vasudhaiva Kutumbakam (the world as one family) moves from philosophy to survival strategy. India, with its civilisational depth and normative credibility, could help shape global AI norms and the new governance structures.

Whether AGI arrives in 2026 or later matters less than the direction: cognitive abundance is accelerating and will usher in an age of surplus that challenges humanity more deeply than any previous technological revolution. Without wisdom, surplus leads not to freedom but to fragility. The decisive question is not whether machines will become intelligent, but whether humans will remain wise; in a world where everything becomes too much and too easy, wisdom may be the rarest and most decisive resource of all.

(The author is the Chairman of UPSC and former Defence Secretary of India. The views expressed are his own.)

