The honest version of India’s AI sovereignty plan acknowledges that we will continue to depend on American compute, foundation models, and cloud control planes for at least a decade.
FUTURECRAFT
On 27 February, the Pentagon did something it had previously reserved for Huawei. Defence Secretary Pete Hegseth designated Anthropic, the American maker of the Claude AI model, a national security supply chain risk. President Donald Trump separately ordered every federal agency to cease using the company’s technology. The trigger was Anthropic’s refusal to let its models be used for fully autonomous weapons or domestic mass surveillance, two carve-outs the Pentagon wanted removed.
For Indian readers, this looked like a Silicon Valley quarrel. It was something else. It was a public demonstration that the United States can switch off any AI company in the world, on a six-month timer, by executive order. Anthropic has since sued, won a preliminary injunction in San Francisco, and as of late April, Trump told CNBC that a deal is “possible.” The legal contest is real. The precedent is set.
Every Indian IT firm with a US defence contract just learned that their access to American AI is conditional on whatever the Pentagon decides counts as “lawful purposes” that quarter. This is the sovereignty mirage. India has built its national AI strategy on the assumption that the people who own the switchboard will keep the lights on. They will. Until they decide otherwise.
The Stack We Do Not Own
Walk through the IndiaAI Mission’s headline numbers, and the picture looks confident. A Rs 10,371.92 crore outlay over five years. Over 38,000 GPUs onboarded to a national compute portal at subsidised rates of around Rs 65 per hour. 190 AI projects approved. Sarvam AI tasked with building a 120-billion-parameter sovereign foundation model. A roadmap to 100,000 GPUs by the end of 2026.
Now look at what those numbers actually represent. MeitY told the Rajya Sabha in February that just Rs 21.79 crore was released in 2024-25 against a revised estimate of Rs 173 crore, and Rs 379.15 crore in 2025-26 against Rs 800 crore. Two years in, roughly Rs 400 crore of the headline Rs 10,372 crore has actually moved. Nothing has been released yet against the 2026-27 budget estimate of Rs 1,000 crore. The mission is, on paper, larger than what most countries have attempted. In practice, it is a rounding error against the spending of the firms whose hardware it relies on.
The five biggest American hyperscalers are projected to spend somewhere between $660 billion and $720 billion on capital expenditure in 2026, roughly three-quarters of which is AI infrastructure. Microsoft alone is tracking toward an annual run rate of around $150 billion. India’s entire five-year national AI mission, at current exchange rates, is about $1.2 billion. Microsoft spends that in roughly three days.
The compute itself is rented hardware. The accelerators powering the IndiaAI portal are mostly Nvidia chips, designed in Santa Clara, fabricated by TSMC in Taiwan, and sold under US export controls that Washington can tighten without consulting Delhi. Yotta’s chief executive, Sunil Gupta, has said publicly that India already imports between 20,000 and 25,000 high-end GPUs a year, around $2 billion worth, and that population-scale AI will require this figure to multiply many times over.
The foundation models that Indian startups build on top are largely American or Chinese. The cloud regions that host the workloads, even when physically located in Mumbai or Hyderabad, are operated by AWS, Azure, or Google Cloud under terms of service that include compliance with US sanctions law.
The Anthropic Lesson
The Pentagon’s blacklisting of Anthropic was not just a domestic American story. It was the first live test of what happens when US national security policy collides with a global software supply chain that runs through American firms.
Claude was the first frontier model approved to run on the US military’s classified networks under a $200 million contract signed in July 2025. The Council on Foreign Relations has reported that Claude was used to support US and Israeli operations against Iran beginning the same week as the designation. The Pentagon wanted Anthropic to lift its acceptable-use restrictions on autonomous weapons and mass surveillance. The company refused. Hegseth retaliated by issuing the supply chain risk designation, a label historically reserved for foreign adversaries, and Trump ordered all federal agencies to cease using Claude within six months. Defence contractors, including those touching Boeing and Lockheed work, were instructed to certify they were not using the product.
The federal courts will eventually decide whether the designation was lawful. That is not the question that matters in Delhi. The question that matters in Delhi is whether the designation happened at all, and whether it happened to the most safety-conscious frontier company in the United States rather than to a Chinese rival. The CFR analysis of the case put it bluntly: as of today, no Chinese AI firm has been designated a supply chain risk by the US government. Only Anthropic has.
The UAE Footnote
There is a second piece of evidence that should worry anyone making infrastructure decisions in Delhi this year. Before dawn on 1 March, Iranian Shahed loitering munitions struck two AWS data centres in the UAE and damaged a third in Bahrain, the first confirmed wartime kinetic attack on hyperscale cloud infrastructure run by an American company. AWS told customers in the affected regions to migrate workloads. EC2, S3 and Lambda services in the ME-CENTRAL-1 region went offline for over twenty-four hours. Iranian state media subsequently published a list of “enemy technology infrastructure” that included Microsoft, Google and Oracle facilities.
Indian policymakers have taken comfort from the fact that most domestic data is now hosted in Indian regions of these clouds. That comfort is partial. The control plane, the patch pipeline, the firmware updates, the identity systems, and the legal entity that ultimately owns the building are all American. A workload physically resident in Hyderabad is still answerable to a control plane that can be reconfigured, restricted, or in extremis cut off, by an order issued in Washington.
What Sovereignty Should Actually Mean
The honest version of India’s AI sovereignty plan acknowledges that we will continue to depend on American compute, foundation models, and cloud control planes for at least a decade. The November 2025 US-India interim trade understanding, which explicitly references increased trade in GPUs and data centre goods, is the diplomatic admission that this dependency exists and has to be managed. What the Anthropic case makes urgent is reducing the surface area of the dependency where reduction is actually feasible.
Strategic compute reserves matter. The IndiaAI compute pool should be sized and located on the assumption that US providers may be restricted from serving critical Indian workloads at some point in the next five years, whether through export controls, sanctions enforcement, or contract termination. Indian-owned operators such as Yotta, E2E Networks, and Jio Platforms already host a meaningful share of the national pool. The policy question is whether the pool is sized for a worst case rather than a best case.
Open-weight models matter for the same reason. The fastest-growing models on Hugging Face are now from Alibaba’s Qwen family, accounting for over 40 per cent of new derivatives. India does not have to like that fact to use it. A competent open-weight fallback running on Indian hardware is the difference between continuity and a six-month migration scramble.
Procurement contracts matter most of all. Every contract touching critical infrastructure that depends on a US-controlled AI vendor should specify a documented exit path and a tested fallback. Defence contractors learned this in March when they were given six months to unwind exposure to a vendor that was, until February, considered the gold standard. A clause is cheaper to negotiate than a migration is to execute.
The Bottom Line
Trust is sufficient for cooperation. It does not generate sovereignty. The infrastructure that runs the Indian economy now thinks in someone else’s language, on someone else’s chips, in someone else’s data centres, and is ultimately answerable to someone else’s law. The Anthropic case demonstrated that someone else can change the terms of access on six months’ notice. India can choose to call the current arrangement sovereign, or it can act on what the Pentagon has just shown about who actually holds the switchboard. It cannot do both.
(The author studies Computer Science and Artificial Intelligence at Rutgers University, New Jersey, USA. He is interested in emerging technologies and innovation, and can be reached on LinkedIn at @arssh-kumar14)