Schmidt, super‑intelligence, and sovereignty
Last week former Google chief executive Eric Schmidt spent seventy‑five minutes on the Moonshots podcast with Peter Diamandis and Dave Blundin. The headline moment came when he declared that digital super‑intelligence is “within ten years.” The other take‑away: those future models, he said, will live in giga‑scale data centres guarded like nuclear stockpiles.
The remark has landed because policy circles from Washington to Brussels are already debating “sovereign AI”. If national power is set to hinge on who controls the biggest models, Schmidt’s comments sound like a strategic field manual. They also, as this article shows, rest on a shaky reading of how machine‑learning systems actually spread.
Schmidt’s vision in a nutshell
“If the structure of the world in five to ten years is ten models – five in the United States, three in China, two elsewhere – those data centres will be all nationalised in some way. In China they will be owned by the government. The stakes are too high.” (31:57–32:28)
Schmidt’s argument has four main pillars:
- Ten‑year countdown: “When do you see what you define as digital super‑intelligence? – Within ten years.” (1:22:09)
- Energy as the bottleneck: “The natural limit is electricity, not chips… The United States will need ninety‑two gigawatts more power.” (2:13–3:46)
- Nation‑scale data centres as strategic assets: a training run for xAI’s Grok used a “ten‑billion‑dollar super‑computer in one building.” (34:05–34:34)
- Deterrence through “mutual AI malfunction”: a state‑level balance of power where each side can crash the other’s mega‑centre if lines are crossed (25:56–27:12).
In this frame sovereignty equals location: keep the decisive compute inside your borders and guard it with armed security.
Two AI races, one sovereignty problem
Eric Schmidt actually sketches two separate competitions.
Race 1: the fortress sprint
“These data centres will be nationalised in some way… the stakes are too high.” (31:57)
In other words, only a handful of states or mega‑firms can afford a gigawatt hall and a ten‑billion‑dollar training run.
Race 2: the diffusion scramble
Minutes later he warns that, once the weights are distilled, “the final brain can run on four or eight GPUs – a box about this size,” making powerful models “a hundred, a thousand, a million” in number (33:19; 41:50).
During the episode Dave Blundin recalls a visit to OpenAI where researcher Noam Brown (not on the podcast) predicted that models will soon write their own step‑by‑step workplans, or “scaffolding,” making multi‑hour tasks autonomous by 2025. If true, diffusion in Race 2 speeds up because the planning layer travels with the weights, not with the cloud provider.
Why this matters for ‘sovereign AI’
- Fortress sovereignty may work during Race 1 while the capability lives in guarded sites.
- Race 2 dissolves that physical anchor. Portable checkpoints flow across borders faster than regulators can stamp passports.
- Schmidt offers a deterrence doctrine for Race 1 but admits the diffusion phase is “a set of unknown questions.”
- Location‑based sovereignty will have little time to prove its worth before portable, self‑planning checkpoints flood every jurisdiction.
Stelia calls this the sovereignty paradox: legal control framed around location collapses at the very moment the technology scales to everyday hardware.
Three gaps in the Schmidt approach
1. Compute does not equal control
Schmidt’s fortress protects the factory, not the product. Moments after describing armed guards he concedes that the same model, once distilled, can run on a desktop server:
“The final brain can be ported and run on four or eight GPUs – a box about this size.” (41:50–42:25)
When weights leave the building they lose all jurisdictional tags. The link between physical residency and real control vanishes.
2. Deterrence needs a target – what if there is none?
Nuclear doctrine worked because silos were fixed on maps. Schmidt’s “mutual AI malfunction” assumes similar visibility. Yet portable checkpoints allow powerful models to live on laptops, research clusters, open‑source mirrors and even criminal botnets. A strategy that threatens single buildings cannot restrain actors who own none.
3. Whose sovereignty counts?
The Schmidt narrative is Washington versus Beijing. Missing are Indigenous communities whose language archives feed multilingual models, European citizens protected by GDPR, or African start‑ups seeking local agency. National control over hardware does not answer their claims to participation and benefit.
A governance path that could work
Stelia’s engineers operate multi‑jurisdiction deployments every day. Three design principles align with technical reality:
| Principle | Practical measure |
| --- | --- |
| Residency first | Keep raw data and first‑training runs inside legally recognised regions using geo‑fenced compute and region‑tied encryption keys. |
| Transparency of weights | Publish weight hashes and lineage metadata so any checkpoint can be traced, wherever it runs (sketched below). Add inference watermarks for audit. |
| Participation hooks | Build consent, veto rights and benefit‑sharing into dataset pipelines so communities influence use before models go live. |
These tools do not pretend to project domestic law across borders. Instead they create verifiable constraints that travel with the artefact itself – a sharper fit for how machine‑learning actually works.
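Of the three, the transparency row is concrete enough to sketch in code. The snippet below is a minimal illustration, not any published Stelia tooling: it computes a SHA‑256 content hash over a checkpoint file and wraps it in a small lineage record. The file name, field names and parent‑hash convention are assumptions made for the example.

```python
# Minimal sketch of the "transparency of weights" measure: hash a
# checkpoint file and emit a lineage record that travels with it.
# File name, field names and the parent-hash convention are
# illustrative assumptions, not an established schema.
import hashlib
import json
import time


def hash_checkpoint(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a weights file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()


def lineage_record(path: str, parent_sha256: str | None = None) -> dict:
    """Build a publishable lineage entry for one checkpoint."""
    return {
        "checkpoint": path,
        "sha256": hash_checkpoint(path),
        # Hash of the model this one was distilled or fine-tuned from,
        # if any, so provenance chains back to the original training run.
        "parent_sha256": parent_sha256,
        "created_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }


if __name__ == "__main__":
    # Any runtime, in any jurisdiction, can recompute the hash and
    # compare it with the published record.
    print(json.dumps(lineage_record("model.safetensors"), indent=2))
```

Because the digest is derived from the bytes themselves, the record can be verified wherever the checkpoint ends up running – precisely the property that location‑based controls lose once weights leave the building.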
Why this matters now
With Race 1 already under way and Race 2 close behind, the window for workable governance is short.
“We are saying it is 1938… we need to start the conversation now, well before the Chernobyl events.” (28:20–28:41)
If policymakers chase the mirage of fortress sovereignty, the genuine levers of control will be buried under optics and compliance theatre. Residency, transparency and participation offer a route that communities can verify and industry can implement.
Stelia will publish a technical white paper on data sovereignty later this month.
Source: Moonshots with Eric Schmidt (YouTube, June 2025): “Ex‑Google CEO: What Artificial Superintelligence Will Actually Look Like w/ Eric Schmidt & Dave Blundin”.