In February 2025, European leaders announced InvestAI, a €200 billion initiative to build AI “gigafactories” across the continent. Commission President Ursula von der Leyen unveiled the programme on February 11th, describing it as creating “unprecedented capital through InvestAI for European AI gigafactories” and “a unique public-private partnership, akin to a CERN for AI.” France pledged €109 billion in private investment, whilst €20 billion was allocated specifically for four gigafactories, each housing approximately 100,000 latest-generation AI chips. The rhetoric was stirring: technological sovereignty through infrastructure ownership, democratic oversight through geographical control, European values embedded in European silicon.
This represents one of the most expensive category errors in the history of technology policy.
The fundamental premise underlying Europe’s gigafactory strategy (that controlling physical infrastructure translates to meaningful governance of AI systems) reflects a profound misunderstanding of where power actually resides in algorithmic societies. This creates what we term the “sovereignty paradox”: the more sophisticated AI systems become, the less meaningful traditional territorial control mechanisms prove to be. Distributed AI systems resist the centralised monitoring and control that territorial governance assumes, whilst AI models exist as mathematical objects that can be instantly copied and deployed anywhere without degradation.
Recent events underscore this disconnect perfectly. When German authorities moved to block the DeepSeek AI app for unlawful data transfers to China, they targeted app store distribution rather than the underlying AI model itself. The mathematical weights powering DeepSeek’s capabilities remain unchanged and deployable through countless other channels, whilst enforcement focuses on controlling access points rather than algorithmic behaviour. This illustrates precisely how territorial governance mechanisms prove inadequate for governing distributed AI systems.
The Mathematical Absurdity of Territorial Control
The gigafactory approach rests on a fundamental category error about the nature of AI models themselves. Once training completes, an AI system exists as nothing more than a specific configuration of numerical parameters: billions or trillions of floating-point numbers arranged in precise mathematical relationships. These weights carry no inherent geographical identity, jurisdictional markers, or territorial constraints.
Consider the mathematical reality: a neural network trained on European data using European computational resources becomes functionally indistinguishable from an identical configuration of weights derived through any other process. The model’s “European-ness” exists only in its provenance metadata, not in its mathematical structure or functional characteristics. Two models with identical weights will produce identical outputs regardless of where they were trained, what data was used, or which regulations governed their creation.
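The point can be made concrete with a toy sketch (a hypothetical two-layer network; no real model or framework is implied): a network is fully determined by its weights, so a copy of those weights behaves identically wherever it runs.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer network, defined entirely by its weight matrices.
def forward(weights, x):
    w1, w2 = weights
    return np.tanh(x @ w1) @ w2

# "European" weights, and a bit-for-bit copy of them.
weights_eu = (rng.normal(size=(4, 8)), rng.normal(size=(8, 2)))
weights_copy = tuple(w.copy() for w in weights_eu)

x = rng.normal(size=(3, 4))

# Identical parameters yield identical outputs, regardless of where
# (or under which regulations) the weights were produced.
assert np.array_equal(forward(weights_eu, x), forward(weights_copy, x))
```

Nothing in the computation depends on provenance: the function of the model is exhausted by its numbers.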
This mathematical universality renders territorial control mechanisms absurd. A healthcare AI model trained in a European gigafactory using data from EU citizens under EU regulations can be perfectly replicated by simply copying its numerical weights. Once copied, those weights can be embedded in medical devices sold globally, integrated into healthcare systems operating under entirely different regulatory frameworks, or deployed on consumer smartphones without any technical mechanism for maintaining European governance.
The copying process itself reveals the deeper absurdity. Unlike physical goods, which degrade through reproduction, mathematical objects can be duplicated indefinitely with perfect fidelity. A single European-trained model can spawn millions of identical copies operating simultaneously across every jurisdiction on Earth, each mathematically indistinguishable from the “sovereign” original yet entirely beyond European control.
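Lossless duplication is trivially verifiable: serialise the weights and hash the bytes, and every copy produces the same digest as the original (a minimal sketch using a hypothetical weight tensor).

```python
import hashlib
import io

import numpy as np

# Serialise a weight tensor and hash the resulting bytes.
def digest(weights):
    buf = io.BytesIO()
    np.save(buf, weights)
    return hashlib.sha256(buf.getvalue()).hexdigest()

original = np.random.default_rng(1).normal(size=(1000,))
copies = [original.copy() for _ in range(5)]

# Every copy is bit-for-bit identical to the "sovereign" original.
assert all(digest(c) == digest(original) for c in copies)
```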
Distribution Trends and Governance Fragmentation
Whilst European policymakers focus on centralised training infrastructure, compelling technical and economic forces are driving AI inference towards distributed edge deployment. Real-time applications cannot tolerate the latency inherent in cloud-based inference: autonomous vehicles cannot wait 200 milliseconds for cloud processing during emergency braking; medical devices monitoring cardiac rhythm cannot pause for network connectivity during critical moments; industrial control systems require local processing capability that operates independently of network infrastructure and regulatory oversight.
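To put that latency figure in perspective, simple arithmetic (assuming a motorway speed of 100 km/h, which is not stated in the text) shows how far a vehicle travels during a 200-millisecond cloud round-trip:

```python
# Distance travelled during a 200 ms cloud round-trip at motorway speed.
speed_kmh = 100                       # assumed vehicle speed
speed_ms = speed_kmh * 1000 / 3600    # ~27.8 metres per second
latency_s = 0.200                     # cloud round-trip from the text

distance = speed_ms * latency_s
print(f"{distance:.1f} m travelled before the cloud responds")  # ~5.6 m
```

Roughly five and a half metres of blind travel per decision is why safety-critical inference moves to the edge.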
Privacy requirements accelerate this distribution trend. Healthcare AI processing sensitive patient data locally avoids both regulatory complexity and data transmission risks. Financial algorithms running on consumer devices eliminate the legal and technical vulnerabilities associated with transmitting sensitive information to centralised processing facilities. Industrial systems processing proprietary operational data avoid competitive intelligence risks by keeping computation local.
The economics further reinforce distribution. Edge inference eliminates ongoing cloud computing costs, reduces bandwidth requirements, and enables offline operation. The global edge AI market, valued at $8.2 billion in 2024 and projected to reach $55.6 billion by 2030, reflects genuine economic value creation rather than regulatory compliance theatre.
These technical and economic forces create governance fragmentation that strikes at the heart of the gigafactory model. A single AI model trained in a European facility might simultaneously operate on German automobiles, French medical devices, Italian manufacturing equipment, and American smartphones, each subject to different regulatory regimes with no technical mechanism for unified governance. The temporal dimension intensifies this challenge: when AI models make split-second decisions on mobile devices, traditional accountability mechanisms prove inadequate for computational events occurring too quickly and too locally for meaningful oversight using territorial governance frameworks.
Early indications of ephemeral agent networks and autonomous agent-to-agent frameworks suggest even more challenging governance scenarios ahead. As AI systems evolve towards autonomous coordination patterns, these temporal and jurisdictional challenges will intensify exponentially.
The Sovereignty Paradox in Practice
At the architectural level, distributed systems resist the centralised monitoring and control mechanisms that territorial governance assumes. Edge AI operates through peer-to-peer networks, mesh topologies, and intermittently connected devices that collectively create computational capabilities without central coordination. Emerging agent-to-agent frameworks suggest even more autonomous coordination patterns ahead. These emergent system properties cannot be governed through control over any specific infrastructure component, including the facilities where models were originally trained.
The sovereignty paradox thus reveals a category error at the heart of European AI policy: attempting to govern inherently global, distributed, and stateless systems through territorial control mechanisms designed for physical, localised, and persistent objects. The €200 billion gigafactory investment doubles down on this error by strengthening the least relevant aspect of AI governance (control over training location) whilst ignoring the most consequential challenge (governance of distributed deployment).
Opportunity Cost and Alternative Approaches
Perhaps the most damaging aspect of the gigafactory approach lies not in what it attempts to achieve, but in what it prevents. The €200 billion investment represents resources that could fund governance innovations addressing the genuine challenges of democratic accountability in algorithmic societies.
Consider what €200 billion could accomplish if directed towards governance capability rather than training infrastructure: comprehensive legal frameworks for algorithmic transparency; technical infrastructure for citizen participation in AI development; international institutions for democratic AI governance; research and development for privacy-preserving collaborative AI; community capacity building for algorithmic accountability.
These investments would address the actual governance challenges posed by AI systems: ensuring algorithmic decisions can be explained and contested; enabling meaningful citizen participation in the development of systems that shape their lives; creating accountability mechanisms that work across jurisdictional boundaries; building technical infrastructure that embeds democratic values into system architecture.
Technical transparency mechanisms can embed governance properties directly into AI models through cryptographic attestation, making compliance verifiable regardless of deployment context. Rather than relying on territorial control over training facilities, these approaches create technical constraints that ensure algorithmic behaviour aligns with democratic values wherever deployment occurs. Participatory governance mechanisms can enable meaningful citizen input into algorithmic system design without requiring territorial control over computational infrastructure.
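A minimal sketch of the attestation idea follows. For simplicity an HMAC stands in for a real public-key signature scheme, and the key name is hypothetical; the point is that verification depends only on the model bytes and the attestation, not on where the model runs.

```python
import hashlib
import hmac

# Simplified stand-in for an attestation authority's signing key.
REGULATOR_KEY = b"eu-attestation-key"

def attest(model_bytes):
    # Issue an attestation tag over the serialised model weights.
    return hmac.new(REGULATOR_KEY, model_bytes, hashlib.sha256).hexdigest()

def verify(model_bytes, attestation):
    # Verification uses only the bytes and the tag: it works identically
    # in any deployment context or jurisdiction.
    return hmac.compare_digest(attest(model_bytes), attestation)

weights = b"example-weights"  # stand-in for serialised model weights
tag = attest(weights)
assert verify(weights, tag)             # a genuine copy passes anywhere
assert not verify(weights + b"x", tag)  # any tampering is detectable
```

A production scheme would use asymmetric signatures so that anyone can verify without holding the signing key, but the governance property is the same: compliance travels with the weights rather than with the facility that produced them.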
This misallocation occurs precisely when the window for governance innovation remains open. AI systems are still developing; technical standards remain fluid; international norms are still forming. European leadership in governance innovation could establish frameworks that other democracies adopt, creating global infrastructure for algorithmic accountability.
Beyond the Gigafactory Delusion
The gigafactory delusion represents more than misguided technology policy; it embodies a fundamental misunderstanding of power in algorithmic societies. By investing €200 billion in training infrastructure that provides no meaningful governance capability over distributed deployment, European leaders reveal dangerous confusion about where control actually resides in AI systems.
The technical realities of distributed AI systems, and the emerging prospect of ephemeral agent networks operating beyond traditional oversight mechanisms, demand nothing less than a fundamental reimagining of how democratic societies govern algorithmic power. The gigafactory approach represents exactly the wrong answer to exactly the right question.
Democratic societies deserve better: governance frameworks that enhance rather than undermine democratic accountability, investment priorities that strengthen rather than weaken citizen agency, and technology policies that serve democratic values rather than political theatre. Recognition of this error creates opportunity for approaches that could actually work if democratic societies have the intellectual courage to pursue them.
About Stelia
Stelia develops AI platforms that embed democratic accountability into technical architecture, proving that transparency and cooperation deliver superior outcomes to territorial control. Our work demonstrates that governance wisdom combined with technical authority can create AI systems that serve democratic values through design rather than oversight: exactly what distributed algorithmic societies require.