A founder’s view on the next era of AI
Most industry observers still believe we’re in the early days of AI. They’re wrong.
The first era of artificial intelligence was a triumph of engineering. It was defined by breakthroughs in scale: GPUs got faster, models got bigger, projects like Stargate were launched. We built the infrastructure. We watched transformers reshape natural language, diffusion models redraw images and a totally new breed of AI-native products enter the mainstream. The energy of that period was electric. But it was also experimental, erratic and incomplete.
Today, the artificial tide is changing direction. We are entering the second era of AI, and its shape is beginning to crystallise: not in academic labs or GPU clusters, but in enterprise conference rooms, legal firms, global supply chains, customer support teams, and the daily workflows of billions of people.
The first era of AI built the engine. The second era is about using it.
This moment marks a fundamental transition. And like any transition, it comes with confusion and inertia. Many of the players who won in the first phase are now trying to force their way into continued relevance. But the centre of gravity has moved: the epicentre has shifted to application.
From Compute Wars to Value Wars
In the first era, value accrued to those who could build compute. Access to high-end GPUs became a competitive advantage. Entire business models formed around reselling compute and leasing hardware (an echo of the bare-metal hosting boom of the early 2000s).
But here’s the uncomfortable truth: compute is not the moat. At best, it’s table stakes.
The AI industry is now saturated with infrastructure providers desperate to monetise their place in the stack. They speak the language of tokens and throughput, but these are engineering metrics, not measures of user value. And while they focus on monetising bits and bandwidth, the real opportunity has moved up the stack.
We are now witnessing a decoupling: those who control compute will not necessarily control the future. Value has shifted away from raw hardware and toward ease of use, access, and deployment at huge scale. The second era of AI is about making intelligence accessible: reliably, securely, and in the right context.
The Second Phase of AI: Distribution at Scale
We are now in the “adoption era.” In this phase, the building is effectively done. The foundational breakthroughs required to build mass-market AI products have now occurred (research continues, but the core infrastructure exists). We’re no longer waiting on hardware; we’re waiting on distribution.
In this new landscape, the questions have changed:
• The critical question: how well does it integrate into a 100,000-person enterprise?
• Can it reduce legal review time by 40% and meet compliance requirements?
• Can you scale it across 70 languages with consistent tone, brand, and results?
This is where the next generation of AI companies will win: in the trench warfare of deployment, serviceability and global distribution of intelligence.
And just as importantly, in usability.
Ease of use is not a feature; it’s the differentiator. Organisations want systems that augment their existing operations, with minimal friction and maximum return. We’re seeing the rise of truly horizontal AI: core operational enhancements that drive measurable efficiency.
The era of experimentation is giving way to the era of execution.
Why the Old Guard Will Struggle
Who will be most surprised by this shift? The giants of Phase One, who assume they’ll simply evolve into Phase Two. But history rarely works that way.
The incentives are misaligned. The largest compute providers (Nvidia partners, hyperscalers, GPU lessors) have built their business models around consumption by a deeply technical niche audience, not around value creation. They profit when you use more resources, not when you deliver better results. This creates a fundamental conflict: their growth is not your growth, and their scale is not your efficiency.
As enterprises begin to demand value instead of volume, these providers are being caught off-guard. They are structured to win hardware cycles, not workload loyalty.
We are already seeing cracks in their armour. Once-untouchable players are spinning up foundation model labs and launching APIs on top, hoping to capture enterprise spend. But APIs are not ecosystems, and tokens are not outcomes. Without deep understanding of user needs, without vertical integration, and without true control of the full stack, they risk becoming irrelevant middlemen in a fast-consolidating value chain.
While GPU providers remain a critical part of the AI ecosystem, powering the infrastructure that enables training and inference at scale, they have shifted from being the epicentre of innovation to a foundational layer in a much larger value stack. As the focus moves from raw compute to real-world utility, the differentiators are in how seamlessly intelligence can be deployed, adopted, and trusted. GPUs are essential, but the future is decided elsewhere.
The Rise of Full-Stack Intelligence
In the second AI phase, the winners will be those who own the entire value chain: from silicon to sentiment. That means building systems, not just models.
This requires radical vertical integration and deep domain expertise. Control of the hardware. Custom foundation models. Scalable orchestration layers. And intuitive interfaces. All bound by a singular goal: make intelligence usable at scale.
This is where most AI startups fall short. They pick a piece of the puzzle (an LLM wrapper, a UX shell, an API filter) and try to build a business. Without full-stack control, they are at the mercy of upstream providers and downstream constraints. The result is fragmentation, not acceleration.
To thrive in this new world, you need to abstract the complexity. Hide the infrastructure. Deliver intelligence as a service: in the human sense, not the cloud computing sense.
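To make that abstraction concrete, here is a minimal, purely illustrative sketch (the `Assistant` class, its routing heuristic, and the backend names are hypothetical, not Stelia’s actual API) of what “intelligence as a service” can look like from the caller’s side: the user states intent, while model selection, routing, and infrastructure stay hidden behind a single interface.

```python
from dataclasses import dataclass


@dataclass
class Answer:
    text: str
    language: str


class Assistant:
    """Hypothetical facade: callers express intent; model choice,
    hardware, and scaling decisions are resolved behind this interface."""

    def __init__(self, backends):
        # backends: mapping of capability name -> callable(prompt) -> str
        self._backends = backends

    def ask(self, prompt: str, language: str = "en") -> Answer:
        # Toy routing heuristic; a real system would use classifiers,
        # policy, and compliance checks at this point.
        capability = "legal" if "contract" in prompt.lower() else "general"
        handler = self._backends.get(capability, self._backends["general"])
        return Answer(text=handler(prompt), language=language)


# Usage: infrastructure details never leak to the caller.
assistant = Assistant(backends={
    "general": lambda p: f"[general model] {p}",
    "legal": lambda p: f"[legal model] {p}",
})
print(assistant.ask("Summarise this contract").text)
```

The design point is the shape of the boundary, not the toy routing: everything below `ask()` can be swapped, scaled, or relocated without the caller noticing.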
And it must work across domains. Across cultures. Across languages. The next phase of AI is multicontextual. It’s about understanding what a user says, why they say it, when they say it, and how best to respond.
This is a systems challenge. And only a few players are positioned to solve it.
Why Stelia Already Has the Answer
At Stelia, we’ve been building for this moment for over five years. Through our own deep research labs across multiple disciplines, we’ve centred ourselves on creating the technology stack that truly distributed AI requires: a full-stack AI company, from GPU-level compute infrastructure to in-house model training to orchestration systems that serve hundreds of millions of users across sectors and geographies.
We didn’t chase hype cycles. We architected for scale. Our platforms were engineered for the complexities of real-world deployment, not just benchmarks.
Where others scrambled to bolt on value, we built it into the foundation.
Stelia’s stack is an intelligence platform designed for mass adoption. From the kernel to the agentic interface, every component is optimised for enterprise use, regulatory clarity, and world-class experience.
This is Phase Two, purpose-built.
The artificial tides are changing, and with them the old paradigms are eroding. The future will be owned by those who can deliver the deepest impact.
At Stelia, we are the shift. And we’re just getting started.