GTC 2025 was a blast, and bigger than ever before. NVIDIA put a lot of time and effort into preparing San Jose for it, and the massive attendance numbers reflected the industry’s growing recognition that AI is no longer just an emerging technology—it’s becoming fundamental infrastructure.
Key Observations
NVIDIA appears to have recalibrated its product release approach. In stark contrast to last year’s “more, more and more, and now” theme, this year emphasized a more measured cadence. The focus has clearly shifted toward:
1. Efficiency over acceleration: Getting more performance from existing deployments rather than constantly pushing new hardware rollouts
2. Resource optimization: Maximizing the value of assets already in the field rather than relying on freshly deployed infrastructure
3. Maturation over novelty: Building stable, reliable platforms that can support enterprise-grade workloads
This strategic pivot was evident not just in Jensen’s keynote but across the ecosystem. OEMs were pushing for greater efficiency, while storage and data platform vendors were demonstrating increasingly intelligent, performant, and reliable solutions. The entire industry seems to be preparing for the next wave of AI — widespread enterprise adoption.
The Enterprise Expectation Gap
Enterprise customers are fundamentally different from early AI adopters. Where pioneers might tolerate instability in exchange for cutting-edge capabilities, enterprises expect rock-solid reliability as a foundational business requirement. These organizations are looking for:
- Mature technology stacks (both software and hardware)
- Familiar consumption models that mirror the cloud computing paradigm they’ve adapted to over the past decades
- Predictable performance and costs
- Compliance-ready infrastructure with robust security
We would be foolish not to recognize this reality. The enterprise wave will drive massive scale for everyone in the ecosystem, but only if we mature our technologies and build platforms that meet enterprise expectations.
Beyond Data Transfer: True Data Mobility
My presentation at GTC focused on a critical enabler for enterprise AI adoption: petabit-scale data mobility. This goes beyond simply moving data—it’s about creating an environment where data flows effortlessly throughout the AI lifecycle, from training to inference to feedback loops.
This requires a fundamentally different approach to cloud architecture:
- A well-designed, globally distributed elastic platform
- Elimination of punitive data transfer fees that create artificial barriers
- Properly designed availability zones and regions
- Connectivity via a global backbone that enables true data mobility
This infrastructure foundation will catalyze mass consumption and adoption of AI in the enterprise segment, creating value across the ecosystem.
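To make this concrete, here is a minimal, purely illustrative Python sketch of what zone- and region-aware data placement could look like on such a platform. Every name in it (Region, pick_placement, the example regions and fees) is hypothetical and not tied to any vendor API; the point is simply that removing egress fees changes where replicas can economically live.

```python
from dataclasses import dataclass

@dataclass
class Region:
    name: str                  # e.g. "us-west", "eu-central" (illustrative names)
    zones: list[str]           # availability zones inside the region
    egress_fee_per_gb: float   # 0.0 once punitive transfer fees are eliminated

def pick_placement(regions: list[Region], consumer_regions: list[str]) -> list[str]:
    """Pick regions to hold dataset replicas.

    Prefer regions that host a consumer (training, inference, or feedback
    workloads) and charge no egress; if none qualify, fall back to the
    single cheapest region and accept the transfer cost.
    """
    chosen = [r.name for r in regions
              if r.egress_fee_per_gb == 0.0 and r.name in consumer_regions]
    if not chosen:
        chosen = [min(regions, key=lambda r: r.egress_fee_per_gb).name]
    return chosen

# Hypothetical example: training runs in us-west, inference in eu-central.
regions = [
    Region("us-west", ["a", "b", "c"], egress_fee_per_gb=0.0),
    Region("eu-central", ["a", "b"], egress_fee_per_gb=0.09),
]
print(pick_placement(regions, consumer_regions=["us-west", "eu-central"]))
# -> ['us-west']: only the fee-free consumer region qualifies; with fees
#    removed everywhere, both regions would hold a replica.
```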
Key Takeaways
Cloud Computing is dead—long live Cloud Computing!
The first generation of cloud computing wasn’t designed for the dynamic, data-intensive nature of AI workloads. We need Cloud Computing 2.0 (reminiscent of the Web 2.0 evolution) that’s built from the ground up for distributed AI.
We must also extract maximum value from already-deployed resources. These assets will likely persist through multiple ownership cycles, eventually delivering value to the broader market that desperately needs quality compute at accessible price points.
The winners in this next phase won’t be those with the most impressive hardware specs, but those who build the most efficient, enterprise-ready platforms that make distributed AI accessible, manageable, and cost-effective for mainstream adoption.
The full article describing the presentation is here, and the presentation deck is below: