Last week, Stelia took part in FOSDEM 2026, where CTO David Hughes and Principal Engineer Lukas Stockner were invited to present a deep-dive session on Building Cloud Infrastructure for AI.
Speaking to a packed room of open-source community members, David and Lukas explored what truly defines a “GPU cloud” beyond the hype – unpacking how its requirements differ fundamentally from those of traditional cloud infrastructure, and what it takes to build production-grade cloud infrastructure for AI workloads in practice.
Their session spanned the full AI stack, from hardware and firmware choices through to networking architecture, storage design, orchestration and virtualisation. Drawing on real-life experience, Stelia’s engineering team highlighted the often-overlooked trade-offs between performance and reliability, and the key design choices that ultimately shape how well systems perform in real-world AI environments.
As active and proud contributors to the open-source community, we were delighted to speak at FOSDEM. The event continues to bring together some of the most insightful and technically accomplished voices in the industry – and 2026 was no exception.
Watch the full talk here:
The conversation covered:
- Cutting through the “GPU cloud” hype: why many offerings are simply HPC-style cluster deployments in disguise, and what distinguishes a true cloud architecture for AI.
- The reality of performance vs reliability: why “fast enough” infrastructure often matters more than chasing maximum speed, and how this impacts storage, networking and overall system design.
- How to build your own AI cloud: a practical, end-to-end view of constructing GPU infrastructure using open-source technology.
- Building across the full AI infrastructure stack: connecting hardware, networking, storage, and managed services into a cohesive system capable of supporting scalable, production-grade AI applications.
Watch the full talk to explore how Stelia approaches building scalable, open, and production-ready AI infrastructure.