On March 5, Stelia CTO David Hughes joined the AI Builders London ML meetup, speaking to a room of engineers and practitioners exploring the realities of building AI systems that move beyond prototypes and into production.
In his session, David drew on what he described as “tales from the engine room” – his experience building and operating production systems across infrastructure, networking, and large-scale distributed environments – to reframe how teams should think about inference products in practice. He argued that the model itself often represents only a fraction of the challenge: the surrounding infrastructure, and particularly data engineering, accounts for the majority of the work, with data quality ultimately determining whether the system succeeds or fails. It is a reminder that “garbage in, garbage out” still applies, no matter how advanced the model.
A recurring theme throughout the evening was the gap between experimentation and real-world deployment, and how architectural decisions made early in the lifecycle compound as systems scale across regions, users, and cost constraints.
The evening also featured contributions from Andreas Kollegger of Neo4j alongside a series of community demos showcasing emerging tools and approaches across the AI ecosystem. The event brought together a strong community of builders focused on practical AI engineering.
The conversation covered:
- From prototyping to production inference systems: the gap between building demos and operating real-world AI products at scale.
- The “engine room” of AI systems: how infrastructure, data engineering and operational layers matter far more than the model itself.
- Why data engineering drives outcomes: how pipelines, quality, and transformation layers are often the true determinant of product success.
- Scaling challenges in production environments: navigating multi-region deployment, cost constraints, latency and the trade-offs of real-world operations.
Watch the full talk to explore David’s perspective on moving inference products from prototype to production.