
OpenClaw, frontier models, and the enterprise foundations that matter

Stelia’s VP of Applied AI discusses the developments driving the market right now and where enterprise attention should be.

Stelia is driven by the expertise of its engineers, architects, and applied AI specialists: the people whose thinking underpins everything we build. So we’re bringing those people forward. With that in mind, we recently spoke with Paul Heathcote, Stelia’s VP of Applied AI, for a candid conversation about the state of the market: where the genuine developments are, where the noise is, and what enterprises should be doing now to position themselves to scale with confidence.

We started with the big picture, asking Paul for his read on the applied AI market and where the most significant developments are happening.


What areas of applied AI are you finding particularly exciting right now?

Just a small question! Well, there are a few things. I think firstly, there’s this continuing development of model capabilities, whether it’s new releases of some of the frontier models, with the likes of Google and Anthropic competing with each other: Gemini 3 was the best, then Anthropic launched their latest model, and now Google has just launched Gemini 3.1 as a preview. So there is a huge amount of development there.

But also, in the open-source model space, there’s not a week that goes by without another open-source model being released, which is keeping the pressure on the frontier models. So from that point of view, I think we’re just seeing more and more capabilities evolving, which are getting better and better.

I think the other thing is, it’s been very interesting to see all of the hype around what is currently called OpenClaw, but was Clawdbot, and then Moltbot. And that seems to have brought attention to this idea of having locally deployed models and agents that can start to act on your behalf, although it has shone a light on a lot of security concerns and risks, as people have had their accounts compromised and API keys stolen.

But the utility of it is obviously significant, because so many people have jumped on board using it. What it shows is that kind of solution really has potential, as long as it’s properly secured, is governed, and adheres to the policies that enterprises will need. It’s often hobbyists and enthusiasts who are the early adopters of these things, taking all the risks and highlighting what the value might be, but also which aspects need addressing and protecting. I certainly wouldn’t advise any enterprise to start deploying OpenClaw this week, but I think it shows that those kinds of solutions have got potential and could add a lot of value if they’re properly designed and implemented securely.


OpenClaw is a free, open-source autonomous AI agent that runs locally on a user’s machine, integrating with LLMs such as Claude or ChatGPT and accessible via messaging services like WhatsApp and Telegram. As Paul notes, its rapid uptake among hobbyists has been matched by equally rapid exposure of its risks, most notably when a Meta AI security researcher found her inbox mass-deleted after granting her OpenClaw agent access to triage it.

To Paul’s point, while the utility of these capabilities is evident, the governance, security, and enterprise readiness required to deploy them responsibly is not. With that in mind, we asked Paul where organisations should actually be directing their attention right now.


For enterprises that want to be ready to scale AI capabilities within their organisations, where should they be starting, and what does “being ready” actually look like in practice?

With AI solutions, it is very easy to get something up and running that showcases a use case and is generally a happy path. If all the data is perfect, and a model is set up in its perfect operational environment, you can get it to execute something quite impressive, and stakeholders will see it and think, “wow, that’s amazing, let’s roll it out”. But to deal with the realities of data in an enterprise, and systems that can become unavailable or slow, or any number of other variables, you need a solid foundation to build these AI solutions on top of. With a solid foundation you can manage the realities of deploying a production system: you design and build it defensively so that it doesn’t crash or behave undesirably when it comes into contact with circumstances or data that don’t look perfect.

Unfortunately, that doesn’t look very glamorous when you’re doing a demo to senior stakeholders in a business, because in reality it looks like lots of monitoring and logging and lots of other technical details. But it’s getting the foundations right, and investing in them, that will enable you to scale faster and in a way that can cope with everything a production environment throws at you.

You’ve got to slow down to go faster, and get away from this mindset of continuously piloting and then immediately failing when you try to scale because none of that capability is in place. Think about what you need to put in place foundationally to build on top of, so that when you aren’t in the perfect situation, and timeouts happen, or systems are unavailable, or there are data integrity problems, or systems change, or external APIs go down, or whatever it might be, the whole thing doesn’t crash. Which, as I said, isn’t particularly glamorous, but it’s necessary.

It’s a little further from where the hype is, but it’s where all the value is.


This is a perspective that sits at the centre of how Stelia approaches every engagement – beginning not with the model, but with the architecture that supports it; designing systems that are resilient by default, observable in production, and built to scale as requirements evolve.

Our conversation with Paul covered considerably more ground, taking in the use of small language models, the importance of selecting the right model for the right job, and what agentic AI actually means in an enterprise context. In the next instalment of this series, we explore these areas in further detail, including why enterprises may also need to rethink how they set expectations for AI systems, and why assumptions carried over from traditional software deployments can be a source of misaligned expectations.
