
The AI tooling landscape, vendor ecosystems, and why flexibility has to come first

Stelia’s VP of Software Engineering discusses the importance of flexibility as the AI tooling market continues to evolve at pace.

The AI development tooling market is in a constant state of flux. Tools are rising to prominence and being displaced in a matter of months, not years, as engineering teams continuously reassess what best fits their needs. Claude Code, for instance, has become one of the most talked-about AI coding tools in the industry, rapidly closing ground on GitHub Copilot and Cursor – tools that themselves only reached prominence in the last two years. New frameworks, libraries, and capabilities are emerging at a pace that shows no sign of slowing.

While this creates significant opportunity, it also introduces a new kind of complexity. One that many teams underestimate. The challenge is no longer access to capability, but the ability to evaluate it effectively and make decisions that hold up as the landscape shifts. As new releases arrive at a near-constant rate, the ability to distinguish genuine advancements from the noise has become as important as the technical decisions that follow.

With that in mind, we asked James Dobson, our VP of Software Engineering, who navigates this landscape daily, what advice he would give development teams when assessing a new tool or framework in this context.


Understand your options

You absolutely need to consider a lot of alternatives. Don’t fall into the trap of having a favourite tool that becomes the only one you want to use.

In practice, the biggest failure mode we see is teams committing too early to a single tool and optimising around it, rather than evaluating the problem space properly.

Be genuinely objective about each framework you’re looking at and what the pros and cons of each selection actually are.

And be vigilant. Every feature comes with a downside – whether that’s latency, cost, reduced control, or constraints on how you structure your system. There will always be something that is expected to work in a certain way, or might not fit a particular use case very well, so keeping an open mind about tool selection is incredibly important.

Be aware of vendor ecosystems

It is also critical to consider vendor lock-in. The reality is that most teams underestimate lock-in risk until they need to switch, at which point the cost is already embedded in their architecture.

Many tools nowadays push their customers into a particular way of doing things to keep them inside their ecosystem. Ask yourself where the lock-in risk lies if you choose a certain application.

Because, as we’ve all seen, third parties can go down; take Cloudflare or, more recently, AWS. So it’s important to keep in mind the ability to switch your software and the composability to use different vendors when necessary.

Having abstractions in your code facilitates switching between different implementations and is one of the key aspects of how we develop at Stelia: by building interfaces. If we were using a mail client, for example, and there was a fairly obvious one to use, we would still build our own interface around it, with the understanding that if we ever decided to switch vendors, it would be much easier to do so from a programming and library perspective. It becomes a contained change, rather than a costly system-wide rewrite.
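The mail client example above can be sketched in code. This is a minimal, hypothetical illustration of the pattern – the interface, adapter, and vendor names are invented for this sketch, not Stelia's actual implementation:

```python
from abc import ABC, abstractmethod


class MailClient(ABC):
    """Vendor-neutral contract that the rest of the codebase depends on."""

    @abstractmethod
    def send(self, to: str, subject: str, body: str) -> bool:
        ...


class AcmeMailClient(MailClient):
    """Adapter around a hypothetical vendor SDK. Switching vendors means
    writing a new adapter like this one, not rewriting every caller."""

    def send(self, to: str, subject: str, body: str) -> bool:
        # The vendor-specific SDK call would live here, contained
        # behind the interface.
        print(f"[acme] sending to {to}: {subject}")
        return True


def notify(client: MailClient, user_email: str) -> bool:
    # Application code accepts the interface, never a concrete vendor type.
    return client.send(user_email, "Welcome", "Thanks for signing up")
```

Because callers only ever see `MailClient`, replacing the vendor is a contained change: add a new adapter and swap it in at the composition point.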


In the current AI development landscape, that kind of forward thinking, building with the assumption that your requirements will change, is what separates teams that scale well from those that find themselves constrained when a better option emerges. In a market moving this quickly, the cost of inflexibility compounds fast.

This raises a more fundamental question for engineering teams: with so much in the market, is the explosion of tooling actually accelerating AI development, or is it slowing it down?


Yes, and it can narrow the experience developers get

The explosion of tooling options can slow teams down — and more subtly, it can narrow the experience developers build over time.

There is a particular trend among the large LLM providers – OpenAI, Anthropic and others – whereby they are all actively building developer ecosystems designed to lock teams into their services.

The issue there is that you will have a very different experience with different types of LLMs, depending on the use case or even the vendor. Having the flexibility to move between different providers, or even different models within the same provider, to see that difference in results is where the real advantage lies.

Building for model agnosticism

The practical response to this is building for model agnosticism from the outset.

At Stelia, we’ve built an agnostic system around this, with the ability to switch, even within the same session, to a different model. Just by specifying the model name, our system understands which model to use within that session. It’s even possible to run multiple models within the same conversational window, with three or four contributing and responding to one another, each handling the tasks they’re best suited for. Those building on top of our platform won’t require any form of specialised software to do this.

To remove this core flexibility challenge for engineering teams, we built a routing layer that interprets model selection, a unified context strategy that works across providers with meaningfully different API conventions, and normalised input and output handling so responses from one model can be cleanly passed as inputs to another. This is the kind of engineering work we believe all teams should be taking seriously.
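The shape of such a routing layer can be sketched briefly. Everything below is an illustrative assumption – the model names, message shape, and stubbed providers stand in for real provider adapters – but it shows the core idea: route by model name, and normalise responses so one model's output can feed another:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Message:
    """Normalised message shape shared across all providers."""
    role: str      # "user" or "assistant"
    content: str


# Stub adapters standing in for real provider SDK calls. Each one
# translates the shared Message format to and from its provider's API.
def provider_a(messages: list[Message]) -> Message:
    return Message("assistant", f"A answered: {messages[-1].content}")


def provider_b(messages: list[Message]) -> Message:
    return Message("assistant", f"B answered: {messages[-1].content}")


# The routing table: switching models is just a different string key.
ROUTES: dict[str, Callable[[list[Message]], Message]] = {
    "model-a": provider_a,
    "model-b": provider_b,
}


def complete(model: str, history: list[Message]) -> Message:
    """Route a conversation to whichever model the caller names."""
    return ROUTES[model](history)


history = [Message("user", "hello")]
reply_a = complete("model-a", history)
# Because output is normalised, one model's reply can be passed
# straight into another model in the same conversation.
reply_b = complete("model-b", history + [reply_a])
```

With this structure, changing models really is a configuration change: callers pass a model name, and the routing and normalisation layers absorb the provider differences.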

The reason is straightforward. If switching models requires more than a configuration change, you’ve already introduced friction that will compound over time.

The complexity of so many offerings in the market has effectively become the lock-in itself. With so many providers competing for the same space, they are all trying to keep their customers inside their ecosystem. We’ve made it possible to utilise only those components that make sense.


In a market where every provider is competing for permanence in your stack, the ability to remain genuinely flexible is fast becoming one of the most valuable capabilities a development team can have. Understanding this, and building for it from the start, is increasingly what separates the teams that scale with the market from those constrained by it.

Ultimately, the primary architectural risk is not choosing the wrong model; it’s designing a system that can’t adapt when a better one appears.
