As organisations move beyond early experimentation with AI, a new set of strategic questions begins to emerge.
Should enterprises develop their own AI capabilities internally, or partner with specialists? How should they think about choosing between different models? And in a market evolving this quickly, how do you avoid locking yourself into technology that could soon be outdated?
In the latest instalment of our ‘in conversation’ series, our VP of Applied AI, Paul Heathcote, shares how organisations should approach the build-versus-partner question, and why flexibility is becoming one of the most important design principles in enterprise AI.
When it comes to applied AI, where do you see enterprises getting the most value from building their own capability versus partnering with specialists?
Flexibility matters more than committing to a single AI provider.
It really depends on what you’re partnering for.
Right now, I’d be cautious about getting locked into a single stack or set of solutions. The pace of development in AI is extraordinary. Capabilities are evolving constantly, and what looks like the right choice today might not be the best option even a few months from now.
If an organisation signs a long-term deal with a particular model provider today, they could quickly find themselves constrained if something significantly better launches later. Switching might then become difficult or expensive.
Because of that, flexibility is incredibly important.
There are some emerging standards around how services interact, but there’s still a lot of variability between providers. That means organisations need to think carefully about their integration strategy, particularly the middleware layer that allows them to work with different models and services without tightly coupling their systems to a single provider.
Designing that flexibility in early makes it much easier to take advantage of improvements in the ecosystem as they happen.
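The middleware idea Paul describes can be sketched in a few lines. This is a minimal illustration, not a real integration: the `VendorAAdapter` and `VendorBAdapter` classes stand in for wrappers around actual provider SDKs, and the names are hypothetical. The point is that application code depends only on a neutral interface, so swapping providers becomes a one-line configuration change rather than a rewrite of every call site.

```python
from abc import ABC, abstractmethod


class ChatModel(ABC):
    """Provider-agnostic interface: application code depends only on this."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class VendorAAdapter(ChatModel):
    # Hypothetical adapter; in practice this would wrap vendor A's SDK.
    def complete(self, prompt: str) -> str:
        return f"[vendor-a] response to: {prompt}"


class VendorBAdapter(ChatModel):
    # A second provider behind the same interface.
    def complete(self, prompt: str) -> str:
        return f"[vendor-b] response to: {prompt}"


def build_model(provider: str) -> ChatModel:
    """The single switch point: changing providers is a config change,
    not a change to every call site."""
    registry = {"vendor-a": VendorAAdapter, "vendor-b": VendorBAdapter}
    return registry[provider]()


# Application code never imports a vendor SDK directly.
model = build_model("vendor-a")
print(model.complete("Summarise this contract"))
```

If a significantly better model launches later, only the registry (or the configuration that selects from it) needs to change.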
How should organisations think about evaluating AI partners and skills?
The ability to adapt quickly matters more than long-term experience with any one tool.
One of the hard realities of applied AI is that it’s still a very new field.
There simply aren’t many people in the market who have been building large language model-powered applications for years and years, because the technology hasn’t existed for that long.
That means organisations need to think slightly differently about how they evaluate partners and skills.
Of course experience matters, but it’s less about whether someone has spent years working with one specific model or tool. What matters just as much is whether they’ve shown they can learn quickly, adapt to new technologies, and turn emerging innovations into real business value.
In a space where new models and techniques are appearing constantly, the ability to learn and evolve quickly is incredibly important.
Are we at the point where industry-specific AI models outperform general-purpose ones for enterprise use cases?
Picking the right AI model is about fit, not category.
It’s less about industry-specific versus general-purpose models and more about choosing the right model for the job.
The first question is: how complex is the task you’re trying to solve? Are there multiple levels of reasoning required?
If the task is relatively straightforward, like summarising documents or handling simple transformations, smaller language models can often do it very effectively. They’re typically far more cost-efficient, and there are plenty of open-source options that are perfectly capable of handling those kinds of tasks.
The next step is thinking about fine-tuned models.
If an organisation has proprietary data that contains patterns not present in publicly available datasets, there’s a real opportunity to fine-tune a model using that data. That can create a capability that becomes a unique asset for the business, because it embeds knowledge that only that organisation has.
But there are also areas that are extremely general-purpose where building your own model rarely makes sense.
Coding is a good example. There’s so much open-source code available that frontier models trained on those datasets are incredibly strong, especially for languages and frameworks in very common use, like Python or React. For most organisations, it would be almost impossible to compete with that level of scale.
In those cases, it’s usually far more practical to consume those capabilities rather than trying to recreate them. But, back to my previous point, you need to do that in a way that lets you switch horses if another one turns out to run faster later on, rather than getting locked in.
How should enterprises think about structuring their overall AI model strategy?
Most enterprise AI strategies end up combining three different model approaches.
A useful way to think about it is in three categories.
First, there are situations where smaller language models, or even pruned versions of larger models, are perfectly sufficient for a very specific task. Those can often deliver the capability you need in a very efficient and cost-effective way.
Second, there are opportunities to fine-tune models using proprietary data. That allows organisations to create AI capabilities that are tailored to their specific environment and processes.
And third, there are the frontier models that provide extremely powerful general-purpose capabilities. In many cases, it makes more sense simply to consume those services rather than trying to replicate them internally.
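The three categories above amount to a routing decision: send each task to the cheapest model tier that can handle it. The sketch below is purely illustrative; the task names and model labels are hypothetical placeholders, not real services.

```python
# Hypothetical tiering: route each task to the cheapest capable model tier.
ROUTES: dict[str, str] = {
    "summarise_document": "small-or-pruned-model",   # category 1: simple, specific tasks
    "classify_claims": "fine-tuned-model",           # category 2: tuned on proprietary data
    "generate_code": "frontier-model-service",       # category 3: consumed, not replicated
}


def route(task: str) -> str:
    """Pick a model tier for a task, defaulting to the general-purpose
    frontier service for anything unrecognised."""
    return ROUTES.get(task, "frontier-model-service")


print(route("summarise_document"))
```

Keeping the routing table explicit also supports the flexibility point: as models improve, re-tiering a task is a configuration change rather than an architectural one.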
There may be some situations where building a model from scratch is justified — particularly if an organisation has a very large amount of proprietary data and expects long-term demand for that capability.
But the reality is that training large models is extremely expensive. For most organisations, it would need to be something genuinely unique and compelling for that investment to make sense.
Ultimately, successful AI strategies are less about committing to a single model, platform, or provider and more about designing systems that can evolve.
The organisations that get the most value from AI won’t necessarily be the ones that pick the “perfect” model today. They’ll be the ones that build the flexibility to take advantage of the next breakthrough when it arrives.
In the next instalment of the series, our VP of Applied AI will explore one of the most talked-about developments in the space today: agentic AI, separating the hype from the reality and examining what enterprises should realistically expect from these systems.