Why disappointment is fuelling the next wave of AI litigation

AI procurement is entering a new phase, and legal teams must shift focus from contracts alone to verification, literacy, and ongoing oversight.

As AI capabilities move from experimental pilots to business-critical systems, procurement is entering a new phase. With this shift comes a less visible but equally significant change: the risk landscape for supplier disputes is evolving in parallel.

The next wave of AI-related litigation won’t primarily stem from deliberate wrongdoing or catastrophic system failure. It will emerge more subtly through buyer disappointment as performance, explainability, or compliance increasingly fall short of expectations.

This is litigation born from misalignment, where what was sold diverges from what was delivered, and where accountability remains unclear until it’s tested in court.

However much weight your legal team might (fairly!) place on a detailed contract, a contract alone won’t prevent future disappointment and conflict. Prevention is better than cure, and it starts with sharper questioning at the outset: understanding what you’re actually buying, not just what the marketing material suggests, and selecting suppliers who maintain control over their full stack and can answer the difficult questions with specifics.

This article identifies the common sources of this disappointment gap and considers how addressing these factors now can prevent misalignment and consequent litigation later.

The roots of disappointment

With the industry moving at pace, many organisations seeking to capitalise on developments understandably lack the AI literacy to distinguish aspirational capability from actual performance. Suppliers, for their part, do not always fully control or understand the components within their own solutions. The result is misunderstandings that solidify into claims when expectations meet reality.

Beyond traditional breach claims, the common causes of AI-related litigation stem from:

Performance misalignment

The problem: The most common source of AI-related disputes is the gap between what was marketed and what gets delivered in practice.

Vendors tend to make sweeping claims, “fully compliant”, “bias-free”, “fully autonomous”, that often don’t hold up operationally. Systems marketed as automated may require consistent human oversight, and tools described as unbiased may be trained on data sets that are anything but.

Equally, performance can shift after deployment. Models are frequently retrained or updated, and this can cause unexpected behaviour or accuracy decline.

All of this becomes more difficult to address when contracts lack clarity on what “acceptable performance” actually means. Without objective standards in the specs, disappointment inevitably follows.

Why it leads to litigation: Misrepresentation, even if unintentional, can trigger claims of inducement or rescission. Performance drift can lead to disputes over service levels, implied warranties, or, in some cases, even breach of contract. And when output quality falls short but the contract offers no measurable standard, disappointment itself can develop into a legal claim.

Prevention:

  • Request empirical validation data and independent test results before contracting.
  • Require contractual notice obligations for model updates, with clear product version documentation.
  • Establish agreed benchmarks for acceptable drift and define what constitutes material degradation, as sketched below.
  • Define objective performance metrics upfront and avoid open-ended warranties such as “the system will deliver accurate insights” where “accurate” remains undefined.
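
To make that drift benchmark concrete, the following is a minimal Python sketch of how a buyer might monitor post-deployment accuracy against a contractually agreed baseline. The baseline figure, tolerance, and evaluation data are hypothetical placeholders rather than figures from any real agreement.

```python
# Minimal sketch: checking post-deployment performance against a contractual
# baseline. All values are hypothetical; real thresholds would come from the
# benchmarks agreed in the contract.

BASELINE_ACCURACY = 0.92        # accuracy accepted at contract signature (assumed)
MATERIAL_DEGRADATION = 0.03     # agreed tolerance before a drop counts as "material"


def evaluate_accuracy(predictions: list[int], labels: list[int]) -> float:
    """Proportion of predictions matching the agreed ground-truth labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)


def check_for_material_drift(predictions: list[int], labels: list[int]) -> dict:
    """Compare current accuracy with the baseline and flag material degradation."""
    current = evaluate_accuracy(predictions, labels)
    drift = BASELINE_ACCURACY - current
    return {
        "current_accuracy": round(current, 4),
        "drift_from_baseline": round(drift, 4),
        "material_degradation": drift > MATERIAL_DEGRADATION,
    }


if __name__ == "__main__":
    # Hypothetical monthly evaluation on the agreed hold-out set.
    result = check_for_material_drift(
        predictions=[1, 0, 1, 1, 0, 1, 0, 0, 1, 1],
        labels=[1, 0, 1, 0, 0, 1, 0, 1, 1, 1],
    )
    print(result)
    # {'current_accuracy': 0.8, 'drift_from_baseline': 0.12, 'material_degradation': True}
```

In practice, the evaluation set and the definition of “material degradation” would come directly from the benchmarks agreed at contract stage, and the check would run on a schedule aligned with the supplier’s update notices.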

Data and IP risk

The problem: Data and IP represent critical assets in AI operations, yet their provenance and handling can remain opaque until problems emerge.

Suppliers may incorporate open-source models, datasets, or code with unclear licensing or uncertain provenance, resulting in the buyer discovering too late that they face downstream IP infringement risks, or that they don’t actually own the outputs their system generates.

Privacy failures follow a similar pattern. AI systems may process personal data in ways that were never disclosed during procurement, or that breach UK GDPR and other data privacy regulations. Buyers may assume compliance from the outset, only to discover post-deployment that data flows were never properly mapped, that processing lacks a legal basis, or that cross-border transfers were never addressed; by then, the exposure is live.

These risks are materialising across industries, with high-profile disputes – from the New York Times’ lawsuit against OpenAI to concerns over voice synthesis and image rights – illustrating how quickly these uncertainties translate into legal exposure.

Why it leads to litigation: IP ambiguity exposes buyers to third-party infringement claims or disputes over ownership of valuable outputs. Privacy breaches can trigger regulatory fines, lead to reputational damage, and cause litigation from affected individuals or customers. In both cases, the buyer’s defence, “we relied on the supplier’s assurances”, rarely provides adequate protection.

Prevention:

  • Demand full disclosure of data sources, including provenance of training data and licensing terms for all third-party components; a simple manifest check is sketched after this list.
  • Require AI-specific IP indemnities that go beyond standard third-party IP clauses to address model outputs and generated content.
  • Request Data Protection Impact Assessments (DPIAs) already prepared by the supplier, or require input into your own assessment process.
  • Obtain transparency on data flow diagrams, including where data is processed, stored, and whether cross-border transfers occur.
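
As an illustration of that disclosure point, here is a minimal sketch of how a buyer might screen a supplier-provided component manifest for licensing and provenance gaps. The manifest entries, field names, and the approved-licence list are all hypothetical; real diligence would work from the supplier’s actual disclosures and your own licensing policy.

```python
# Minimal sketch: validating a supplier-provided component manifest for missing
# provenance or unapproved licences. The manifest entries and the approved-licence
# list are hypothetical placeholders.

APPROVED_LICENCES = {"Apache-2.0", "MIT", "BSD-3-Clause"}  # assumed internal policy

component_manifest = [
    {"name": "base-language-model", "licence": "Apache-2.0", "training_data_source": "documented"},
    {"name": "image-dataset",       "licence": "unknown",    "training_data_source": "undocumented"},
    {"name": "tokeniser-library",   "licence": "MIT",        "training_data_source": "documented"},
]


def flag_provenance_gaps(manifest: list[dict]) -> list[str]:
    """Return component names that need follow-up before contracting."""
    issues = []
    for component in manifest:
        if component["licence"] not in APPROVED_LICENCES:
            issues.append(f"{component['name']}: licence not on the approved list")
        if component["training_data_source"] != "documented":
            issues.append(f"{component['name']}: training data provenance undocumented")
    return issues


print(flag_provenance_gaps(component_manifest))
# ['image-dataset: licence not on the approved list',
#  'image-dataset: training data provenance undocumented']
```

Even a simple check like this forces the provenance conversation to happen before contracting rather than after deployment.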

Explainability & accountability

The problem: When AI systems make consequential or high-risk decisions, such as rejecting loan applications, filtering job candidates, or allocating resources, the inability to explain how those decisions were reached creates both operational and regulatory risk.

Many AI systems, particularly complex neural networks, function as “black boxes”. Suppliers are often unable to articulate why the model produced a specific output, which factors carried the most weight, or whether the reasoning aligns with the buyer’s policies or legal obligations. This becomes acutely problematic when decisions affect individuals with a right to explanation, or when regulators demand accountability.

Bias and fairness issues compound this. Buyers often assume AI systems are “neutral”, only to discover post-deployment that outputs demonstrate patterns of unfair treatment. Without explainability, it becomes nearly impossible to diagnose the source of bias or demonstrate that adequate mitigation measures were taken.

Why it leads to litigation: Lack of explainability can constitute a breach of contract when buyers reasonably expected transparency. It also creates regulatory exposure under emerging AI governance frameworks, including the EU AI Act and principles outlined in the UK’s 2023 AI white paper. Discrimination claims arising from biased outputs carry reputational damage, regulatory scrutiny and direct liability, all made worse when the system’s logic cannot be examined or defended.

Prevention:

  • Require suppliers to demonstrate explainability capabilities appropriate to the use case: what level of interpretability can they provide? A simple example is sketched after this list.
  • Ask for documentation of fairness testing methods and bias mitigation strategies already implemented.
  • Include contractual audit rights that allow independent review of model logic and decision-making processes.
  • Establish clear accountability protocols: who is responsible when the AI produces a harmful or incorrect output?
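
To illustrate the kind of interpretability a buyer might reasonably ask for, below is a minimal sketch that scores a decision with a simple linear model and records how much each factor contributed, so that individual outcomes can be explained and audited later. The feature weights, threshold, and applicant data are hypothetical; more complex models would need dedicated interpretability tooling, but the record-keeping principle is the same.

```python
# Minimal sketch: logging per-decision factor contributions for a simple linear
# scoring model, so each outcome can later be explained and audited. The weights,
# threshold, and applicant data are hypothetical placeholders.

FEATURE_WEIGHTS = {          # assumed weights of a simple credit-scoring model
    "income": 0.5,
    "existing_debt": -0.7,
    "years_at_address": 0.2,
}
APPROVAL_THRESHOLD = 1.0     # assumed decision threshold


def score_with_explanation(applicant: dict[str, float]) -> dict:
    """Score an applicant and record how much each factor contributed."""
    contributions = {
        feature: weight * applicant[feature]
        for feature, weight in FEATURE_WEIGHTS.items()
    }
    total = sum(contributions.values())
    return {
        "decision": "approve" if total >= APPROVAL_THRESHOLD else "decline",
        "score": round(total, 3),
        "contributions": {k: round(v, 3) for k, v in contributions.items()},
    }


# Hypothetical applicant, with features already normalised to comparable scales.
record = score_with_explanation({"income": 3.0, "existing_debt": 1.5, "years_at_address": 2.0})
print(record)
# {'decision': 'decline', 'score': 0.85,
#  'contributions': {'income': 1.5, 'existing_debt': -1.05, 'years_at_address': 0.4}}
```

The point is not the model itself but the audit trail: every consequential decision leaves behind a record that can be examined if it is later challenged.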

Sustainability & ESG obligations

The problem: AI systems, particularly large-scale models, consume significant energy and computational resources, a reality that increasingly conflicts with corporate sustainability commitments and regulatory reporting requirements.

Buyers with ESG reporting obligations may face scrutiny from regulators, investors, or stakeholders for deploying energy-intensive AI without adequate due diligence. Yet many suppliers cannot, or will not, quantify the carbon footprint or energy consumption of their systems. Training runs, inference at scale, and ongoing model updates all carry environmental costs that remain invisible during procurement.

The disappointment arises when stakeholders allege greenwashing or inadequate oversight in technology selection. And what seemed like a straightforward software purchase becomes a compliance gap in sustainability reporting.

Why it leads to litigation: While direct litigation over AI sustainability is still emerging, the risk already manifests through regulatory enforcement of ESG disclosure requirements, investor challenges, and reputational damage that triggers commercial disputes. As frameworks tighten, particularly in the EU, failure to account for AI’s environmental impact may constitute a material omission in corporate reporting.

Prevention:

  • Require suppliers to quantify the energy and carbon impact of their AI systems, as sketched after this list.
  • Include AI compute disclosures in ESG reporting frameworks.
  • Favour suppliers who use right-sized models appropriate to the task, rather than over-engineered solutions.
  • Establish contractual obligations for ongoing reporting of energy consumption as the system scales.
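
As a rough illustration of what that quantification might involve, here is a minimal sketch that estimates the energy and carbon footprint of an AI workload from figures a supplier could reasonably disclose. Every number in the example is a hypothetical placeholder, not a benchmark for any real system.

```python
# Minimal sketch: estimating the energy and carbon footprint of an AI workload
# from figures a supplier could disclose. All numbers below are hypothetical
# placeholders, not benchmarks for any real system.

def estimate_footprint(gpu_count: int,
                       gpu_power_kw: float,
                       hours: float,
                       utilisation: float,
                       grid_intensity_kg_per_kwh: float) -> dict:
    """Rough energy (kWh) and carbon (kg CO2e) estimate for a compute workload."""
    energy_kwh = gpu_count * gpu_power_kw * hours * utilisation
    carbon_kg = energy_kwh * grid_intensity_kg_per_kwh
    return {"energy_kwh": round(energy_kwh, 1), "carbon_kg_co2e": round(carbon_kg, 1)}


# Hypothetical monthly inference workload reported by a supplier.
print(estimate_footprint(
    gpu_count=8,
    gpu_power_kw=0.4,                 # assumed average draw per accelerator
    hours=720,                        # one month of continuous serving
    utilisation=0.6,                  # assumed average utilisation
    grid_intensity_kg_per_kwh=0.2,    # assumed regional grid carbon intensity
))
# {'energy_kwh': 1382.4, 'carbon_kg_co2e': 276.5}
```

Even approximate figures like these give ESG reporting something concrete to work with, and make gaps in supplier disclosure immediately visible.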

Building litigation resilience into your AI supply chain

The common thread across these scenarios is clear: misalignment stems from questions not asked early enough. Procurement teams that treat AI acquisition like any standard technology purchase, relying on vendor assurances without verification, invite the disputes that follow. Asking sharper questions and pushing for specifics is just the beginning; as AI becomes business-critical, a fundamental shift is required in how organisations approach suppliers.

Building litigation resilience means embedding new practices across the procurement lifecycle:

Treat AI supply chains as dynamic, not transactional. Traditional software purchases may involve defined deliverables and stable products, but AI systems evolve continuously. Models are retrained, datasets shift and performance can drift. Procurement cannot remain a one-time event followed by passive monitoring. Organisations need ongoing reassessment of models, data sources, and compliance posture – treating the supplier relationship as requiring continuous due diligence rather than periodic contract renewal.

Equip teams with AI literacy. Legal and procurement professionals cannot rely solely on technical teams to flag risk, nor can they defer entirely to vendor expertise. AI literacy, the ability to recognise when a model is a black box or to understand what “accuracy” means in context, is now a core competency, and organisations that invest in upskilling these teams are better positioned to spot misalignment early. The more knowledge and accountability organisations can build in-house, the better.

Build verification into the process. Contractual protections must be paired with verification mechanisms. Tie payments or contract renewals to independently verified performance metrics, and require suppliers to demonstrate, not just claim, that their systems meet the standards they’ve promised. And critically: be prepared to walk away from suppliers who cannot provide clarity or evidence to support their claims. Don’t get caught up in the FOMO.

Preventing disputes through strategic procurement

As AI becomes central to business operations, most AI-related litigation won’t stem from deliberate wrongdoing, but from assumptions made when buyers fail to question, verify, or define expectations early, inviting disputes further down the line.

The dispute scenarios identified in this article are all preventable with adequate due diligence and by partnering with organisations that fully control and understand the entire AI stack they provide.

The organisations that will succeed in this next phase of AI integration won’t be those that approach AI procurement as they would traditional software, but those that recognise the fundamental differences AI systems bring, ask better questions, and demand transparency from suppliers at every stage.

The solution is cultural as much as contractual. Litigation born from disappointment is preventable; it just requires intentional due diligence and strategic alignment from the outset.
