Cyber security and supply chain management have always been intertwined, but never more so than in today’s AI-driven world. With businesses racing to adopt machine learning models and advanced analytics, the stakes for robust, end-to-end protection keep climbing. Data management and security requirements are also evolving: the EU’s Corporate Sustainability Due Diligence Directive emphasizes risk management and transparency, while the EU and UK GDPRs have long mandated a risk-based approach to data protection.
Beneath these regulations lies a pressing reality: the real commercial value of AI arrives at the point of inference. Models are not only being trained; they’re being deployed for real-time decision-making, making infrastructure a central issue. When AI infrastructure is purpose-built for inference at scale – an area where providers like Stelia offer robust solutions – organizations can shift from theoretical compliance to practical security and performance.
The Evolving Regulatory Landscape
For years, data security has been a regulatory focus, especially within the EU. The Corporate Sustainability Due Diligence Directive indirectly compels companies handling large volumes of data to consider ethical and security implications. Meanwhile, GDPR frameworks require a thorough, organization-level analysis of risks, forcing businesses to adopt robust cyber practices.
Now, the UK Government has introduced the Voluntary Code of Practice for the Cyber-Security of AI, published in January 2025. This Code expands on existing requirements and underscores the importance of viewing AI not just as an algorithmic challenge but as a system-wide effort. Compliance with AI-specific guidelines means rethinking everything from how data is collected and processed to how third-party integrations and cloud services are managed. In this sense, AI models might be likened to the engines of a vehicle – powerful but incomplete without the supporting infrastructure. The Code clarifies that security must permeate every layer of that “vehicle,” from design to deployment.
Diving into the Voluntary Code of Practice
The Code places a heavy emphasis on supply chain security, making it clear that any weak link – whether in software components, third-party services, or data providers – can compromise an entire AI system. By calling for transparent documentation, continuous model evaluation, and clear communication around updates, the government aims to create an environment where best practices become standard procedure.
Beyond these broad requirements, the Code introduces a level of detail that AI stakeholders may find welcome. Previous regulations often left businesses interpreting security obligations on their own; this new Code, while voluntary, provides more tangible guidelines.
Yet, it’s worth remembering that guidance alone can’t guarantee security. Organizational leaders must ensure they have the right tools to execute on these measures. As the analogy goes, AI models are just engines – robust infrastructure builds the vehicle that allows them to run securely and efficiently.
Principle 6: Secure Your Infrastructure
AI infrastructure spans everything from APIs and data pipelines to cloud computing services. Recognizing the complexity of these components, the Code highlights several critical actions:
- Tighten Access Controls: Regularly assess and update permissions for APIs, models, data, and processing pipelines. Only those who genuinely need access should have it.
- Protect APIs from Exploitation: Put in place controls to guard against reverse-engineering or data-poisoning attacks. Strong authentication and rate limiting can reduce exposure.
- Isolate Development Environments: Keep training and tuning processes separate from production environments. This isolation is especially vital when working with proprietary or sensitive data.
- Embrace Transparency in Vulnerabilities: Encourage reporting of vulnerabilities by both internal teams and external researchers. Swift disclosure often prevents escalation.
- Plan for Incidents and Recovery: Develop an AI-specific incident management plan. Traditional business continuity frameworks are a start, but AI systems have unique failure modes.
- Consider Cloud Compliance: When using third-party cloud platforms, ensure contractual agreements align with these new security standards. This echoes past lessons from GDPR’s impact on service providers.
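To make the first two items concrete, here is a minimal sketch of API-side controls of the kind the Code points to: key-based authentication in front of an inference endpoint, combined with per-client rate limiting via a token bucket. The `TokenBucket` class, the `API_KEYS` store, and the specific limits are illustrative assumptions, not anything prescribed by the Code; a production system would use a real credential store and gateway.

```python
import time

class TokenBucket:
    """Per-client rate limiter: refills `rate` tokens per second, up to `capacity`."""
    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Top up tokens for the time elapsed since the last request.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

API_KEYS = {"client-a"}                 # stand-in for a real credential store
buckets: dict[str, TokenBucket] = {}

def handle_request(api_key: str) -> str:
    """Gate in front of a hypothetical model-inference endpoint."""
    if api_key not in API_KEYS:                     # strong authentication
        return "401 Unauthorized"
    bucket = buckets.setdefault(api_key, TokenBucket(rate=5, capacity=10))
    if not bucket.allow():                          # rate limiting
        return "429 Too Many Requests"
    return "200 OK"
```

The same pattern extends naturally to per-key quotas or tiered limits, which also helps against bulk model-extraction attempts that rely on issuing very large numbers of queries.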
Underpinning these recommendations is a central theme: as more AI models transition from experimentation to live decision-making, optimized AI data mobility helps eliminate latency bottlenecks so that security measures need not degrade performance. Infrastructure solutions that can efficiently handle vast data flows – such as the architectures championed by Stelia – are particularly relevant here. By maintaining high levels of throughput and reliability, organizations can reduce the chance of disruptions that might leave systems exposed.
Principle 7: Secure Your Supply Chain
AI systems typically rely on software components, third-party models, and external data sources. These elements form a supply chain, any part of which could be a point of vulnerability.
- Adopt Secure Supply Chain Practices: The Code references the U.S. Software Bill of Materials (SBOM) framework as a resource. Understanding every component, from open-source libraries to proprietary modules, enables better risk management.
- Vet and Justify External Components: If incorporating elements lacking robust security documentation, be prepared to document the associated risks and justify their inclusion.
- Mitigate Risks with Controls: High-risk components should come with transparent mitigation strategies shared among stakeholders. Openness fosters trust and accountability.
- Re-Evaluate Models Regularly: Continual monitoring and testing can catch vulnerabilities that surface only after deployment.
- Communicate Model Updates: Before rolling out a significant update, clearly inform end-users. Transparency remains key in building trust and enabling risk-aware decision-making.
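The inventory-and-vetting steps above can be sketched in a few lines: record each component in a simplified bill of materials, then surface any entry that lacks security documentation or a recorded mitigation. The `Component` fields and the example entries are illustrative assumptions; a real SBOM would follow an established format such as SPDX or CycloneDX.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """One entry in a simplified software bill of materials (SBOM)."""
    name: str
    version: str
    supplier: str
    vetted: bool                              # security documentation reviewed?
    mitigations: list[str] = field(default_factory=list)  # controls if high-risk

def review_supply_chain(sbom: list[Component]) -> list[str]:
    """Return findings that must be documented and justified per the Code."""
    findings = []
    for c in sbom:
        if not c.vetted and not c.mitigations:
            findings.append(f"{c.name}@{c.version}: unvetted, no mitigations recorded")
        elif not c.vetted:
            findings.append(
                f"{c.name}@{c.version}: unvetted, mitigated by {', '.join(c.mitigations)}"
            )
    return findings

# Hypothetical inventory for an AI service.
sbom = [
    Component("openssl", "3.0.13", "OpenSSL Project", vetted=True),
    Component("legacy-tokenizer", "0.4", "unknown", vetted=False,
              mitigations=["sandboxed"]),
    Component("weights-mirror", "1.1", "unknown", vetted=False),
]

for finding in review_supply_chain(sbom):
    print(finding)
```

Running a check like this in CI turns the Code’s “vet and justify” guidance into a routine gate rather than a one-off audit, and the findings list doubles as the documentation trail the Code asks for.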
These measures remind us that the AI economy will be defined by execution, not experimentation. It’s one thing to conceptualize sophisticated models; it’s another to put them to work in real-world scenarios that demand availability, speed, and ironclad security. Supply chain vulnerabilities directly threaten that ability to execute. By embedding security practices at every stage – from model development to integration with external services – organizations safeguard the path to commercial value.
The Importance of Supply Chain Security in AI
An AI system doesn’t exist in a vacuum. It relies on an ecosystem of external libraries, data streams, hardware, and network providers. Even a small weakness in one link – like an unpatched open-source component or an unverified data source – can compromise the confidentiality, integrity, or availability of the entire system.
When these vulnerabilities surface at scale, they can disrupt operations, erode consumer trust, and expose sensitive data. As a result, many organizations are reevaluating how they source, vet, and maintain AI-related components, marking a shift from ad-hoc approaches to a more structured, proactive framework guided by the new Code.
Next Steps
Looking ahead, regulations for AI systems will likely tighten further. The UK’s Voluntary Code of Practice is a strong indicator of how policymakers plan to shape secure AI environments – by encouraging transparency, continuous monitoring, and responsible development.
For organizations aiming to stay ahead, compliance efforts should run in parallel with building out an ecosystem geared for real-world performance. Inference is the center of AI’s commercial value, which means that to unlock genuine ROI, businesses must ensure security protocols are embedded into their operational workflows – at scale.
This is where an infrastructure solution designed for safe, high-speed execution proves its worth. Providers like Stelia reinforce the principles outlined in the Code by offering a secure, risk-aware environment that can adapt to continuous updates and evolving AI models. From AI inference to data mobility and beyond, Stelia’s emphasis on security-by-design complements government guidelines, enabling organizations to move from theoretical compliance to tangible results.
Ultimately, the AI supply chain must be both innovative and resilient. By embracing best practices and leveraging proven infrastructure partners, companies can chart a path forward that balances compliance, transparency, and the real-world execution needed for AI success.