
AI governance is still broken, and the clock is ticking

AI is global, regulation isn’t. Local rules are no match for global systems – and that disconnect risks turning governance into chaos, not control.

Let’s not sugarcoat it: the state of AI governance is a mess. Worse yet, it’s showing no signs of getting better anytime soon.

Just this month (July 2025), Washington saw a dramatic clash over the future of AI regulation. The Trump-aligned proposal to block states from enforcing their own AI laws for ten years, a move folded into the “Big, Beautiful Bill”, was flatly rejected in the Senate. The message from lawmakers: states should be free to protect their residents (good), even at the cost of a messy web of conflicting rules (not good).

Meanwhile, over in Brussels, the European Union is forging ahead with its own AI Act, brushing aside rumours of delays to enactment and signalling its intent to lead on AI governance.


This transatlantic divergence should worry anyone paying attention. AI is a borderless, general-purpose technology, yet we’re attempting to regulate it with tools and mindsets built for a pre-digital world, fractured by political agendas and local priorities.

Lawmakers writing these AI regulations often lack the technical understanding to grasp the technology they’re governing, and seemingly ignore the legal professionals and academics with years of experience in the sector. This raises the question: are these laws designed to protect people, or simply to create frameworks governments can navigate? Without expert insight, regulation risks becoming symbolic rather than practical, and ultimately ineffective. This isn’t protection, this is chaos.

A patchwork that is already tearing

The current regulatory approach is a tangled mess: a patchwork of state-level and national laws, each with its own definitions, demands and deadlines. The problem isn’t just complexity, it’s incoherence.

AI doesn’t neatly respect industry boundaries, let alone state or national ones. A single model can be deployed globally, retooled across sectors and evolve faster than any regulator can track. Yet instead of crafting a unified framework, lawmakers are scrambling to patch together fragmented responses while the ground shifts beneath them.

In the US, proponents of the failed federal AI moratorium (including some prominent tech figures) warned that a labyrinth of 50 state regulations could cripple innovation and hand China a competitive edge. OpenAI’s own Sam Altman bluntly told senators that managing compliance across so many jurisdictions would be practically impossible.

But their warnings weren’t enough.

The AI freeze was stripped from the bill, leaving America exactly where it started: fragmented, uncertain and still without a national game plan.

The big AI bet

Across the pond, the EU is betting big on regulation. The AI Act, modelled on the bloc’s approach to the GDPR, aims to impose a sweeping risk-based framework on AI systems used within its borders. On paper, it sounds like progress: transparency for low-risk systems, tight controls for high-stakes use cases like hiring or policing, and new rules for the looming threat of general-purpose AI (GPAI).

But the tech is evolving faster than the rules. A single AI model might help you write an email today and guide a cancer diagnosis tomorrow. Trying to classify these systems by risk level is like nailing jelly to an ever-moving wall. And the Act’s GPAI provisions? Vague, reactive and already chasing a moving target, with consultations earlier this year focussing on what GPAI even is (and we’ve not even got to artificial general intelligence yet…).

To their credit, EU lawmakers have acknowledged the problem and tried to adjust late in the game; see the brand-new GPAI Code of Practice. But even as they push ahead, the uncomfortable truth remains: Europe’s AI rules may be outdated before the ink is dry.

One world, many rules

At the heart of the problem is a basic contradiction: AI is global, regulation isn’t.

A generative model might be trained in California, fine-tuned in London, hosted in Singapore, and used by someone in Nairobi. So whose laws apply? Whose values shape the outcomes? Who is accountable when things go wrong? And how does any of this ever become ‘sovereign’?

Right now there are no clear answers. The US and EU are each pursuing their own philosophies – decentralised experimentation versus centralised oversight. Other countries, like the UK, are forging different paths altogether. And while there are talks at the G7 and the UN, global coordination remains little more than a diplomatic soundbite.

What we’re left with is a high-stakes experiment playing out in real time. The world is testing governance models in parallel, without consensus, without interoperability and without a safety net.

Time to wake up

Here’s the hard truth: local rules are no match for global systems.

AI is rapidly becoming foundational to how we live, work, make decisions and, frankly, exist. If we don’t find a way to align on basic principles of safety and accountability, we risk letting this technology outpace our ability to control it.

Yes, fragmented governance offers room for experimentation. Yes, different regions have different values. But let’s not pretend that a patchwork of uncoordinated rules is a long-term solution. At best, it’s a stopgap. At worst, it’s a recipe for regulatory chaos and ethical disaster.

The world doesn’t need one-size-fits-all regulation, but it does need a shared vision. Without it, we’re not governing AI, we’re just reacting to it. And sooner or later, the consequences of that passivity will catch up with us.

The question is no longer whether we need global coordination; it’s whether we’ll get there before it’s too late.
