
How China’s AI labs are turning chip restrictions into competitive advantage

DeepSeek’s breakthrough reasoning models and Alibaba’s efficient agents are responding to chip bans by showing how necessity breeds innovation.

Echoes of steel, solar, and now AI. China’s response to chip bans looks familiar: focus on cost, scale, and distribution to reset the market.

China’s access to top-tier Nvidia chips has narrowed again. The Cyberspace Administration of China has told major platforms to stop testing and ordering Nvidia’s China-specific parts. That pressure has pushed labs to find gains elsewhere: in training methods, in agents that plan and act, and in how quickly code becomes a service people can use.

First DeepSeek, now Alibaba

Early in September 2025, DeepSeek published a Nature paper showing a model that learns to reason through reinforcement learning rather than human step-by-step labels. It developed reflective habits, checked its own work, and beat average human scores on selected maths and coding benchmarks.

In mid-September, Alibaba's Tongyi Lab released an open agent for deep research on the web, built on a 30B-parameter model that activates roughly 3B parameters at inference. It posts strong numbers on research benchmarks such as Humanity's Last Exam and FRAMES, and it already runs inside Amap for travel planning and in legal research tools that cite cases and statutes.
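The "activates roughly 3B of 30B parameters" pattern is characteristic of sparse mixture-of-experts models, where a learned router sends each token to only a few expert sub-networks. A toy sketch of that routing idea (hypothetical sizes and random weights for illustration, not Tongyi's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

n_experts, d_model, top_k = 8, 16, 2  # toy sizes, not the real model's

# Each "expert" is a small feed-forward weight matrix.
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts))  # routing weights

def moe_forward(x):
    """Route a token vector to its top-k experts; only those experts run."""
    logits = x @ router
    top = np.argsort(logits)[-top_k:]  # indices of the top-k experts
    gates = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax gate
    # Only top_k of n_experts matrices are multiplied, so roughly
    # top_k / n_experts of the parameters are active per token.
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

y = moe_forward(rng.standard_normal(d_model))
print(y.shape)  # (16,)
```

With 2 of 8 experts active per token, compute per token scales with the active fraction rather than the full parameter count, which is how a 30B model can run with roughly 3B-parameter inference cost.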

More from less

The message seems to be: when you can't buy every chip you want, you squeeze more from the ones you have. When China lacked dominance in steel, in solar panels, in electric cars, it turned to cost, efficiency, and global distribution to unsettle incumbents. AI looks much the same. You train in simulated web environments, curate harder synthetic tasks as the model improves, and you ship. You also loosen the licence: Tongyi's stack is Apache 2.0, which invites start-ups, systems integrators, and public bodies to try it, fork it, and deploy it.

Jevons Paradox

There is a cost to this kind of efficiency. Jevons Paradox applies. Make capable models cheaper to run, and usage tends to rise faster than savings. More queries. More integrations. More total compute. China’s domestic silicon makers have an obvious opening here. If open agents spread across offices and public services, demand for local processors will surge, even if each task uses fewer resources.
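The Jevons dynamic is simple arithmetic: if demand grows faster than per-task efficiency, total consumption rises. A worked example with made-up numbers (the 5x and 8x figures are purely illustrative):

```python
# Illustrative Jevons Paradox arithmetic (hypothetical numbers).
cost_per_query_before = 1.0                       # arbitrary compute units
cost_per_query_after = cost_per_query_before / 5  # 5x efficiency gain

queries_before = 1_000_000
queries_after = queries_before * 8                # demand outpaces savings

total_before = cost_per_query_before * queries_before
total_after = cost_per_query_after * queries_after

# Each query is 5x cheaper, yet total compute still grows 1.6x.
print(total_after / total_before)  # 1.6
```

Whenever the usage multiplier exceeds the efficiency multiplier, aggregate demand for silicon goes up, which is the opening the article describes for domestic chip makers.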

Everyday impact

People feel the upside first. Trip plans built in minutes. Case law pulled with citations, then summarised cleanly. Students who can see worked steps, not just final answers. Teams that stitch these models into existing tools and cut days of manual digging into hours.

Risks

There are risks that need plain handling. Smarter reasoning can be bent to smarter misuse. Open agents can be jailbroken. A good report writer can also become a good propagandist. DeepSeek’s own paper calls out these concerns and argues for stronger controls. This is where coordination matters: not only how we build models, but how we distribute them, monitor them, and switch off unsafe behaviour.

That’s a reminder that the AI “chip wars” are about more than silicon. They raise questions about sovereignty in an algorithmic age. Beijing’s chip bans reflect a desire to tighten national control, just as Europe’s €200 billion “AI gigafactory” plan bets on territorial infrastructure as the foundation of sovereignty. But models don’t behave like steel or solar panels. They are mathematical objects, stateless by nature, copied and redeployed anywhere at no cost. Control over where they are trained offers little leverage over where and how they are used.

The Western response

So where does this leave Western labs and platforms? With a bright path, if they choose it. The United States and Europe still hold deep advantages in academia, international scale, cross-border ecosystems, and public-private partnerships. A positive response looks practical rather than defensive.

First, lean into applied research. Build agents that handle evidence, citations, and tools reliably. Use verifiers for maths, code, and data work. Publish clear error rates.

Second, ship safe defaults. Provide strong jailbreak resistance, red-teaming at release, and audit trails that fit with UK and EU regulation. Make opt-outs simple.

Third, meet users where they are. For legal, health, finance, and education, work through existing vendors who know the rules of the road. Bundle pilots with training and support. Quote outcomes, not just throughput.

Fourth, stay open in the right places. Release evaluation sets, reference agents, and slim models that help the ecosystem learn, while keeping higher-risk pieces behind managed services.

Finally, talk like engineers, not marketers. Name the chips, the tokens, the failure modes. Give dates. Show your fixes.

A split route to progress

The story of 2025 is not a single winner. In China, necessity has pressed labs towards inventive training and broad distribution, echoing earlier patterns in steel, solar and EVs. In Europe, policymakers risk repeating old industrial reflexes, equating territorial control with meaningful oversight, even as the systems themselves grow more fluid, more distributed, and harder to pin to a place. In the West more broadly, the optimal route runs through careful engineering, sector partnerships, and scale that can be trusted. Both sides can raise the bar.

For those building the connective tissue between these worlds, the task is clear: architect systems that let intelligence flow safely and predictably across borders, industries and agents. That is how research excellence becomes more than a paper, and how deployment avoids becoming a free-for-all. Done well, it moves us closer to a future where human and machine cognition are coordinated as reliably and usefully as electricity through a grid.

The prize goes to those who turn lab work into outcomes that feel solid in the hands of real users, while laying the foundations for responsible AI coordination at civilisation scale.