Last week, Anthropic launched Claude Cowork, a version of the AI-coding tool Claude Code designed for non-developers. This launch marks a fundamental shift as major LLM capabilities move from conversational interfaces to application-level tools that directly affect business outcomes.
Dave Joshua, Stelia’s Chief Growth Officer, recently spoke with Olivier Legris, Claude Code lead and co-founder of Alter, to unpack what this means in practice and what the future of human-AI collaboration looks like in corporate environments.
First, they assessed what distinguishes this Anthropic release from the steady stream of incremental model announcements over the past year.
Dave Joshua: What’s the difference now compared to how people were using Claude even a few months ago? What does this really mean for people in day-to-day work?
Olivier Legris: Yes, so I think there are two big things that happened in the past few months. The first is the quality of the model with Opus 4.5, and the second is the application layer on top of those models. It started with Claude Code and now with Cowork, where anyone can do real work and manage agents that connect to third-party tools in the right way. So, before, I would say it was mostly a chat interface, and now it’s jobs getting done.
This distinction Olivier draws, from chat interface to application-level impact, signals a key shift in how enterprises should evaluate AI investments. The question is no longer whether models can reason through problems, but whether agentic capabilities can integrate into operational workflows to deliver measurable outcomes. This demands a different approach both from AI systems and from the teams using them, as the ability to interact with enterprise tooling, produce work artefacts that meet business standards, and maintain state across tasks becomes critical.
Dave Joshua: So, when teams are using this properly, what kind of work do you think this is going to augment or replace first in your mind?
Olivier Legris: The big innovation here, which comes, let’s be honest, from the work that was done by Manus, is equipping AI with Python libraries running in a virtual machine so it can produce white-collar artefacts. We’re talking about Excel files, PowerPoints, every type of file required in the corporate environment. This is the new direction.
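To make that concrete, here is a minimal sketch of the kind of artefact generation Olivier describes: Python code running in a sandbox that writes out an Excel workbook and a PowerPoint deck. The libraries (openpyxl, python-pptx) and the figures are illustrative assumptions for this sketch; the conversation does not say which tooling Cowork actually ships inside its virtual machine.

```python
# Illustrative sketch only: producing "white-collar artefacts" from Python.
# openpyxl and python-pptx are common open-source libraries used here for
# illustration; they are not confirmed as what Cowork runs in its VM.
from openpyxl import Workbook
from pptx import Presentation

# Build a small spreadsheet of hypothetical quarterly figures.
wb = Workbook()
ws = wb.active
ws.title = "Q3 Summary"
ws.append(["Region", "Revenue", "Growth %"])
ws.append(["EMEA", 1_200_000, 8.4])
ws.append(["Americas", 950_000, 5.1])
wb.save("q3_summary.xlsx")

# Build a one-slide deck summarising the same hypothetical figures.
prs = Presentation()
slide = prs.slides.add_slide(prs.slide_layouts[1])  # title + content layout
slide.shapes.title.text = "Q3 Summary"
slide.placeholders[1].text = "EMEA revenue 1.2M (+8.4%)\nAmericas revenue 0.95M (+5.1%)"
prs.save("q3_summary.pptx")
```

The point of the example is the output format, not the code itself: the agent’s deliverable is a file a colleague can open in Excel or PowerPoint, rather than text in a chat window.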
Dave Joshua: Yeah, interesting. If we’re thinking about this for all leaders, do you think this has an impact on junior staff or how you engage as a team?
Olivier Legris: I think it has an impact on every role, from juniors to the boardroom, because suddenly it means that AI can actually deliver the final work of any task. It’s more a question of doing things differently. The big innovation that Claude Code or Claude Cowork brings is that instead of actually doing the work, you become an architect, you become an orchestrator. That’s the big change in how you work, and the people who can adapt to this new way of working will benefit from the tools, while the people still doing things the old-school way will struggle to get productivity gains.
Yet Olivier’s observation about shifting from executor to architect and orchestrator points to an immediate priority. As these systems gain direct access to enterprise environments filled with proprietary intelligence, data repositories, and business-critical documents, governance infrastructure becomes just as important as the capabilities themselves. The experimental nature of current implementations demands clear guardrails from the outset. Olivier considers this.
Dave Joshua: Do you think there needs to be any guardrails put in place for this? Do we need to slow this down at all, or are we good?
Olivier Legris: Yes, we need guardrails. If you look into the details, the terms and conditions of Cowork, it’s written in black and white: don’t use Cowork for important work. This is for a very simple reason. The way Cowork works, it will copy your files into a virtual machine. So you’re literally giving it access to your folder, if not your whole computer, all of your files, and Cowork could even wipe your entire machine. It’s very experimental at the moment, so yes, guardrails need to be in place. But what is interesting is that it gives an indication of where the future of work is going.
Olivier’s candour about Cowork’s current limitations highlights how experimental these capabilities still are, and what that means for enterprise AI adoption today. For organisations, it underlines the importance of proactive AI integration, where capabilities are adopted alongside the foundations that enable secure, governed, and collaborative deployment.
Dave Joshua: OK, fantastic. Last question for you, what do you think we should be watching for over the next few months as these kinds of tools settle in, and where is this going next?
Olivier Legris: I think collaboration. Right now, the way these tools work is very much single-user. You do your work, but there is no collaboration aspect. You can collaborate with agents and with scripts, but I think this will become very powerful once you have humans collaborating together in a shared workspace, a shared Cowork, where you can interact with AI together and not each in your own little corner.
The shift from individual AI assistants to collaborative AI workspaces that Olivier anticipates represents the next architectural challenge for enterprises. It’s not enough to give individuals access to powerful tools. Instead, the value will come from systems that enable teams to work alongside AI in shared environments, with appropriate access controls, audit trails, and governance frameworks that scale with adoption.
For organisations moving beyond pilots, then, the priority must be building the infrastructure that can support this evolution without compromising security, compliance, or operational resilience.