
How studios are integrating AI into production pipelines

In film and TV, the question is no longer whether to use AI – it’s whether the systems surrounding it are built with the same rigour as the tools themselves.

The debate about whether AI belongs in film and television production is beginning to shift. While questions around its role in creative work remain, the conversation is increasingly moving from whether AI should be used to how it can be integrated responsibly into production workflows.

That shift was visible throughout AI on the Lot’s first London event last week. Hosted at Framestore, one of the world’s leading production companies, and moving across some of London’s most advanced creative environments over the following days, the event brought together filmmakers, studio executives, technologists, and VFX practitioners – many of whom had been integrating AI into production long before it became a headline topic. What emerged were conversations around design, governance, and where human judgement sits inside systems that are increasingly capable of shaping creative outcomes directly.

Where AI sits in production

Framestore’s Suzanne Jandu and her team spoke about AI as an increasingly integrated part of the production process, though the picture is more differentiated than a single narrative of adoption suggests. Directed, task-specific applications such as rigging optimisation, segmentation, and denoising are increasingly embedded inside live pipelines. Generative systems, however, are being deployed more selectively, and typically require additional governance and client approval before they can be used across productions.

Directed systems produce deterministic outputs, executing defined instructions consistently, operating where creative decisions have already been made and must be honoured. Generative systems introduce creative latitude: options, variations, and material that a human then judges and shapes into something intentional. Both have a role in modern production, but they are not at the same stage of deployment, and treating them as equivalent obscures what studios are actually navigating. Understanding how to govern these systems with sufficient clarity, knowing where each operates and why, is the strategic challenge facing studios right now.

Where that line is drawn directly determines who can claim authorship. Place generative AI too far upstream, and it begins occupying decisions that carry the intent of the work: decisions that belong to the director, editor, or writer of a project. But define its role too narrowly and you forfeit the genuine value it offers.

Beyond the model

The industry’s attention to date has been largely captured by model advancements and expanding capabilities, understandably so. But capability without considered placement is difficult to govern and harder to scale. Effectively governing these implementations requires deliberate design at every layer. At Framestore, AI tools must pass through a formal approval process before being deployed across productions. This includes checks on dataset provenance, validation of licensing and commercial usage rights, and review of the full dependency tree of models and training data. Once a tool is approved, studios build systems to track how models are used in practice: locking specific models to particular productions, monitoring access, and implementing security safeguards, including malware protection and environment segregation between shows.

Many workflows today are also hybrid. Generative models are frequently combined with traditional CG inputs – depth maps, canny lines, AOVs, and layout data – to provide greater control over outputs. And AI outputs rarely represent finished work; they are more often used for ideation and exploration, with artists curating and refining results through existing craft processes.

This is what considered AI adoption actually looks like in practice: disciplined integration that knows precisely where AI operates, under what conditions, and with what oversight. The more important questions for most organisations are not about access to capability, but about whether the systems surrounding that capability are built with equivalent rigour.

When AI shapes the final work

And today, the stakes of these decisions are no longer hypothetical. AI is entering production environments in ways that don’t just assist human processes but increasingly shape elements of the final work. Generative tools are already being used in areas such as visual development, crowd variation, and performance extension, where they can produce options that artists then refine and curate.

In immersive formats, these questions become even more pronounced. When Magnopus expanded The Wizard of Oz for the Sphere in Las Vegas, the team had to extend scenes beyond the original frame and generate additional performance where characters moved off-screen. AI tools helped reconstruct and extend material while still involving hundreds of artists and technicians in the final production. Experiences like this blur traditional boundaries between film, theatre and interactive environments.

There is also an open acknowledgement among practitioners of the challenges still being navigated. Generative outputs can be unpredictable, complicating the process of bidding for projects that rely on emerging tools. Studios frequently run traditional and AI workflows in parallel rather than replacing one with the other, and sustainability considerations around compute usage are increasingly part of the conversation, as are unresolved questions around copyright and dataset sourcing.

These operational realities highlight a broader shift: the real challenge is no longer access to AI capabilities, but designing clear production systems where those capabilities can be used responsibly without eroding human creative control.

The foundations that matter

For film and television organisations moving AI into production-scale environments, the true test of operational readiness lies not in the sophistication of the models being used, but in the clarity of the systems surrounding them – who owns what decisions, where human judgement is non-negotiable, and how creative intent is preserved as AI takes on a more active role in the work itself. The organisations that will lead are those that treat these as foundational design principles, not operational afterthoughts. That was the consistent thread running through AI on the Lot last week, from Framestore’s pipeline sessions to the wider conversations across London’s production environments. The capability has arrived; embedded governance and accountable systems now need to meet it.