
What to expect at CES 2026: AI trends reshaping media and entertainment

How generative video and synthetic talent are moving from fringe implementations to mainstream media production infrastructure in advance of CES 2026.

As CES 2026 approaches, Stelia is preparing to meet with customers and partners across the AI ecosystem. Ahead of the show, Ula Nairne, VP Media & Entertainment, has identified the trends she expects to see this year that will reshape how media is created, distributed and experienced.

After years of experimentation, 2026 is expected to mark a clear inflection point. AI is transitioning from fringe implementations to core production infrastructure.

In our CES series, we’ll unpack the developments underpinning this shift and the architectural foundations media organisations require to capitalise on these capabilities as the industry accelerates towards production-scale deployment.

Generative video: from experimental to essential

2025 saw the release of multiple large-scale AI models, including OpenAI’s Sora, as generative AI video capabilities gained significant industry attention. Sora reached over 1 million downloads in just five days, and its generative video capabilities lowered barriers to video creation that previously required extensive technical expertise.

However, wider adoption of generative video in mainstream media to date has remained limited. We expect 2026, beginning with CES establishing early momentum, to mark the transition of generative video from experimental technology to a production-ready tool.

This evolution is already in motion. Netflix pioneered this shift in July when El Eternauta, an Argentinian science fiction series, became the first Netflix original to integrate generative AI footage into final broadcast content. Scenes that once demanded extensive expertise and substantial budgets are now within reach as accessible production capabilities, with open access to tools like Sora and Runway enabling visually ambitious content on considerably smaller budgets.

Beyond mainstream media, generative AI video is becoming central to marketing campaigns across industries. Coca-Cola emerged as one of the earliest movers in this space, reimagining its iconic “Holidays are Coming” film for the second consecutive year with GenAI. Building on 2024’s groundbreaking achievement as the world’s first entirely GenAI-created film on broadcast media, their 2025 campaign demonstrated deeper maturity with advanced technical precision, cinematic storytelling, and production quality, all driven by AI capabilities.

This maturity is equally evident in Monks’ recent campaign for Google Fi Wireless, which integrated Google’s generative AI tools across the production pipeline to create photorealistic animal characters that visualise abstract network features. Executing at this level requires deep technical and creative expertise to achieve character consistency, photorealism, and lip-synching, and demonstrates how the most AI-forward media organisations are moving generative video beyond isolated effects toward more integrated production processes.

2026 is expected to see these early successes become industry-standard practice, as media organisations move generative video from isolated use cases to core production infrastructure. As this growth occurs, however, the technical and governance requirements scale in parallel. Achieving this at enterprise scale demands that media companies architect systems capable of maintaining quality control, tracking asset lineage, and preserving IP rights as AI-generated content embeds itself into traditional production workflows. As we explored in our recent analysis of AI security in media, organisations capitalising on these capabilities must ensure their systems are architected not just for current functionality, but for long-term competitive resilience as the landscape continues to evolve.

The rise of synthetic talent

Following generative AI video’s transition to production infrastructure, we expect 2026 to see the rise of synthetic celebrities – AI influencers, virtual actors, and digital avatars – accelerate significantly.

AI influencers have been present online since 2016, but captured widespread attention this year when computer-generated personalities like Mia Zelu “attended” Wimbledon, accumulating over 165,000 followers and generating substantial engagement and income for the teams behind them.

For the media industry, 2026 represents the next evolution: bridging the gap between AI influencers operating primarily on social media and their integration into mainstream programming and advertising, with virtual talent increasingly appearing alongside traditional performers.

Pioneering this transition, UK-based Particle6 has launched Tilly Norwood, an AI-generated actress designed to work alongside human talent in professional productions. Unlike social media influencers, Norwood represents a new domain for AI in entertainment: synthetic performers built for scripted content, commercials, and episodic television. The company has already secured partnerships with production companies and demonstrated that AI actors can deliver consistent performances across multiple projects, something traditional casting struggles to achieve at scale. With approximately 40 additional AI actors in development, Particle6 is accelerating the growth of this category within the entertainment workforce.

While the launch of Tilly Norwood sparked debate within the industry about job displacement, Particle6 founder Ben Jeffries has emphasised that virtual actors address specific production challenges rather than replacing human talent entirely. Jeffries has been clear that these AI actors are intended for roles that struggle to find casting due to budget constraints or scheduling conflicts, rather than competing for parts that would otherwise go to human performers. They offer flexibility for reshoots, consistency across global campaigns, and cost efficiency for roles that would otherwise remain unfilled.

Coupled with the advances in generative AI video capabilities within television production, we anticipate these two AI developments rising in tandem within the entertainment industry. However, as adoption of synthetic talent accelerates, questions of transparency and trust move to the forefront.

Rapid integration of AI capabilities into an industry built on human storytelling and creative expression demands robust governance frameworks embedded from the ground up. While regulation remains in flux, ethical and cross-disciplinary oversight becomes essential to enabling responsible AI deployment at this pace. When integrating synthetic talent into production workflows, organisations must combine legal expertise, ethical design principles, and deep architectural understanding, ensuring these capabilities serve as a complement to human creativity rather than an unregulated replacement. Media companies must quickly learn to treat governance as the foundation that enables sustainable innovation.

What comes next

As CES 2026 approaches, generative video and synthetic talent represent two of the most immediate and visible developments in how media will be created and distributed in the year ahead. These capabilities are moving beyond proof-of-concept into production infrastructure, fundamentally changing the economics and possibilities of content creation.

However, successful deployment of these technologies at scale demands more than adopting new tools. Media organisations must consider the architectural foundations that enable these capabilities to operate governably, securely, and sustainably. The infrastructure decisions being made today around quality control, IP protection, and governance frameworks will determine which organisations can capitalise on these innovations and which find themselves constrained by systems not built for this level of complexity.

In the coming weeks, we will continue this series by examining additional trends emerging from CES, including immersive entertainment experiences and agentic AI reshaping personalised content discovery. And equally, we will further explore the architectural imperatives that separate sustainable AI deployment from hype-driven, costly initiatives which won’t stand the test of time.
