
Reflections from Fraunhofer FOKUS MWS 2025: A Look into the Future of Media Tech

Nicolás Levy

VP of Technology

Summary

"Nicolas Levy, VP of Technology at Qualabs, had the chance to speak at the 12th Fraunhofer FOKUS Media Web Symposium in Berlin, an event packed with innovation, and community. In their session, he shared key learnings from prototyping CMCD v2, an evolving standard reshaping how we approach video streaming data and analytics. From implementation tips and open-source tooling to emerging use cases like multi-CDN monitoring, content steering, and live sync, this recap dives into the insights, people, and momentum that made the event unforgettable."

A few weeks ago, I had the pleasure of presenting at the 12th FOKUS Media Web Symposium. It was a fantastic event, and I was excited to share some of the work we've been doing at Qualabs around the latest developments in the Common Media Client Data (CMCD) specification. My talk, "Beyond the CDN: Use Cases, Tools, and Learnings from Prototyping with CMCD v2," focused on how this evolving standard is set to revolutionize video streaming analytics and operations.

For those who aren't familiar, CMCD v1 was a game-changer, creating a standardized way for media players to send playback data to CDNs. This allowed CDNs to make smarter decisions, improving Quality of Experience (QoE) for viewers. It was a simple, effective set of key-value pairs sent along with media segment requests.

But the story doesn't end there.

Introducing CMCD v2: A Universal Data Interface

CMCD v2 expands on this foundation, transforming a simple player-to-CDN data channel into a powerful, universal interface for all kinds of player data. It's no longer just about optimizing the CDN; it's about unlocking a world of new possibilities for analytics, multi-CDN switching, ad insertion, and so much more.

The two biggest innovations in CMCD v2 are the new Response and Event modes; a short sketch of all three modes follows the list below.

  • Request Mode: This is the classic CMCD v1 functionality where data is sent to the CDN with each segment request.
  • Response Mode: This is a major leap. After a player makes a request to a CDN, it can now send a copy of that CMCD data, enriched with CDN response metrics like time-to-first-byte (TTFB), to a different endpoint, such as an analytics server. This allows for direct, real-time performance measurement of your CDNs.
  • Event Mode: This mode completely decouples data reporting from CDN requests. Now, data can be sent based on specific player events, such as the user pressing play/pause, an error occurring, or an ad starting. This opens the door to detailed, real-time user behavior and QoE analytics.
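
To make the three modes a bit more concrete, here is a minimal TypeScript sketch of how a player might attach and report CMCD data in each case. Treat it as an illustration under my own simplifying assumptions: the key names (ttfb, e), the serialization, and the JSON POST to a collector endpoint are placeholders, not the normative CMCD v2 wire format or the API of any specific player.

```typescript
// Minimal sketch of the three CMCD reporting modes, assuming a generic
// HTML5-style player. Key names such as "ttfb" and "e", the simplified
// serialization, and the JSON POST transport are illustrative only and
// do not follow the normative CMCD v2 wire format or any player's API.

type CmcdKeys = Record<string, string | number | boolean>;

// Simplified serialization into comma-separated key=value pairs
// (the real spec has per-key formatting and header-splitting rules).
function serializeCmcd(keys: CmcdKeys): string {
  return Object.entries(keys)
    .map(([k, v]) => (typeof v === "string" ? `${k}=${JSON.stringify(v)}` : `${k}=${v}`))
    .join(",");
}

// Request mode: data travels with the media segment request itself (as in v1).
function buildSegmentUrl(segmentUrl: string, keys: CmcdKeys): string {
  const url = new URL(segmentUrl);
  url.searchParams.set("CMCD", serializeCmcd(keys));
  return url.toString();
}

// Response mode: after the CDN responds, send the same data enriched with
// response metrics (e.g. time-to-first-byte) to a separate collector endpoint.
async function reportResponse(collectorUrl: string, keys: CmcdKeys, ttfbMs: number): Promise<void> {
  await fetch(collectorUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ ...keys, ttfb: ttfbMs }),
  });
}

// Event mode: report independently of any CDN request, triggered by player
// events such as play, pause, errors, or ad starts.
async function reportEvent(collectorUrl: string, keys: CmcdKeys, eventType: string): Promise<void> {
  await fetch(collectorUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ ...keys, e: eventType }),
  });
}
```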

This concept of a foundational interface is key. Because CMCD v2's Event Mode decouples data reporting from media segment requests, its potential extends far beyond traditional HTTP streaming. Imagine a world where players across all platforms—from IPTV and broadcast to WebRTC and peer-to-peer—can all report playback events using the same standardized language. This turns CMCD into a truly universal playback-related interface, allowing for a unified analytics and operations backend regardless of how the content reaches the viewer. By embracing it, we can build more efficient, intelligent, and reliable streaming services.

New Use Cases Unlocked by CMCD v2

During the presentation, I walked through several practical use cases that these new features enable:

  • Centralized Multi-CDN Performance Monitoring: By using Response Mode, you can gather performance data (like TTFB and throughput) from all your CDNs and send it to a single analytics dashboard. This gives you a unified, real-time view of your entire delivery infrastructure (see the collector sketch after this list).
  • Smarter Content Steering: With better data, content steering solutions can make more intelligent decisions, directing users to the best-performing CDN on a per-session basis.
  • Unified Analytics: Event Mode allows you to create a single, standardized pipeline for all your analytics, from QoE and ad tracking to user engagement metrics like "continue watching" and "top 10" lists.
  • Enhanced Ad Insertion: We can now provide ad decision servers with much richer context (e.g., is the player in the background? what is the device's available bitrate?) and standardize ad-tracking beacons for SSAI, CSAI, and SGAI.
  • Live Stream Synchronization: We even demonstrated a proof-of-concept for synchronizing thousands of live players by having a central server that receives CMCD and uses CMSD to send timing information back to each player.
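
As a rough illustration of the first two use cases, here is a small collector-side TypeScript sketch that aggregates hypothetical Response-mode reports per CDN and picks the current best performer by average TTFB. The report fields and the simple average are assumptions for the example, not part of the specification; a production collector and steering service would be considerably more involved.

```typescript
// Collector-side sketch for the multi-CDN monitoring and content steering
// ideas above. The report shape (sid, cdn, ttfb) is invented for this example.

interface CmcdResponseReport {
  sid: string;   // session id
  cdn: string;   // which CDN served the request (illustrative field)
  ttfb: number;  // time-to-first-byte in milliseconds
}

const ttfbByCdn = new Map<string, number[]>();

// Ingest a single report, e.g. from an HTTP endpoint receiving player beacons.
function ingest(report: CmcdResponseReport): void {
  const samples = ttfbByCdn.get(report.cdn) ?? [];
  samples.push(report.ttfb);
  ttfbByCdn.set(report.cdn, samples);
}

// Return the CDN with the lowest average TTFB, e.g. to feed a content
// steering decision on a per-session basis.
function bestCdn(): string | undefined {
  let best: string | undefined;
  let bestAvg = Infinity;
  for (const [cdn, samples] of ttfbByCdn) {
    const avg = samples.reduce((sum, s) => sum + s, 0) / samples.length;
    if (avg < bestAvg) {
      bestAvg = avg;
      best = cdn;
    }
  }
  return best;
}

// Example usage with made-up numbers:
ingest({ sid: "abc", cdn: "cdn-a", ttfb: 42 });
ingest({ sid: "abc", cdn: "cdn-b", ttfb: 95 });
console.log(bestCdn()); // "cdn-a"
```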

From Theory to Practice: Our Learnings

At Qualabs, we believe in putting theory into practice. We've been actively contributing to the implementation of CMCD v2 in open-source players like dash.js and Shaka Player.

This hands-on work has been invaluable. We've learned a lot about the practical considerations, such as the need for robust client-side configuration to manage multiple data targets, filter keys, and handle different ad insertion scenarios.
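
To give a feel for what that configuration surface can look like, here is a hypothetical TypeScript shape for a multi-target CMCD v2 setup. It is deliberately not the real dash.js or Shaka Player API; the names and fields are placeholders that illustrate the kind of per-target control described above.

```typescript
// Hypothetical configuration shape for a CMCD v2-capable player. This is
// NOT the actual dash.js or Shaka Player API; it only illustrates the kind
// of per-target control (modes, key filtering, event selection) we found
// ourselves needing in practice.

type CmcdMode = "request" | "response" | "event";

interface CmcdTarget {
  mode: CmcdMode;         // which reporting mode this target uses
  url?: string;           // collector endpoint (request mode rides on the segment request itself)
  enabledKeys?: string[]; // key filtering: only send these CMCD keys
  events?: string[];      // event mode: which player events trigger a report
}

interface CmcdConfig {
  enabled: boolean;
  sessionId: string;
  contentId: string;
  targets: CmcdTarget[];  // multiple independent data targets
}

// Example: classic request-mode data to the CDN, plus response and event
// reports to an analytics collector, each with its own filtering.
const cmcdConfig: CmcdConfig = {
  enabled: true,
  sessionId: "6e2fb550-c457-11e9-bb97-0800200c9a66",
  contentId: "movie-1234",
  targets: [
    { mode: "request" },
    { mode: "response", url: "https://analytics.example.com/cmcd", enabledKeys: ["sid", "cid", "ttfb"] },
    { mode: "event", url: "https://analytics.example.com/cmcd", events: ["play", "pause", "error"] },
  ],
};
```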

To aid in this effort, we've also developed and open-sourced two tools:

  • CMCD Analyzer: A proof-of-concept tool to help developers debug CMCD v2 implementations by visualizing the data sent in all three modes.
  • CMCD Toolkit: A set of tools for collecting and analyzing CMCD v2 data both locally and in the cloud.

A Week of Synergy and Progress

One of the best parts of the event was the incredible synergy among the attendees. The discussions didn't stop at the end of the presentations; throughout the week, we collaborated on some of the key challenges facing the specification. For instance, we highlighted an issue in the then-current CMCD v2 draft regarding how to properly track interstitial content, which is the basis of Server-Guided Ad Insertion (SGAI). I'm thrilled to report that, thanks to this focused collaboration, the issue was fixed in the specification just one week after the event.

Furthermore, we also started creating a Reporting Schema for MPEG DASH. This will allow content providers to configure the desired CMCD v2 behavior in the players directly from the manifest, giving them more granular control over their analytics.

Throughout the sessions, CMCD v2 kept popping up as a recurring theme. Gwendal Simon’s talk on MoQ Analytics proposed using CMCD in clever ways to enable real-time insights, which resonated deeply with our own work. Will Law and Yuriy Reznik also presented a novel way to track media quality by sending quality metrics across the entire video workflow chain to a CMCD collector.

Sessions & Technical Highlights

With so many talks packed into just two days, there was a lot to absorb. The venue itself buzzed with technical discussions and collaborative energy. The agenda covered a wide spectrum—from ad insertion to quality metrics, from metadata to emerging standards. Here are a few sessions that stood out:

  • Server-Guided Ad Insertion — Pieter-Jan Speelmans | Pieter-Jan delivered a grounded, insightful look at implementing SGAI. His discussion around the real-world hurdles of deployment was spot on.
  • CMSD + Media Quality Assessment — Yuriy Reznik & Will Law | Yuriy offered a technically rich dive, adding analytical depth, while Will translated it into actionable strategies for enhancing QoE using metadata.
  • DVB-I Prospects and Findings — Bram Tullemans | A clear summary of DVB-I initiatives across Europe. While the pace of adoption varies, the alignment of standards was an encouraging signal.
  • MoQ Analytics — Gwendal Simon | Gwendal’s conceptual framework was both elegant and promising, proposing CMCD as a powerful enabler for real-time streaming analytics.
  • MOQtal Prototype — Ali C. Begen & Zafer Gurel | Their QUIC-powered, low-latency research demo gave a clear view of where the MoQ future might be heading.
  • MoQ Standards Update — Will Law | Will closed the loop with updates from IETF. As MoQ continues to evolve, his transparency on challenges ahead was both refreshing and motivating.

These sessions captured the direction our industry is heading: more intelligent streaming, better metrics, and community-driven evolution, with practical tools, new protocols, and open challenges alike.

Beyond the Talks: Community and Connection

While the technical content was rich, some of the most meaningful moments happened between sessions. I had the chance to reconnect and brainstorm with industry leaders: Jason Thibeault (SVTA), Olga Kornienko (EZDRM), Jianhua Zhou (V-Nova), Will Law (Akamai), Piers O'Hanlon (BBC), Thasso Griebel (Castlabs), Daniel Silhavy (Fraunhofer FOKUS), and many others. This sense of openness and continued collaboration is what makes the video tech community such a powerful space to be part of.

Final Thoughts

Fraunhofer FOKUS MWS 2025 wasn’t just about emerging protocols or new acronyms; it was about momentum. From green streaming to server-guided ad insertion, from MoQ to CMCD v2, every session pointed to a future that’s more intelligent, flexible, and interconnected.

And honestly? I left feeling both inspired and energized. Can’t wait to keep building, testing, and learning alongside this amazing community.

— Nico
