LLMs on Air: Gen AI Use Cases for News, Sports, and Entertainment
Streaming Media Connect 2025 featured a session on using generative AI in news, sports, and entertainment, moderated by Brian Ring of Ring Digital. The panel included industry veterans Andy Beach, Pete Scott of Play Anywhere, and Raffi Mamalian of Sinclair. They discussed the transformative potential of AI in media production.
Key topics included the ethical use of AI, the challenges of AI dubbing, and the future of personalized content. The session highlighted the importance of governance in AI implementation and the evolving role of AI agents in creating dynamic and personalized media experiences. The discussion underscored the need to balance automation with human oversight to maintain authenticity and accuracy in media content.
The ethical use of AI
Ring discussed the recent deal between Lionsgate and the AI company Runway, which is centered on the creation and training of a new AI model customized on Lionsgate’s proprietary catalog. He asked Beach, who has been advising FlickForge, a monetization platform for GenAI images and video, “What do you think Lionsgate wants to get out of this? Is it just getting money to make the tool better? What's your take on using studios to train data and things like this?”
“FlickForge is working on an ethical AI approach on how we allow people to work with LLMs and generative AI and still retain ownership of their brand or even monetize it themselves through what they're doing,” Beach said. “I think what Lionsgate is looking for here is a good important first step for where everybody is going. Everybody who is sitting on an archive of content is now trying to wake up and think about what the implications are of this kind of IP in a data age and how and where they're going to monetize it. This is not necessarily the right approach, but it is a good start. I think it is probably more of a test of just what is the value of this purely as data, not as content [to be licensed] for a deal somewhere, but it is the beginnings of a direction that we're going to see [more of].”
Mamalian of Sinclair talked about the ethical implications of AI in news broadcasting. “It's a delicate approach, especially when you're dealing with local news,” he said. “The community and the authenticity are paramount, and you don't want to lose any of that by influencing the content with AI-generated media avatars and things of that nature. There's going to be a little bit of a time ramp before the public is comfortable with that. And obviously we're at a very delicate time in broadcast television, and so ensuring that we keep that authenticity is important. That said, there are a lot of different ways where we can utilize AI to empower our producers, our journalists, and things of that nature to get more immersive with this storytelling and get them to spend more time going several layers deep with a particular story.”
The challenges of AI dubbing
Ring brought up some of the challenges of effective AI dubbing. He asked Mamalian how Sinclair is currently handling the various issues around accurate AI translation for their programming.
“Last year, the Tennis Channel was launching internationally, and we wanted to experiment with lip dubbing, lip sync, and translation into multiple languages,” Mamalian said. “We launched Tennis Channel International in Germany, Austria, Switzerland, Spain, and India, and we wanted to see what those translations would look like and see if the lip syncing would work. And to a large part it did. It's not instant. It still takes a lot of editing work to make sure the translations are accurate, and they're timed correctly because certain phrases will be longer in one language and shorter in another. And so that can create a lot of inconsistency with the cadence of the speaker. We also tested it in a round table environment. You have [an] interruption type of situation where everybody's talking over each other, and it has a lot of difficulty [handling] that. One workaround we figured out with that was if we can just take the individual feeds and have them separately, then the AI will be able to pick that up separately and be able to identify the pieces more clearly.”
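Mamalian's workaround maps to a simple pipeline shape: keep each speaker's feed isolated, run translation per feed, and check each translated line against the original timing so a phrase that runs long in the target language gets flagged for an editor rather than colliding with the next cue. The sketch below is a hypothetical illustration of that shape only; the translation and duration-estimate helpers are placeholders, not Sinclair's or Tennis Channel International's actual tooling.

```python
# Minimal sketch of the per-speaker workaround described above: each isolated
# feed is translated on its own, then translated segments are checked against
# the original timing so longer phrases are flagged for human retiming.
# All function names here are hypothetical placeholders, not a vendor API.
from dataclasses import dataclass

@dataclass
class Segment:
    speaker: str
    start: float        # seconds into the program
    end: float
    text: str           # source-language transcript

def translate(text: str, target_lang: str) -> str:
    """Placeholder for whatever machine-translation/dubbing service is in use."""
    raise NotImplementedError

def estimated_speech_seconds(text: str, lang: str) -> float:
    """Rough duration estimate; a real system would use TTS timing, not word counts."""
    words = len(text.split())
    return words / 2.5   # crude assumption: ~2.5 spoken words per second

def dub_speaker_feed(segments: list[Segment], target_lang: str) -> list[dict]:
    """Translate one isolated speaker feed and flag segments that need editing."""
    dubbed = []
    for seg in segments:
        translated = translate(seg.text, target_lang)
        slot = seg.end - seg.start
        needed = estimated_speech_seconds(translated, target_lang)
        dubbed.append({
            "speaker": seg.speaker,
            "start": seg.start,
            "end": seg.end,
            "text": translated,
            # Longer translations break the speaker's cadence, so flag them
            # for a human editor rather than letting them spill over.
            "needs_retiming": needed > slot,
        })
    return dubbed
```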
The future of personalized content
Ring said to the panel, “What are the other cutting-edge ways that LLMs can be used to create more dynamic and personalized entertainment experiences for viewers?” He asked Pete Scott of Play Anywhere to comment on this, since it is so relevant to what they do.
“I think what you're going to start to see is when you can combine databases of affinities that people have,” Scott said. “I'm a LeBron James fan; I love the Lakers. I think you're going to see more and more snackable content that's personalized for all the users. You're starting to see a little of that on Thursday Night Football, where they're basically personalizing ads based on what you bought in the Amazon store. So, if I'm a pet owner and I buy dog food, maybe the ad that I get at the ad break for Thursday Night Football is going to be a pet ad as opposed to something else. You're going to see large language models and AI agents be the short order cooks that are basically putting together personalized content, ads, e-commerce, etc., for the user to dive into and just be more engaged.”
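Scott's "short order cook" framing is essentially a join between a viewer's declared affinities and their behavioral signals, with an agent picking whichever creative overlaps most with what it knows about that viewer. The toy sketch below shows that selection step; the profile fields, tags, and scoring are illustrative assumptions, not any platform's actual targeting logic.

```python
# Toy illustration of affinity-based creative selection: join declared
# affinities (favorite team, favorite player) with behavioral signals
# (recent purchases), then pick the ad or clip with the most overlap.
from dataclasses import dataclass, field

@dataclass
class ViewerProfile:
    favorite_players: set[str] = field(default_factory=set)
    favorite_teams: set[str] = field(default_factory=set)
    recent_purchases: set[str] = field(default_factory=set)   # e.g. {"dog food"}

@dataclass
class Creative:
    name: str
    tags: set[str]       # e.g. {"pets"}, {"Lakers", "highlight"}

def score(creative: Creative, viewer: ViewerProfile) -> int:
    """Count how many of the creative's tags match something we know about the viewer."""
    signals = viewer.favorite_players | viewer.favorite_teams | viewer.recent_purchases
    return len(creative.tags & signals)

def pick_creative(candidates: list[Creative], viewer: ViewerProfile) -> Creative:
    """Choose the ad or snackable clip with the most overlap with the viewer's signals."""
    return max(candidates, key=lambda c: score(c, viewer))

# Usage: a dog-food buyer watching the game gets the pet spot at the ad break.
viewer = ViewerProfile(favorite_players={"LeBron James"},
                       favorite_teams={"Lakers"},
                       recent_purchases={"dog food"})
ads = [Creative("pet food spot", {"dog food", "pets"}),
       Creative("generic truck spot", {"trucks"})]
print(pick_creative(ads, viewer).name)   # -> "pet food spot"
```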
Balancing automation and human oversight to maintain authenticity and accuracy in media content
The discussion repeatedly returned to this theme: automation is most valuable when paired with the human oversight that keeps media content authentic and accurate.
Ring said, “Let's take a pivot a little bit into the FAST world. Andy, I know you did some work in FAST, with Fremantle, that was a little more on interactivity, but why don't you throw in here a little bit about AI playlisting, you've seen some of that stuff.”
“It's not a single LLM element here that we're talking about that puts this together,” Beach said. “This is where the power of an agent-based workflow comes together because you need something that goes and understands the content at an intimate level, and there are really two different pieces of that. There's the traditional video indexer, [which] will do a frame-based decomposition of every picture in this and understand every object and the transcript. Then there's the temporal understanding, and that's more like what we're getting with LLMs today, which [gives] a summary of what happened in a video. And that is important because it captures something over time. Traditional metadata tooling isn't going to catch what a zoom shot or car crash is because that's something that happens over multiple frames, and they're really focused on individual frame decomposition as part of what they do.
“So those are the first two pieces of the puzzle that you need to [have] agents that understand the lens of the content. Then we need something like what Pete was talking about, that intimately understands me, all the data about how I've consumed things, all the data that I've freely given, both the biased and the unbiased information about how I watch stuff. He might say he's really a LeBron James fan, but he probably also has a couple people he hate watches, so he doesn't want to say he's a fan of them, but he's always going to make sure he watches them because he wants to check out what they're doing. And so that might be unbiased information that a system pays attention to. And then you start doing the analysis. The LLM has all the data from those three different vectors and maybe a little bit from the social zeitgeist of what's being talked about, and that starts to be the blend of what I'm going to get fed.
“I think the important part of that is that we don't just go down a rabbit hole of constantly reinforcing only the things you like. Part of personalization is challenging you as well. What are the new things that are outside of this, and how often should I inject something new? And does it help reinforce the things that you really like, or does it show you a new branch of something you didn't realize that you liked? And that's where these systems start getting more sophisticated.”
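Beach's description amounts to a weighted blend over several signal vectors (content understanding, the viewer's stated and behavioral history, and a social-trend signal), plus a deliberate exploration step so the playlist doesn't only reinforce what the viewer already likes. The sketch below illustrates that blend-plus-novelty idea with assumed weights and field names; it is not a description of any production recommender.

```python
# Minimal sketch of the blend described above: per-signal scores are combined
# into one ranking, and a small exploration rate occasionally injects a
# lower-ranked item so the playlist can surface something new.
import random

def blended_score(item: dict, weights: dict) -> float:
    """Weighted sum over per-signal scores, each assumed to be in [0, 1]."""
    return sum(weights[sig] * item.get(sig, 0.0) for sig in weights)

def build_playlist(candidates: list, slots: int,
                   explore_rate: float = 0.2, seed=None) -> list:
    rng = random.Random(seed)
    # Illustrative weights for the signal vectors mentioned in the discussion.
    weights = {"content_match": 0.4, "viewer_history": 0.4, "social_buzz": 0.2}
    ranked = sorted(candidates, key=lambda c: blended_score(c, weights), reverse=True)
    playlist = []
    for _ in range(slots):
        if not ranked:
            break
        # Most slots go to the top-ranked remaining item; a fraction are handed
        # to a random lower-ranked item so recommendations don't only reinforce
        # themselves.
        if len(ranked) > 1 and rng.random() < explore_rate:
            pick = ranked.pop(rng.randrange(1, len(ranked)))
        else:
            pick = ranked.pop(0)
        playlist.append(pick)
    return playlist
```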