<h1>Detecting Scene Changes in Audiovisual Content</h1>
<p>When watching a movie or an episode of a TV show, we experience a cohesive narrative that unfolds before us, often without giving much thought to the underlying structure that makes it all possible. However, movies and episodes are not atomic units, but rather composed of smaller elements such as frames, shots, scenes, sequences, and acts. Understanding these elements and how they relate to each other is crucial for tasks such as video summarization and highlight detection, content-based video retrieval, dubbing quality assessment, and video editing. At Netflix, such workflows are performed hundreds of times a day by many teams around the world, so investing in algorithmically assisted tooling around content understanding can reap outsized rewards.</p>
<p>While segmentation into more granular units like frames and shots is either trivial or can rely primarily on <a href="https://arxiv.org/abs/2008.04838" rel="noopener ugc nofollow" target="_blank">pixel-based information</a>, higher-order segmentation¹ requires a more nuanced understanding of the content, such as the narrative or emotional arcs. Furthermore, some cues can be better inferred from modalities other than the video, e.g., the screenplay or the audio and dialogue track. Scene boundary detection, in particular, is the task of identifying the transitions between scenes, where a scene is defined as a continuous sequence of shots that take place in the same time and location (often with a relatively static set of characters) and share a common action or theme.</p>
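<p>To make the contrast concrete, here is a minimal sketch of the kind of pixel-based shot boundary detection mentioned above: comparing color histograms of consecutive frames and flagging a cut where they differ sharply. This is a simplified illustration, not the method from the linked paper or Netflix's actual pipeline; the <code>shot_boundaries</code> function and its threshold are assumptions for this example.</p>

```python
import numpy as np

def shot_boundaries(frames, threshold=0.5):
    """Flag shot boundaries where the per-frame color histogram
    changes sharply between consecutive frames.

    frames: list of HxWx3 uint8 arrays (RGB frames).
    Returns indices i such that a cut lies between frame i-1 and i.
    """
    hists = []
    for frame in frames:
        # 8-bin histogram per color channel, normalized to sum to 1.
        h = np.concatenate([
            np.histogram(frame[..., c], bins=8, range=(0, 256))[0]
            for c in range(3)
        ]).astype(float)
        hists.append(h / h.sum())
    boundaries = []
    for i in range(1, len(hists)):
        # Large L1 distance between consecutive histograms suggests a hard cut.
        if np.abs(hists[i] - hists[i - 1]).sum() > threshold:
            boundaries.append(i)
    return boundaries

# Synthetic clip: 5 dark frames then 5 bright frames -> one cut at index 5.
dark = [np.full((4, 4, 3), 10, dtype=np.uint8)] * 5
bright = [np.full((4, 4, 3), 240, dtype=np.uint8)] * 5
print(shot_boundaries(dark + bright))  # → [5]
```

<p>Note how this heuristic sees only pixel statistics: it can find a hard cut, but it has no notion of whether two shots belong to the same time, location, or theme, which is precisely why scene boundary detection needs richer, multimodal signals.</p>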
<p><a href="https://netflixtechblog.com/detecting-scene-changes-in-audiovisual-content-77a61d3eaad6">Read More</a></p>