VIDEO SCENE SEGMENTATION USING VIDEO AND AUDIO FEATURES
Hari Sundaram Shih-Fu Chang
Dept. of Electrical Engineering, Columbia University, 
New York, New York 10027.
Email: {sundaram, sfchang}@ctr.columbia.edu



ABSTRACT
In this paper we present a novel algorithm for video scene segmentation. We model a scene as a semantically consistent chunk of audio-visual data. Central to the segmentation framework is the idea of a finite-memory model. We separately segment the audio and video data into scenes, using the data in the memory. The audio segmentation algorithm determines the correlations amongst the envelopes of audio features, while the video segmentation algorithm determines the correlations amongst shot key-frames. In both cases, scene boundaries are determined at local correlation minima. We then fuse the resulting segments using a nearest-neighbor algorithm, further refined by a time-alignment distribution derived from the ground truth. The algorithm was tested on a difficult data set, the first hour of a commercial film, with good results: it achieves a scene segmentation accuracy of 84%.
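The boundary-detection step described above (placing scene boundaries at local minima of a correlation sequence) can be sketched as follows. This is an illustrative simplification, not the paper's actual method: the function name, the window size, and the use of the sequence mean as a significance threshold are all assumptions made here for clarity.

```python
def correlation_minima_boundaries(corr, window=1):
    """Return indices where corr has a local minimum within +/- window
    samples and dips below the sequence mean (a hypothetical threshold
    standing in for a more principled significance test)."""
    mean_corr = sum(corr) / len(corr)
    boundaries = []
    for i in range(window, len(corr) - window):
        neighborhood = corr[i - window:i + window + 1]
        # a boundary candidate: the lowest point in its neighborhood,
        # and low in absolute terms relative to the whole sequence
        if corr[i] == min(neighborhood) and corr[i] < mean_corr:
            boundaries.append(i)
    return boundaries

# toy correlation sequence: the sharp dips mark candidate scene changes
corr = [0.9, 0.85, 0.3, 0.88, 0.92, 0.91, 0.25, 0.87]
print(correlation_minima_boundaries(corr))  # -> [2, 6]
```

In the full framework, one such sequence would be computed from audio feature envelopes and another from shot key-frame similarities, with the two boundary lists fused afterwards.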