%0 Conference Proceedings
%F mazloom2015encoding
%A Mazloom, Masoud
%A Habibian, Amirhossein
%A Liu, Dong
%A Snoek, Cees GM
%A Chang, Shih-Fu
%T Encoding Concept Prototypes for Video Event Detection and Summarization
%B International Conference on Multimedia Retrieval (ICMR)
%X This paper proposes a new semantic video representation for few- and zero-example event detection and unsupervised video event summarization. Different from existing work, which obtains a semantic representation by training concepts over images or entire video clips, we propose an algorithm that learns a set of relevant frames as the concept prototypes from web video examples, without the need for frame-level annotations, and uses them to represent an event video. We formulate the problem of learning the concept prototypes as seeking the frames closest to the densest region in the feature space of video frames from both positive and negative training videos of a target concept. We study the behavior of our video event representation based on concept prototypes by performing three experiments on challenging web videos from the TRECVID 2013 multimedia event detection task and the MED-summaries dataset. Our experiments establish that i) event detection accuracy increases when mapping each video into the concept prototype space; ii) zero-example event detection accuracy increases when each frame of a video is analyzed individually in the concept prototype space, rather than considering the video as a whole; and iii) unsupervised video event summarization using concept prototypes is more accurate than using video-level concept detectors.
%U http://www.ee.columbia.edu/ln/dvmm/publications/15/mazloom2015encoding.pdf
%D 2015