Jump to: Download | Abstract | Contact | BibTeX reference | EndNote reference

ACMMM15:Ye

Guangnan Ye, Yitong Li, Hongliang Xu, Dong Liu, Shih-Fu Chang. EventNet: A Large Scale Structured Concept Library for Complex Event Detection in Video. In ACM International Conference on Multimedia (ACM MM), Brisbane, Australia, October 2015.

Download

Download paper: Adobe portable document (pdf)

Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

Abstract

Event-specific concepts are semantic concepts designed specifically for the events of interest, and they can serve as a mid-level representation of complex events in videos. Existing methods focus only on defining event-specific concepts for a small number of pre-defined events and cannot handle novel, unseen events. This motivates us to build a large-scale event-specific concept library that covers as many real-world events and their concepts as possible. Specifically, we choose WikiHow, an online forum containing a large number of how-to articles on human daily life events. We perform a coarse-to-fine event discovery process and discover 500 events from WikiHow articles. We then use each event name as a query to search YouTube and discover event-specific concepts from the tags of the returned videos. After an automatic filtering process, we end up with 95,321 videos and 4,490 concepts. We train a Convolutional Neural Network (CNN) model on the 95,321 videos over the 500 events, and use the model to extract deep learning features from video content. With the learned deep features, we train 4,490 binary SVM classifiers as the event-specific concept library. The concepts and events are further organized in a hierarchical structure defined by WikiHow, and the resulting concept library is called EventNet. Finally, the EventNet concept library is used to generate concept-based representations of event videos. To the best of our knowledge, EventNet represents the first video event ontology that organizes events and their concepts into a semantic structure. It offers great potential for event retrieval and browsing. Extensive experiments on the zero-shot event retrieval task, where no training samples are available, show that the proposed EventNet concept library consistently and significantly outperforms the state of the art (such as the 20K ImageNet concepts trained with CNN) by a large margin of up to 207%. We also show that the EventNet structure can help users find relevant concepts for novel event queries that cannot be well handled by conventional text-based semantic analysis alone. The unique two-step approach of first applying event detection models and then detecting event-specific concepts also offers great potential to improve the efficiency and accuracy of event recounting, since only a very small number of event-specific concept classifiers need to be fired after event detection.
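
To make the two-step scheme above concrete, the following is a minimal Python sketch, not the authors' released code. It stands in for the paper's 500-way CNN event model with a scikit-learn classifier over precomputed CNN features and, as in the paper, uses one binary linear SVM per event-specific concept; every event name, concept name, and data point below is hypothetical.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    d = 128                                  # CNN feature dimensionality (assumed)

    # Synthetic stand-in data for two toy events (the paper uses 500 events
    # discovered from WikiHow and 4,490 concepts mined from YouTube tags).
    events = ["grooming an animal", "skiing"]
    X = rng.normal(size=(200, d))            # precomputed CNN features, one row per video
    y_event = rng.integers(0, 2, size=200)   # event label per video

    # Step 1 model: the event detector over CNN features (a CNN in the paper;
    # a logistic-regression stand-in here).
    event_model = LogisticRegression(max_iter=1000).fit(X, y_event)

    # Step 2 models: one binary SVM per event-specific concept, trained on the
    # same deep features (labels are random here, tag-derived in the paper).
    concept_svms = {
        "grooming an animal": {c: LinearSVC(C=1.0).fit(X, rng.integers(0, 2, size=200))
                               for c in ("dog", "brush")},
        "skiing": {c: LinearSVC(C=1.0).fit(X, rng.integers(0, 2, size=200))
                   for c in ("snow slope", "ski lift")},
    }

    def recount(feature):
        """Detect the event first, then fire only that event's concept classifiers."""
        x = feature.reshape(1, -1)
        event = events[int(event_model.predict(x)[0])]
        scores = {name: float(svm.decision_function(x)[0])
                  for name, svm in concept_svms[event].items()}
        return event, scores

    detected, concept_scores = recount(rng.normal(size=d))
    print(detected, concept_scores)

Because only the detected event's few concept classifiers are evaluated, recounting cost scales with the size of a single event's concept set rather than with all 4,490 concepts.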

Contact

Guangnan Ye
Yitong Li
Dong Liu
Shih-Fu Chang

BibTeX Reference

@InProceedings{ACMMM15:Ye,
   Author = {Ye, Guangnan and Li, Yitong and Xu, Hongliang and Liu, Dong and Chang, Shih-Fu},
   Title = {EventNet: A Large Scale Structured Concept Library for Complex Event Detection in Video},
   BookTitle = {ACM International Conference on Multimedia (ACM MM)},
   Address = {Brisbane, Australia},
   Month = {October},
   Year = {2015}
}

EndNote Reference

Get EndNote Reference (.ref)

 