
icmr11:consumervideo

Yu-Gang Jiang, Guangnan Ye, Shih-Fu Chang, Daniel Ellis, Alexander C. Loui. Consumer Video Understanding: A Benchmark Database and An Evaluation of Human and Machine Performance. In Proceedings of ACM International Conference on Multimedia Retrieval (ICMR), oral session, 2011.

Download [help]

Download paper: Adobe portable document (pdf)

Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

Abstract

Recognizing visual content in unconstrained videos has become a very important problem for many applications. Existing corpora for video analysis lack scale and/or content diversity, and have thus limited the needed progress in this critical area. In this paper, we describe and release a new database called CCV, containing 9,317 web videos over 20 semantic categories, including events like "baseball" and "parade", scenes like "beach", and objects like "cat". The database was collected with extra care to ensure relevance to consumer interest and originality of video content without post-editing. Such videos typically have very little textual annotation and thus can benefit from the development of automatic content analysis techniques. We used the Amazon Mechanical Turk (MTurk) platform to perform manual annotation, and studied the behaviors and performance of human annotators on MTurk. We also compared the abilities of humans and machines in understanding consumer video content. For the latter, we implemented automatic classifiers using a state-of-the-art multi-modal approach that achieved top performance in the recent TRECVID multimedia event detection task. Results confirmed that classifiers fusing audio and video features significantly outperform single-modality solutions. We also found that humans are much better at understanding categories of nonrigid objects such as "cat", while current automatic techniques are relatively close to humans in recognizing categories that have distinctive background scenes or audio patterns.
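As a rough illustration of the audio-visual late fusion idea mentioned in the abstract, the sketch below trains one SVM per modality and averages their scores. This is a minimal example with synthetic data: the feature dimensions, RBF kernel, and 0.6/0.4 fusion weights are illustrative assumptions, not the paper's actual features or classifier pipeline.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-ins for precomputed per-video features, e.g. a visual
# bag-of-words histogram and an audio bag-of-words histogram
# (dimensions here are arbitrary placeholders).
n_train, n_test = 200, 50
X_vis_train = rng.random((n_train, 500))   # assumed visual codebook size
X_aud_train = rng.random((n_train, 100))   # assumed audio codebook size
y_train = rng.integers(0, 2, n_train)      # 1 = category present (e.g. "parade")
X_vis_test = rng.random((n_test, 500))
X_aud_test = rng.random((n_test, 100))

# One kernel SVM per modality, with probability outputs for fusion.
vis_clf = SVC(kernel="rbf", probability=True).fit(X_vis_train, y_train)
aud_clf = SVC(kernel="rbf", probability=True).fit(X_aud_train, y_train)

# Late fusion: weighted average of per-modality scores; the weights
# are illustrative and would normally be tuned on a validation set.
fused = 0.6 * vis_clf.predict_proba(X_vis_test)[:, 1] \
      + 0.4 * aud_clf.predict_proba(X_aud_test)[:, 1]
print(fused[:5])  # fused detection scores for the first five test videos

In practice, per-category fusion weights can be chosen by cross-validation, which is one common way such audio-visual combinations outperform either modality alone.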

Contact

Yu-Gang Jiang
Guangnan Ye
Shih-Fu Chang

BibTex Reference

@InProceedings{icmr11:consumervideo,
   Author = {Jiang, Yu-Gang and Ye, Guangnan and Chang, Shih-Fu and Ellis, Daniel and Loui, Alexander C.},
   Title = {Consumer Video Understanding: A Benchmark Database and An Evaluation of Human and Machine Performance},
   BookTitle = {Proceedings of ACM International Conference on Multimedia Retrieval (ICMR), oral session},
   Year = {2011}
}

EndNote Reference [help]

Get EndNote Reference (.ref)

 


This document was translated automatically from BibTeX by bib2html (Copyright 2003 © Eric Marchand, INRIA, Vista Project).