

Jiawei Chen, Yin Cui, Guangnan Ye, Dong Liu, Shih-Fu Chang. Event-Driven Semantic Concept Discovery by Exploiting Weakly Tagged Internet Images. In ACM International Conference on Multimedia Retrieval (ICMR), full paper (oral), 2014.

Download

Download paper: Adobe portable document (pdf)

Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by the authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.


Abstract

Analysis and detection of complex events in videos require a semantic representation of the video content. Existing video semantic representation methods typically require users to pre-define an exhaustive concept lexicon and manually annotate the presence of each concept in every video, which is infeasible for real-world video event detection problems. In this paper, we propose an automatic semantic concept discovery scheme that exploits Internet images and their associated tags. Given a target event and its textual description, we crawl a collection of images and their associated tags by performing text-based image search using the noun and verb pairs extracted from the event description. The system first identifies candidate concepts for an event by measuring whether a tag is a meaningful word and visually detectable. Then, a visual model is built for each candidate concept using an SVM classifier with probabilistic output. Finally, the concept models are applied to generate concept-based video representations. We use TRECVID Multimedia Event Detection (MED) 2013 as our video test set and crawl 400K Flickr images to automatically discover 2,000 visual concepts. We show significant performance gains of the proposed concept discovery method across different video event detection tasks, including supervised event modeling over the concept space and semantics-based zero-shot retrieval without training examples. Importantly, the proposed automatic concept discovery method outperforms other well-known concept library construction approaches such as Classemes and ImageNet by a large margin (228%) in zero-shot event retrieval. Finally, subjective human evaluation also confirms the clear superiority of the proposed method in discovering concepts for event representation.
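The per-concept visual model described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature dimensionality, the synthetic data, and the max-pooling of keyframe scores into a video-level score are all assumptions made here for the example; the paper only states that each concept model is an SVM with probabilistic output applied to generate concept-based video representations.

```python
# Hedged sketch of one concept model: an SVM with probabilistic output
# (Platt scaling), trained on images tagged with the concept (positives)
# versus background images (negatives). All data below is synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for visual feature vectors of crawled images.
pos = rng.normal(loc=1.0, size=(50, 16))   # images tagged with the concept
neg = rng.normal(loc=-1.0, size=(50, 16))  # random background images

X = np.vstack([pos, neg])
y = np.array([1] * 50 + [0] * 50)

# probability=True enables probabilistic concept scores via Platt scaling.
model = SVC(kernel="rbf", probability=True, random_state=0).fit(X, y)

# Score a video by pooling concept probabilities over its keyframes
# (max-pooling is an assumed choice for this sketch).
keyframes = rng.normal(loc=1.0, size=(10, 16))
video_score = model.predict_proba(keyframes)[:, 1].max()
print(f"concept score for video: {video_score:.3f}")
```

Repeating this for every candidate concept would yield the concept-based video representation (one probability per concept) used for event modeling and zero-shot retrieval.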


Contact

Guangnan Ye
Dong Liu
Shih-Fu Chang

BibTex Reference

@InProceedings{chen2014event,
   Author = {Chen, Jiawei and Cui, Yin and Ye, Guangnan and Liu, Dong and Chang, Shih-Fu},
   Title = {Event-Driven Semantic Concept Discovery by Exploiting Weakly Tagged Internet Images},
   BookTitle = {ACM International Conference on Multimedia Retrieval (ICMR), full paper (oral)},
   Year = {2014}
}
EndNote Reference

Get EndNote Reference (.ref)



This document was translated automatically from BibTeX by bib2html (Copyright 2003 © Eric Marchand, INRIA, Vista Project).