
jiang2018modeling

Yu-Gang Jiang, Zuxuan Wu, Jinhui Tang, Zechao Li, Xiangyang Xue, Shih-Fu Chang. Modeling multimodal clues in a hybrid deep learning framework for video classification. IEEE Transactions on Multimedia, 20(11):3137-3147, 2018.

Download

Download paper: Adobe Portable Document Format (PDF)

Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

Abstract

Videos are inherently multimodal. This paper studies the problem of how to fully exploit the abundant multimodal clues for improved video categorization. We introduce a hybrid deep learning framework that integrates useful clues from multiple modalities, including static spatial appearance information, motion patterns within a short time window, audio information, as well as long-range temporal dynamics. More specifically, we utilize three Convolutional Neural Networks (CNNs) operating on appearance, motion, and audio signals to extract their corresponding features. We then employ a feature fusion network to derive a unified representation with the aim of capturing the relationships among features. Furthermore, to exploit the long-range temporal dynamics in videos, we apply two Long Short-Term Memory (LSTM) networks with extracted appearance and motion features as inputs. Finally, we also propose to refine the prediction scores by leveraging contextual relationships among video semantics. The hybrid deep learning framework is able to exploit a comprehensive set of multimodal features for video classification. Through an extensive set of experiments, we demonstrate that (1) LSTM networks, which model sequences in an explicitly recurrent manner, are highly complementary to CNN models; (2) the feature fusion network, which produces a fused representation through modeling feature relationships, outperforms alternative fusion strategies; (3) the semantic context of video classes can help further refine the predictions for improved performance. Experimental results on two challenging benchmarks, UCF-101 and Columbia Consumer Videos (CCV), provide strong quantitative evidence that our framework achieves promising results: 93.1% on UCF-101 and 84.5% on CCV, outperforming competing methods by clear margins.
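The sketch below is a minimal, illustrative PyTorch rendering of the pipeline the abstract describes: per-modality CNN features (appearance, motion, audio) combined by a feature fusion network, plus LSTMs over appearance and motion feature sequences for long-range temporal dynamics. All layer sizes, module names, and the simple averaging of class scores are assumptions for illustration only, not the authors' implementation.

import torch
import torch.nn as nn

class FusionNet(nn.Module):
    """Fuses clip-level appearance, motion, and audio CNN features (assumed dims)."""
    def __init__(self, dims=(2048, 2048, 128), hidden=1024, num_classes=101):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(sum(dims), hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, appearance, motion, audio):
        # Concatenate modality features and map to class scores.
        return self.fuse(torch.cat([appearance, motion, audio], dim=-1))

class TemporalNet(nn.Module):
    """LSTM over a sequence of frame-level CNN features."""
    def __init__(self, feat_dim=2048, hidden=512, num_classes=101):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, feats):               # feats: (batch, time, feat_dim)
        out, _ = self.lstm(feats)
        return self.classifier(out[:, -1])  # predict from the last time step

# Hypothetical usage: average the scores of the fusion branch and the two
# LSTM branches (a simple stand-in for the paper's combination scheme).
fusion = FusionNet()
app_lstm, mot_lstm = TemporalNet(), TemporalNet()
app_seq = torch.randn(4, 30, 2048)   # 4 videos, 30 frames of appearance CNN features
mot_seq = torch.randn(4, 30, 2048)   # optical-flow CNN features
audio = torch.randn(4, 128)          # clip-level audio CNN features
scores = (fusion(app_seq.mean(1), mot_seq.mean(1), audio)
          + app_lstm(app_seq) + mot_lstm(mot_seq)) / 3
print(scores.shape)                  # torch.Size([4, 101])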

Contact

Yu-Gang Jiang
Zechao Li
Shih-Fu Chang

BibTex Reference

@article{jiang2018modeling,
   Author = {Jiang, Yu-Gang and Wu, Zuxuan and Tang, Jinhui and Li, Zechao and Xue, Xiangyang and Chang, Shih-Fu},
   Title = {Modeling multimodal clues in a hybrid deep learning framework for video classification},
   Journal = {IEEE Transactions on Multimedia},
   Volume = {20},
   Number = {11},
   Pages = {3137--3147},
   Publisher = {IEEE},
   Year = {2018}
}

EndNote Reference

Get EndNote Reference (.ref)


For problems or questions regarding this website, contact the webmaster.

This document was translated automatically from BibTeX by bib2html (Copyright 2003 © Eric Marchand, INRIA, Vista Project).