

Joseph G Ellis, W Sabrina Lin, Ching-Yung Lin, Shih-Fu Chang. Predicting Evoked Emotions in Video. In Multimedia (ISM), 2014 IEEE International Symposium on, Pages 287-294, 2014.

Download

Download paper: Adobe portable document (pdf)

Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.


Understanding how human emotion is evoked by visual content is a task that we as people perform every day, but one that machines have not yet mastered. In this work we address the problem of predicting the intended evoked emotion at given points within movie trailers. Movie trailers are carefully curated to elicit distinct and specific emotional responses from viewers, and are therefore well suited for emotion prediction. However, current emotion recognition systems struggle to bridge the "affective gap": the difficulty of modeling high-level human emotions with low-level audio and visual features. To address this problem, we propose a mid-level concept feature based on detectable movie-shot concepts that we believe are closely tied to emotions; examples of these concepts are "Fight", "Rock Music", and "Kiss". We also create two datasets: the first with shot-level concept annotations for learning our concept detectors, and a second, separate dataset with emotion annotations taken throughout the trailers using the two-dimensional arousal-valence model of emotion. We report the performance of our concept detectors, and show that by using the output of these detectors as a mid-level representation of the movie shots we can predict the evoked emotion throughout a trailer more accurately than with low-level features.
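The pipeline the abstract describes — shot-level concept detector scores serving as a mid-level feature for arousal/valence prediction — can be sketched roughly as follows. This is a hypothetical illustration, not the authors' code: the concept names come from the abstract, but the linear model and all weights are invented stand-ins for whatever predictor would actually be learned from the emotion-annotated trailer dataset.

```python
# Hypothetical sketch of the mid-level concept-feature idea (not the
# authors' implementation). Each movie shot is represented by its
# concept-detector scores; a toy linear model maps those scores to a
# point in the two-dimensional valence/arousal space.

CONCEPTS = ["fight", "rock_music", "kiss"]  # example concepts from the paper

# Invented (valence, arousal) contributions per concept; a real system
# would learn these from the emotion-annotated dataset.
WEIGHTS = {
    "fight":      (-0.6, 0.8),   # unpleasant, highly arousing
    "rock_music": ( 0.3, 0.7),   # pleasant, arousing
    "kiss":       ( 0.8, 0.4),   # pleasant, moderately arousing
}

def predict_emotion(concept_scores):
    """Map a shot's concept-detector scores to a (valence, arousal) pair."""
    valence = sum(concept_scores.get(c, 0.0) * WEIGHTS[c][0] for c in CONCEPTS)
    arousal = sum(concept_scores.get(c, 0.0) * WEIGHTS[c][1] for c in CONCEPTS)
    return valence, arousal

# Example shot that the detectors score as mostly "fight" with some music:
shot_scores = {"fight": 0.9, "rock_music": 0.5}
print(predict_emotion(shot_scores))  # negative valence, high arousal
```

The point of the mid-level representation is that each feature dimension ("fight", "kiss", ...) is semantically meaningful, so the step from features to emotion is much shorter than it would be from raw audio-visual descriptors.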


Joseph Ellis
Ching-Yung Lin
Shih-Fu Chang

BibTex Reference

   Author = {Ellis, Joseph G and Lin, W Sabrina and Lin, Ching-Yung and Chang, Shih-Fu},
   Title = {Predicting Evoked Emotions in Video},
   BookTitle = {Multimedia (ISM), 2014 IEEE International Symposium on},
   Pages = {287--294},
   Year = {2014}

EndNote Reference

Get EndNote Reference (.ref)



This document was translated automatically from BibTeX by bib2html (Copyright 2003 © Eric Marchand, INRIA, Vista Project).