%0 Conference Proceedings
%F scnn_shou_wang_chang_cvpr16
%A Shou, Zheng
%A Wang, Dongang
%A Chang, Shih-Fu
%T Temporal Action Localization in Untrimmed Videos via Multi-stage CNNs
%B IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
%X We address temporal action localization in untrimmed long videos. This is important because videos in real applications are usually unconstrained and contain multiple action instances plus video content of background scenes or other activities. To address this challenging issue, we exploit the effectiveness of deep networks in temporal action localization via three segment-based 3D ConvNets: (1) a proposal network identifies candidate segments in a long video that may contain actions; (2) a classification network learns a one-vs-all action classification model to serve as initialization for the localization network; and (3) a localization network fine-tunes the learned classification network to localize each action instance. We propose a novel loss function for the localization network that explicitly considers temporal overlap and thereby achieves high temporal localization accuracy. Only the proposal network and the localization network are used during prediction. On two large-scale benchmarks, our approach significantly outperforms other state-of-the-art systems: mAP increases from 1.7% to 7.4% on MEXaction2 and from 15.0% to 19.0% on THUMOS 2014.
%U http://dvmmweb.cs.columbia.edu/files/dvmm_scnn_paper.pdf
%D 2016
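
Note: the abstract's localization loss is said to "explicitly consider temporal overlap" between a candidate segment and a ground-truth action instance. The abstract does not give the formula; below is only a minimal, hypothetical Python sketch of the usual way such overlap is measured, temporal intersection-over-union (IoU), with segment boundaries assumed to be (start, end) times in seconds. It is not the paper's exact loss.

    # Illustrative sketch: temporal IoU between a candidate segment and a ground-truth action.
    # Both inputs are (start, end) tuples in seconds; these names are assumptions, not from the paper.
    def temporal_iou(segment, ground_truth):
        inter_start = max(segment[0], ground_truth[0])
        inter_end = min(segment[1], ground_truth[1])
        intersection = max(0.0, inter_end - inter_start)
        union = (segment[1] - segment[0]) + (ground_truth[1] - ground_truth[0]) - intersection
        return intersection / union if union > 0 else 0.0

    # Example: a proposal covering 2.0-6.0 s against a ground-truth action at 3.0-7.0 s.
    print(temporal_iou((2.0, 6.0), (3.0, 7.0)))  # prints 0.6

An overlap-aware localization loss would typically combine a standard classification term with a penalty that is a function of this IoU, so that segments with low temporal overlap are discouraged even when their class score is high; the exact weighting used in the paper is not stated in the abstract.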