%0 Conference Proceedings
%F cdc_shou_cvpr17
%A Shou, Zheng
%A Chan, Jonathan
%A Zareian, Alireza
%A Miyazawa, Kazuyuki
%A Chang, Shih-Fu
%T CDC: Convolutional-De-Convolutional Networks for Precise Temporal Action Localization in Untrimmed Videos
%B Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
%X Temporal action localization is an important yet challenging problem. Given a long, untrimmed video consisting of multiple action instances and complex background contents, we need not only to recognize their action categories, but also to localize the start time and end time of each instance. Many state-of-the-art systems use segment-level classifiers to select and rank proposal segments of pre-determined boundaries. However, a desirable model should move beyond segment-level and make dense predictions at a fine granularity in time to determine precise temporal boundaries. To this end, we design a novel Convolutional-De-Convolutional (CDC) network that places CDC filters on top of 3D ConvNets, which have been shown to be effective for abstracting action semantics but reduce the temporal length of the input data. The proposed CDC filter performs the required temporal upsampling and spatial downsampling operations simultaneously to predict actions at the frame-level granularity. It is unique in jointly modeling action semantics in space-time and fine-grained temporal dynamics. We train the CDC network in an end-to-end manner efficiently. Our model not only achieves superior performance in detecting actions in every frame, but also significantly boosts the precision of localizing temporal boundaries. Finally, the CDC network demonstrates very high efficiency, processing 500 frames per second on a single GPU server. Source code and trained models are available online at https://bitbucket.org/columbiadvmm/cdc
%U http://dvmmweb.cs.columbia.edu/files/CVPR17_Zheng_CDC.pdf
%D 2017
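
The abstract describes a CDC filter that simultaneously upsamples in time and downsamples in space on top of a 3D ConvNet. Below is a minimal, illustrative PyTorch sketch of that idea, not the authors' released implementation (see the Bitbucket link above); the class name `CDCLayer`, the 4x4 spatial extent, and the temporal upsampling factor of 2 are assumptions chosen to match the high-level description.

```python
import torch
import torch.nn as nn


class CDCLayer(nn.Module):
    """Sketch of a Convolutional-De-Convolutional (CDC) layer.

    Collapses the H x W spatial extent of a 3D ConvNet feature map to 1 x 1
    (spatial downsampling by convolution) while emitting `up` outputs per
    input time step (temporal upsampling), so an (N, C, L, 4, 4) input
    becomes (N, C_out, up * L, 1, 1). Hyper-parameters are illustrative.
    """

    def __init__(self, in_channels, out_channels, spatial_size=4, up=2):
        super().__init__()
        self.up = up
        self.out_channels = out_channels
        # One 3D convolution whose spatial kernel covers the full spatial
        # map; its output channels encode `up` temporal sub-steps at once.
        self.conv = nn.Conv3d(in_channels, out_channels * up,
                              kernel_size=(1, spatial_size, spatial_size))

    def forward(self, x):              # x: (N, C, L, 4, 4)
        y = self.conv(x)               # (N, C_out * up, L, 1, 1)
        n, _, l, h, w = y.shape
        # Interleave the `up` predictions per step along the time axis.
        y = y.view(n, self.out_channels, self.up, l, h, w)
        y = y.permute(0, 1, 3, 2, 4, 5).reshape(
            n, self.out_channels, l * self.up, h, w)
        return y                       # (N, C_out, up * L, 1, 1)


if __name__ == "__main__":
    # Example: a clip reduced by the 3D ConvNet to L/8 = 4 time steps and a
    # 4x4 spatial map is expanded back toward frame-level predictions.
    feat = torch.randn(1, 512, 4, 4, 4)
    print(CDCLayer(512, 21)(feat).shape)  # torch.Size([1, 21, 8, 1, 1])
```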