%0 Journal Article
%F tcsvt:actioncontext
%A Jiang, Yu-Gang
%A Li, Zhenguo
%A Chang, Shih-Fu
%T Modeling Scene and Object Contexts for Human Action Retrieval with Few Examples
%J IEEE Transactions on Circuits and Systems for Video Technology
%V 21
%P 674-681
%X The use of context knowledge is critical for understanding human actions, which typically occur under particular scene settings with certain object interactions. For instance, "driving car" usually happens outdoors, and "kissing" involves two people moving towards each other. In this paper, we investigate the problem of context modeling for human action retrieval. We first identify ten simple object-level action atoms relevant to many human actions, e.g., "people getting closer". With the action atoms and several background scene classes, we show that action retrieval can be improved through modeling action-scene-object dependency. An algorithm inspired by the popular semi-supervised learning paradigm is introduced for this purpose. One important contribution of this work is to show that modeling the dependencies among actions, objects, and scenes can be efficiently achieved with very few examples. Such a solution has tremendous potential in practice as it is often expensive to acquire large sets of training data. Experiments were performed on the challenging Hollywood2 dataset containing 89 movies. The results validate the effectiveness of our approach, achieving a mean average precision of 26% with just 10 examples per action.
%U http://www.ee.columbia.edu/ln/dvmm/publications/11/tcsvt11_actioncontext.pdf
%8 May
%D 2011