

Alejandro Jaimes, Shih-Fu Chang. Learning Structured Visual Detectors From User Input at Multiple Levels. Invited Paper, International Journal of Image and Graphics (IJIG), Special Issue on Image and Video Databases, 1(3):415-44, August 2001.

Download

Download paper: Adobe portable document (pdf)

Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.


In this paper, we propose a new framework for the dynamic construction of structured visual object/scene detectors for content-based retrieval. In the Visual Apprentice, a user defines visual object/scene models via a multiple-level definition hierarchy: a scene consists of objects, which consist of object-parts, which consist of perceptual-areas, which consist of regions. The user trains the system by providing example images/videos and labeling components according to the hierarchy she defines (e.g., an image of two people shaking hands contains two faces and a handshake). As the user trains the system, visual features (e.g., color, texture, motion) are extracted from each example provided, for each node of the user-defined hierarchy. Various machine learning algorithms are then applied to the training data at each node to learn classifiers. The best classifiers and features are then automatically selected for each node, using cross-validation on the training data. The process yields a Visual Object/Scene Detector (e.g., for a handshake), which consists of a hierarchy of classifiers as defined by the user. The Visual Detector classifies new images/videos by first automatically segmenting them and then applying the classifiers according to the hierarchy: regions are classified first, followed by perceptual-areas, object-parts, and objects. We discuss how the concept of Recurrent Visual Semantics can be used to identify domains in which learning techniques such as the one presented can be applied. We then present experimental results using several hierarchies to classify images and video shots (e.g., baseball video, and images that contain handshakes or skies). These results, which show good performance, demonstrate the feasibility and usefulness of dynamic approaches for constructing structured visual object/scene detectors from user input at multiple levels.
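The pipeline the abstract describes — learn a classifier per hierarchy node, pick the best learner by cross-validation on the training data, then classify new inputs bottom-up through the hierarchy — can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the two toy learners, the Node class, and the 2-D feature vectors are all hypothetical stand-ins.

```python
# Hypothetical sketch of per-node classifier selection and hierarchical
# classification, loosely modeled on the Visual Apprentice idea.

def train_nearest_centroid(examples):
    """Learner 1: label a feature vector by its nearest class centroid."""
    sums, counts = {}, {}
    for x, y in examples:
        s = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[y] = counts.get(y, 0) + 1
    centroids = {y: [v / counts[y] for v in s] for y, s in sums.items()}
    return lambda x: min(
        centroids,
        key=lambda y: sum((a - b) ** 2 for a, b in zip(x, centroids[y])),
    )

def train_first_feature_threshold(examples):
    """Learner 2: threshold on the first feature (midpoint of class means)."""
    pos = [x[0] for x, y in examples if y == 1]
    neg = [x[0] for x, y in examples if y == 0]
    t = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2 if pos and neg else 0.0
    return lambda x: 1 if x[0] >= t else 0

def cross_val_accuracy(learner, examples, k=3):
    """k-fold cross-validation accuracy of a learner on labeled examples."""
    folds = [examples[i::k] for i in range(k)]
    correct, total = 0, 0
    for i in range(k):
        train = [e for j, f in enumerate(folds) if j != i for e in f]
        clf = learner(train)
        for x, y in folds[i]:
            correct += clf(x) == y
            total += 1
    return correct / total

def select_classifier(examples, learners):
    """Pick the learner with the best cross-validation score, then retrain it
    on all examples -- the per-node selection step described in the abstract."""
    best = max(learners, key=lambda L: cross_val_accuracy(L, examples))
    return best(examples)

class Node:
    """One node of a user-defined hierarchy (scene, object, object-part, ...)."""
    def __init__(self, name, children=(), classifier=None):
        self.name, self.children, self.classifier = name, list(children), classifier

    def classify(self, features_by_node):
        # Bottom-up: a non-leaf node fires only if all its children fire;
        # a leaf applies its learned classifier to that node's features.
        if self.children:
            return int(all(c.classify(features_by_node) for c in self.children))
        return self.classifier(features_by_node[self.name])

# Toy training data for a "sky" leaf: (blueness, texture) features, label 1 = sky.
examples = [([0.90, 0.10], 1), ([0.80, 0.20], 1), ([0.85, 0.15], 1),
            ([0.10, 0.90], 0), ([0.20, 0.80], 0), ([0.15, 0.70], 0)]

sky = Node("sky", classifier=select_classifier(
    examples, [train_nearest_centroid, train_first_feature_threshold]))
scene = Node("beach", children=[sky])
```

A region's features are classified at the leaf, and the result propagates up the hierarchy, mirroring the region → perceptual-area → object-part → object order described above.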


Alejandro Jaimes
Shih-Fu Chang

BibTex Reference

@Article{Jaimes-Chang-IJIG2001,
   Author = {Jaimes, Alejandro and Chang, Shih-Fu},
   Title = {Learning Structured Visual Detectors From User Input at Multiple Levels},
   Journal = {International Journal of Image and Graphics (IJIG), Special Issue on Image and Video Databases},
   Note = {Invited Paper},
   Volume = {1},
   Number = {3},
   Pages = {415--44},
   Month = {August},
   Year = {2001}
}

EndNote Reference

Get EndNote Reference (.ref)



This document was translated automatically from BibTEX by bib2html (Copyright 2003 © Eric Marchand, INRIA, Vista Project).