Perceptual Knowledge Construction from Annotated Image Collections

Ana B. Benitez and Shih-Fu Chang

Abstract

This paper presents and evaluates new methods for extracting perceptual knowledge from collections of annotated images. The proposed methods include automatic techniques for constructing perceptual concepts by clustering the images based on visual and text feature descriptors, and for discovering perceptual relationships among the concepts based on descriptor similarity and statistics between the clusters. The two main contributions of this work are the support and evaluation of several techniques for visual and text feature descriptor extraction, visual and text feature descriptor integration, and data clustering in the construction of perceptual concepts; and the proposal of novel ways to discover perceptual relationships among the concepts. Experiments show that useful knowledge can be extracted from both visual and text feature descriptors, that the two kinds of descriptors are highly independent, and that integrating both kinds of descriptors can yield better performance than using either kind alone.
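As a rough illustration of the pipeline the abstract describes, the following is a minimal sketch of constructing perceptual concepts by clustering images on integrated visual and text feature descriptors, and deriving a simple inter-cluster relationship statistic. The specific choices here (TF-IDF text descriptors, early fusion by concatenating L2-normalized vectors, k-means clustering, and centroid cosine similarity) are placeholder assumptions for illustration, not the particular descriptors, integration schemes, or clustering algorithms evaluated in the paper.

```python
# Illustrative sketch only: descriptors, fusion, and clustering choices
# below are hypothetical stand-ins, not the paper's evaluated methods.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import normalize
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

# Toy annotated image collection: random "visual" descriptors plus captions.
rng = np.random.default_rng(0)
visual = rng.random((6, 8))  # e.g., color/texture feature vectors
captions = ["red sunset beach", "beach sand sea",
            "city skyline night", "night city lights",
            "forest green trees", "trees forest trail"]

# Text feature descriptors from the annotations (TF-IDF as one option).
text = TfidfVectorizer().fit_transform(captions).toarray()

# Descriptor integration: L2-normalize each modality, then concatenate.
fused = np.hstack([normalize(visual), normalize(text)])

# Perceptual concepts as clusters of images in the fused descriptor space.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(fused)

# One simple perceptual relationship: similarity between cluster centroids.
centroids = np.vstack([fused[labels == k].mean(axis=0) for k in range(3)])
print("cluster labels:", labels)
print("centroid similarity:\n", cosine_similarity(centroids).round(2))
```

In this early-fusion setup, normalizing each modality before concatenation keeps either kind of descriptor from dominating the distance computation, which is one common way to integrate heterogeneous feature spaces before clustering.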