


CuZero

Guided Search for Zero-Latency Interaction with Diverse Video Content



Objectives

CuZero is a complete search system that combines a guided query formulation stage and an intuitively organized query space navigation stage. This page highlights a few key points of CuZero, but we encourage you to view the video examples, CuZero publications, or related documents for a more in-depth exploration of ongoing CuZero evaluations.

Query Formulation

CuZero facilitates informed user search with instant feedback of query suggestions based on the user's current query input. CuZero embraces guided user interaction by conveying knowledge about the contents of the indexed database and allowing the user to quickly revise his or her query to best fit the target dataset and the conveyed system knowledge. Leveraging a large library of visual analytics (semantic concepts), CuZero enables a shift from searching with textual query topics to searching with visual concepts. Suggestion feedback from many mapping modalities is executed asynchronously and displayed after each word break. Automatic mapping from query topics to analytic models is accomplished using techniques from lexical (direct name or definition matching, WordNet), statistical (co-occurrence or mutual information on a training set), and data mining (dominant analytic scores after automatic text search) research in information retrieval. During query formulation, a navigation map is dynamically populated and its results are retrieved in the background while the user contemplates other query revisions. Within the navigation map, concept weights are computed dynamically from a layout that is fully determined by the user.
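To make the mapping concrete, the sketch below (in Python, not taken from the CuZero codebase) shows one simplified way a query word could be scored against a small concept library by fusing a lexical name/definition match with a co-occurrence statistic; the concept names, definitions, counts, and fusion weights are all hypothetical toy values.

# Hypothetical sketch of query-word-to-concept suggestion scoring.
# Concept names, definitions, and co-occurrence counts are toy data,
# not the actual Columbia374 / CU-VIREO374 models used by CuZero.

CONCEPTS = {
    "boat_ship": "a vessel for travel on water",
    "road": "a paved way for vehicles",
    "crowd": "a large group of people gathered together",
}

# Toy co-occurrence counts between query words and concept annotations,
# standing in for statistics mined from a training set.
CO_OCCURRENCE = {
    ("harbor", "boat_ship"): 42,
    ("harbor", "crowd"): 3,
    ("traffic", "road"): 51,
}

def lexical_score(word, concept, definition):
    """Direct name or definition match (the lexical mapping modality)."""
    word = word.lower()
    if word in concept.lower().split("_"):
        return 1.0
    return 0.5 if word in definition.lower().split() else 0.0

def statistical_score(word, concept):
    """Normalized co-occurrence of the word with the concept's annotations."""
    total = sum(c for (w, _), c in CO_OCCURRENCE.items() if w == word)
    return CO_OCCURRENCE.get((word, concept), 0) / total if total else 0.0

def suggest(word, top_k=3):
    """Fuse the modalities into a ranked list of concept suggestions."""
    scored = []
    for concept, definition in CONCEPTS.items():
        score = 0.6 * lexical_score(word, concept, definition) \
              + 0.4 * statistical_score(word, concept)
        scored.append((score, concept))
    return sorted(scored, reverse=True)[:top_k]

if __name__ == "__main__":
    # In CuZero, suggestions like these are refreshed asynchronously
    # after each word break in the query box.
    print(suggest("harbor"))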

Intuitive Query Space Navigation

CuZero presents a novel way to quickly combine several query entry points from different modalities, including semantic concepts, visual examples, and other metadata, and facilitates both wide-breadth and in-depth exploration of results. The user is encouraged to freely traverse an intuitively visualized multi-dimensional query space to select the best permutations of different search criteria. During this traversal, the search results for each permutation are instantly updated on the screen. A continuous scrolling technique dynamically retrieves more result images in the main browsing window as the user approaches the end of the cached results, eliminating the conscious task of clicking 'next' and waiting for more results. Passive observations about results that were seen but not explicitly marked are recorded to assist in active learning and query expansion procedures.
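As a hypothetical illustration of how such a traversal could be scored (this is not CuZero's actual implementation), the Python sketch below converts a cursor position on a 2-D navigation map into inverse-distance concept weights and re-ranks cached per-concept shot scores on the fly; the anchor layout and scores are invented for the example.

import math

# Hypothetical anchor positions of three query concepts on a 2-D navigation
# map, plus toy per-concept relevance scores for a handful of shots.
ANCHORS = {"boat_ship": (0.0, 0.0), "road": (1.0, 0.0), "crowd": (0.5, 1.0)}
SHOT_SCORES = {
    "shot_001": {"boat_ship": 0.9, "road": 0.1, "crowd": 0.4},
    "shot_002": {"boat_ship": 0.2, "road": 0.8, "crowd": 0.3},
    "shot_003": {"boat_ship": 0.5, "road": 0.5, "crowd": 0.9},
}

def concept_weights(cursor, anchors):
    """Weight each concept by inverse distance from the cursor position."""
    raw = {}
    for concept, (ax, ay) in anchors.items():
        dist = math.hypot(cursor[0] - ax, cursor[1] - ay)
        raw[concept] = 1.0 / (dist + 1e-6)
    total = sum(raw.values())
    return {c: w / total for c, w in raw.items()}

def rank_shots(cursor):
    """Rank shots by the weight-fused score at the current cursor position."""
    weights = concept_weights(cursor, ANCHORS)
    fused = {
        shot: sum(weights[c] * s for c, s in scores.items())
        for shot, scores in SHOT_SCORES.items()
    }
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    # As the user drags across the map, the ranking is re-fused instantly
    # from the cached per-concept scores, with no new server round trip.
    print(rank_shots((0.2, 0.1)))   # cursor near the boat_ship anchor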

Technical Evaluations

CuZero was a contender in several performance-based evaluations and technical demonstrations, including the TRECVID 2008 interactive search task, the VideOlympics 2008 (winner of the "most informative query interface" award), ICASSP 2009, and CVPR 2009. By late 2008, CuZero had indexed and searched over 345 hours of broadcast news video, 285 hours of documentary video, 10 hours of UAV surveillance video, and 14,000 geo-tagged Flickr images. Wherever possible, CuZero was constructed to scale to thousands of concept models, indexing modalities, and feature representations. Poised for future environments like mobile platforms, CuZero is also modular, pipelined, and distributed, facilitating the addition of new models and functions.

CuZero Demo & Presentations

This video was shown alongside a live demonstration at CVPR 2009. It includes numerous demonstrations of query formulation and result exploration over several of the datasets evaluated with CuZero.

 

Click on the image below to start the demo video...

 

This video is a narrated discussion of the CuZero system. A brief background and in-line recorded demonstrations explain CuZero's two main thrusts: guided query formulation and query space navigation. An extended technical presentation of CuZero was given at ACM MIR 2008 and is available as a PDF.

 

Click on the image below to start the overview video...


 
 

People

The CuZero system was first conceived in the summer of 2007 and developed by Eric Zavesky and Shih-Fu Chang in 2007-2008. CuZero has its origins in the CuVid video search engine; parts of the back-end system use revisions developed with Lyndon Kennedy to implement a concept-based video query module. It also incorporates work by Akira Yanagawa, Yu-Gang Jiang, and Winston Hsu, who contributed to the development of classifiers such as the Columbia374 and the improved CU-VIREO374 models for detecting semantic concepts in videos. The interface and asynchronous design of CuZero were implemented by Eric Zavesky, circa 2008.

 

Publications

E. Zavesky, S.-F. Chang. CuZero: Embracing the Frontier of Interactive Visual Search for Informed Users. In ACM Multimedia Information Retrieval, Vancouver, British Columbia, Canada, October 2008. [PDF]

S.-F. Chang, L. Kennedy, E. Zavesky. Columbia University's Semantic Video Search Engine. In ACM International Conference on Image and Video Retrieval, Amsterdam, Netherlands, July 2007. [PDF]

A. Yanagawa, S.-F. Chang, L. Kennedy, W. Hsu. Columbia University's Baseline Detectors for 374 LSCOM Semantic Visual Concepts. Columbia University ADVENT Technical Report #222-2006-8, March 20, 2007. [PDF]

Y.-G. Jiang, A. Yanagawa, S.-F. Chang, C.-W. Ngo. CU-VIREO374: Fusing Columbia374 and VIREO374 for Large Scale Semantic Concept Detection. Columbia University ADVENT Technical Report #223-2008-1, August 2008. [PDF]

 

Related Projects

Columbia374: Columbia University's Baseline Detectors for 374 LSCOM Semantic Visual Concepts

CU-VIREO374: Updated LSCOM classifiers from Columbia University and VIREO (Video Retrieval Group at City University of Hong Kong).

Visual Islands: Intuitive Browsing of Visual Search Results

Columbia University at TRECVID 2006: Semantic Visual Concept Detection and Video Search

CuVid Video Search Engine 2005: Columbia News Video Search Engine and TRECVID 2005 Evaluation

Image Near-Duplicate Detection by Part-based Learning: Detecting Image Near-Duplicate for Linking Multimedia Content

Automatic Feature Discovery in Video Story Segmentation

 
