Content-based Utility Prediction & Video Adaptation


 

Summary

This project focuses on Universal Multimedia Access (UMA) and the development of video adaptation methods that meet the diverse requirements of different terminals, networks, and user interests. By video adaptation, we refer to the various possible schemes for changing the representation and coding of a video stream, such as its resolution, temporal rate, bandwidth, or duration. We envision an architecture in which such adaptation processes can be embedded in an intermediate proxy or at the server, and can be performed in real time to support live video applications.
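
As a rough illustration of one adaptation dimension (and not the project's actual code; VideoSegment and drop_frames are hypothetical names), temporal-rate reduction of an encoded segment might be sketched as follows:

    # Hypothetical sketch of temporal adaptation (frame-rate reduction).
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class VideoSegment:
        frames: List[bytes]     # encoded frames of one segment
        frame_rate: float       # frames per second
        bitrate_kbps: float     # approximate bandwidth of the segment

    def drop_frames(seg: VideoSegment, keep_every: int) -> VideoSegment:
        """Keep every k-th frame, lowering the frame rate and
        (roughly proportionally) the required bandwidth."""
        kept = seg.frames[::keep_every]
        return VideoSegment(
            frames=kept,
            frame_rate=seg.frame_rate / keep_every,
            bitrate_kbps=seg.bitrate_kbps * len(kept) / max(len(seg.frames), 1),
        )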

Challenging issues in realizing an efficient adaptation system include the following:

  • provision of efficient algorithms and implementations for the different adaptation dimensions
  • understanding how different adaptation methods and their combinations affect video quality and the required computation resources, such as power consumption and processing load
  • estimation and optimization of the tradeoffs among the above factors

In this project, we propose a framework in which the adaptation-resource-quality relations are modeled by a utility function. We argue that there exists a strong correlation between such utility functions and the content characteristics: video clips sharing similar characteristics (e.g., objects, scenes, motion) also share similar utility functions.
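
To make the idea concrete, a minimal sketch (with made-up numbers and hypothetical operator names, not measured results) might represent a utility function as a mapping from each adaptation operator to the bitrate it produces and the quality achieved at that bitrate, and then pick the best feasible operator under a bandwidth budget:

    # Hypothetical sketch of the adaptation-resource-quality idea.
    from typing import Dict, Optional, Tuple

    # operator name -> (resource in kbps, quality in dB); values are illustrative.
    UtilityFunction = Dict[str, Tuple[float, float]]

    example_utility: UtilityFunction = {
        "drop_B_frames":          (384.0, 33.5),
        "drop_half_frames":       (256.0, 31.2),
        "drop_25pct_dct_coeffs":  (256.0, 32.0),
        "drop_frames_and_coeffs": (192.0, 29.8),
    }

    def best_operator(utility: UtilityFunction, budget_kbps: float) -> Optional[str]:
        """Pick the highest-quality operator whose bitrate fits the budget."""
        feasible = {op: q for op, (rate, q) in utility.items() if rate <= budget_kbps}
        return max(feasible, key=feasible.get) if feasible else None

    print(best_operator(example_utility, budget_kbps=256.0))  # "drop_25pct_dct_coeffs"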

Specifically, we apply the above utility-function framework and demonstrate a content-adaptive, utility-based MPEG-4 transcoding system. Content features such as complexity and motion are extracted from each incoming video segment and used to predict its utility function with pre-trained classifiers. The optimal adaptation operator among all possible options (such as frame dropping and/or coefficient dropping from MPEG-4 sequences) is then selected automatically based on the predicted utility function. Our extensive experiments show very accurate prediction of both the utility function and the optimal operator. More importantly, the whole process of feature extraction, classification, and prediction can be done in real time, without an exhaustive comparison of the different options.
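
The following is a minimal sketch of that pipeline under assumed feature names, classes, and classifier choice (the publications below describe the actual features and classifiers): per-segment features are fed to a pre-trained classifier that predicts a utility-function class, and the best operator for the current bandwidth budget is then looked up from that class.

    # Hypothetical sketch of the prediction pipeline; features, classes,
    # operators, and numbers are illustrative, not the published system.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    # Toy training data: [motion_activity, texture_complexity] -> utility class.
    X_train = np.array([[0.1, 0.2], [0.8, 0.7], [0.2, 0.9], [0.9, 0.1]])
    y_train = np.array([0, 1, 2, 3])
    classifier = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)

    # One representative utility function per class: operator -> (kbps, dB).
    class_utilities = {
        0: {"drop_B_frames": (384.0, 34.0), "drop_half_frames": (256.0, 32.5)},
        1: {"drop_B_frames": (384.0, 30.5), "drop_25pct_dct_coeffs": (256.0, 31.0)},
        2: {"drop_half_frames": (256.0, 33.0), "drop_frames_and_coeffs": (192.0, 30.0)},
        3: {"drop_25pct_dct_coeffs": (256.0, 29.5), "drop_frames_and_coeffs": (192.0, 28.0)},
    }

    def adapt_segment(features, budget_kbps):
        """Predict the segment's utility class, then pick the highest-quality
        operator whose bitrate fits within the bandwidth budget."""
        cls = int(classifier.predict(np.asarray(features).reshape(1, -1))[0])
        feasible = {op: q for op, (rate, q) in class_utilities[cls].items()
                    if rate <= budget_kbps}
        return max(feasible, key=feasible.get) if feasible else None

    print(adapt_segment([0.75, 0.65], budget_kbps=300.0))  # e.g. "drop_25pct_dct_coeffs"

The point of this structure is that the classifier lookup replaces an exhaustive trial of all operator combinations on every segment, which is what makes per-segment operation at transcoding time feasible.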

The system architecture of the utility function prediction is shown in the figure below.


The picture below is a screenshot of our live demo system, which simulates the real-time utility function prediction procedure. It also shows the extracted features, the dynamic network condition, a comparison of the actual and predicted utility functions, and a comparison of the final transcoded video quality.

A detailed description of the project can be found at the project site. We are currently extending this research to incorporate subjective quality modeling and power estimation.

People

Yong Wang          

Shih-Fu Chang

in collaboration with

Dr. Alex Loui of Eastman Kodak

Jae-Gon Kim (Electronics and Telecommunications Research Institute (ETRI), Korea)        

Demo

 

Publication

Y. Wang, J.-G. Kim, and S.-F. Chang, "Content-Based Utility Function Prediction for Real-Time MPEG-4 Transcoding," ICIP 2003, Barcelona, Spain, September 14-17, 2003. [PDF]

J.-G. Kim, Y. Wang, and S.-F. Chang, "Content-Adaptive Utility-Based Video Adaptation," ICME 2003, Baltimore, MD, July 6-9, 2003. [PDF]

J.-G. Kim, Y. Wang, S.-F. Chang, K. Kang, and J. Kim, "Description of Utility Function Based Optimum Transcoding," ISO/IEC JTC1/SC29/WG11 M8319, Fairfax, VA, May 2002. [PDF]

Download

To be available...
