Ching-Yung Lin -- Projects


[ This page is under construction!! ]


PART IV: MULTIMEDIA INFORMATION MANAGEMENT

My main focus in multimedia semantic analysis research is on automatic content learning, detection, and recognition technologies for video data sources, including scenes of various indoor and outdoor activities involving people, meetings, and vehicles, as well as TV news broadcasts. For applications, our goals are to achieve (1) significant improvement in indexing and retrieval performance for video data; (2) autonomous video understanding; (3) ancillary improvement in still image processing; (4) enabling technologies for video data mining, filtering, and selection; and (5) a drastic reduction in the volume of video storage. This research applies both to commercial multimedia data management and to intelligence analysis for security.

To achieve these application objectives, we are investigating fully automatic video indexing based on image, text, and audio content, which enables low-cost video corpus annotation and preparation. We are also developing methods for robust person and text detection and recognition, and for event detection, recognition, and understanding. Applying these semantic analysis techniques, we develop efficient methods for representing content and pursue cross-media content search and extraction.
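As a concrete illustration of one low-level building block of such indexing, the sketch below detects hard shot cuts by comparing color histograms of consecutive frames. This is a minimal, hedged example rather than our actual indexing pipeline: it assumes OpenCV is available, and the threshold value and file name are purely illustrative.

    # Minimal sketch: shot-boundary detection via color-histogram
    # differences, one low-level building block of automatic video
    # indexing. Assumes OpenCV; threshold and file name are illustrative.
    import cv2

    def detect_shot_boundaries(path, threshold=0.5):
        """Return frame indices where the histogram correlation between
        consecutive frames drops below `threshold` (a likely hard cut)."""
        cap = cv2.VideoCapture(path)
        boundaries, prev_hist, idx = [], None, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            hist = cv2.calcHist([hsv], [0, 1], None, [50, 60],
                                [0, 180, 0, 256])
            cv2.normalize(hist, hist)
            if prev_hist is not None:
                # Correlation near 1.0 means visually similar frames;
                # a sharp drop suggests a shot boundary.
                sim = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
                if sim < threshold:
                    boundaries.append(idx)
            prev_hist, idx = hist, idx + 1
        cap.release()
        return boundaries

    print(detect_shot_boundaries("news_broadcast.mp4"))

The detected boundaries give the shot units on which higher-level person, text, and event detection can then operate.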


Multimedia Semantic Detection Demo


Multimedia Search and Retrieval Demo



Multimedia Semantic Adaptation and Summarization


With the growing amount of multimedia content, people increasingly want to view personalized multimedia suited to their usage environments. When using pervasive devices, they generally limit their viewing time on the small displays and minimize the interaction and navigation needed to reach the content. When browsing video on the Internet, they may want only the videos that match their preferences. Because user clients and data sources are heterogeneous, implementing a universally compliant system that fits various usage environments is a real challenge. We proposed a video personalization system comprising three major components: the user client, the database server, and the media middleware. The middleware is powered by a personalization engine and an adaptation engine that optimally produce video summaries based on the MPEG-7 metadata descriptions, MPEG-21 rights expressions, and content adaptability declarations on the server side, together with the MPEG-7 user preferences, MPEG-21 usage environment descriptions, and the user query on the client side.

The major component tools in this system include VideoSue, VideoEd, VideoAnnEx, and Universal Tuner.
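The description above leaves the engines' internals open; as a hedged sketch of the kind of selection step a personalization engine might perform, the example below chooses video segments that maximize total relevance to the user's preferences under the client's viewing-time budget, formulated as a 0/1 knapsack. The Segment fields, scores, and shot names are hypothetical, not VideoSue's actual data model.

    # Minimal sketch of a summary-selection step: pick segments that
    # maximize relevance under a viewing-time budget (0/1 knapsack).
    # All fields, scores, and names below are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Segment:
        label: str        # e.g., an annotation such as "meeting" or "vehicle"
        duration: int     # seconds
        relevance: float  # match score against the user's preferences

    def summarize(segments, budget):
        """Return the subset of segments with maximum total relevance
        whose total duration fits within `budget` seconds."""
        # dp[t] = (best relevance, chosen segments) using at most t seconds
        dp = [(0.0, [])] * (budget + 1)
        for seg in segments:
            for t in range(budget, seg.duration - 1, -1):
                cand = dp[t - seg.duration][0] + seg.relevance
                if cand > dp[t][0]:
                    dp[t] = (cand, dp[t - seg.duration][1] + [seg])
        return dp[budget][1]

    shots = [Segment("anchor intro", 20, 0.3),
             Segment("meeting scene", 45, 0.9),
             Segment("vehicle chase", 30, 0.7),
             Segment("weather", 25, 0.2)]
    for seg in summarize(shots, budget=60):
        print(seg.label, seg.duration, seg.relevance)

In the actual system, the relevance scores would come from matching the MPEG-7 metadata descriptions against the MPEG-7 user preferences and query, and the adaptation engine would further adapt the selected segments to the client's MPEG-21 usage environment.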

(Collaborators: Belle L. Tseng, John Smith)



Last Updated: 01/24/2006