Model-Assisted Coding of Videoteleconferencing Sequences at Low Bit Rates

Alexandros Eleftheriadis1 and Arnaud Jacquin2
1 Department of Electrical Engineering, Columbia University
2 AT&T Bell Laboratories

Proceedings, IEEE International Symposium on Circuits and Systems, London, England, May-June 1994, pp. 3.177-3.180

Abstract

We present a novel and practical way to integrate techniques from computer vision into low bit rate coding systems for video teleconferencing applications. Our focus is on locating and tracking the faces of persons in typical head-and-shoulders video sequences, and on exploiting the face location information in a "classical" video coding/decoding system. The motivation is to enable the system to encode different image areas selectively and to produce psychologically pleasing coded images in which faces are sharper. We refer to this approach as model-assisted coding. We propose a fully automatic, low-complexity algorithm that robustly performs face detection and tracking. A priori assumptions about sequence content are minimal, and the algorithm operates accurately even in cases of occlusion by moving objects. The face location information is exploited by a low bit rate 3D subband-based video coder that uses model-assisted dynamic bit allocation with object-selective quantization. By transferring a small fraction of the total available bit rate from the non-facial area to the facial area, the coder produces images with better-rendered facial features. The improvement was found to be perceptually significant on video sequences coded at 96 kbps with an input luminance signal in CIF format. The technique is applicable to any video coding scheme that allows fine-grain quantizer selection (e.g., MPEG, H.261), and can maintain full decoder compatibility.
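The object-selective quantization described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's coder: it assumes a face bounding box is already available from a detector/tracker, and uses simple uniform quantization with hypothetical step sizes `fine_step` and `coarse_step` to show how bits are shifted from the background to the facial area (a finer step in the face region spends more bits there, rendering it more sharply).

```python
import numpy as np

def quantize(block, step):
    """Uniform quantization of pixel values with step size `step`."""
    return np.round(block / step) * step

def object_selective_quantize(frame, face_box, fine_step=4, coarse_step=16):
    """Quantize the facial region with a finer step than the rest of the
    frame. `face_box` = (top, left, bottom, right) is assumed to come
    from a face detection/tracking stage (not implemented here)."""
    top, left, bottom, right = face_box
    out = quantize(frame, coarse_step)  # coarse quantization everywhere
    out[top:bottom, left:right] = quantize(
        frame[top:bottom, left:right], fine_step  # finer step in the face area
    )
    return out
```

In an actual coder (e.g., H.261 or MPEG), the same idea maps to choosing a smaller quantizer index for the macroblocks or subband coefficients that overlap the detected face region, which is why the scheme can remain fully decoder-compatible.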

Electronic file not available.