Research
Prof. Eleftheriadis is a member of the ADVENT Group at Columbia
University, a university and industry research partnership investigating all
areas of image and video technology.
His research interests are in the area of media representation, with special emphasis on
multimedia software, video signal processing and compression, video
communication systems (including video-on-demand and Internet video), and the
mathematical fundamentals of compression.
The following is a list of projects currently being pursued. For papers
related to the projects below (as well as others), please visit the Publications section.
- Flavor
- Flavor ("Formal Language for Audio-Visual Object Representation") is an
object-oriented programming language targeted for media-intensive
applications. It extends C++ and Java with a typing system that includes
bitstream representation semantics. Flavor is currently
used in the ongoing MPEG-4 standardization activity for the representation of
the specification's bitstream syntax. For more information (including
downloadable software), visit the Flavor web site or the MPEG-4 Systems web site.
Ph.D. Students: Yihan Fang
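To make the idea concrete, here is a small Python sketch (the field names and widths are hypothetical) of the kind of bit-level parsing code that a Flavor class declaration captures declaratively through its bitstream-typed fields:

```python
def read_bits(data, offset, n):
    """Read n bits from a bytes object, MSB first, starting at bit
    offset; return (value, new_offset)."""
    value = 0
    for i in range(n):
        byte = data[(offset + i) // 8]
        bit = (byte >> (7 - (offset + i) % 8)) & 1
        value = (value << 1) | bit
    return value, offset + n

def parse_header(data):
    """Hand-written equivalent of a toy Flavor-style declaration such as
    'class Header { unsigned int(8) marker; unsigned int(3) version; }'
    (a made-up example, not actual MPEG-4 syntax)."""
    offset = 0
    marker, offset = read_bits(data, offset, 8)
    version, offset = read_bits(data, offset, 3)
    return {"marker": marker, "version": version}
```

In Flavor the declaration alone suffices; a translator can then emit the corresponding C++ or Java parsing and generation code.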
- MPEG-4 Systems
- This work examines the architecture and implementation of object-based
audio-visual terminals and communication systems. Under this activity we are
developing MPEG-4 player software, servers, editors, multiplexers,
as well as original MPEG-4 content. The results of this work, in collaboration
with Lockheed Martin and Xbind, have been
demonstrated at the October 1998 MPEG meeting in Atlantic City, as well as at
the Electronic Imaging 99 Conference in San Jose in February 1999. In
both cases the demonstration involved streamed transmission of MPEG-4 content
from a server, over a satellite, with playback on a Windows PC.
Ph.D. Students: Aizaz Akhtar, Lai-Tee Cheok, Hari Kalva, Hong Shi
- Spatio-Temporal Model-Assisted and Activity-Assisted Texture and Shape Coding
- This project extends our prior work in the area of model-assisted
coding to include both the spatial and temporal dimensions. Model-assisted coding utilizes
robust automated techniques to detect areas of perceptual importance in video
sequences (e.g., face, eyes, mouth). Detection is followed by intelligent rate
control that allocates more bits to such areas, for increased perceived visual
quality. Spatio-temporal model-assisted coding
applies this technique in both the spatial and temporal dimensions. It thus
allows different areas of a video sequence to have both different spatial
"resolutions" as well as different temporal ones (frame rate). The allocation
is governed by spatial and temporal balance equations. This
technique is particularly suitable for very low bit rate coding
applications (64 Kbps and below). Most recently we extended this work to
rely on activity indicators, rather than a model, which makes the technique
applicable to a much wider range of content. Our extensions cover shape coding
as well. Since the technique only affects the rate controller at the encoder,
it can be used in a fully compatible way with practically all standards,
including MPEG-1/2/4 as well as H.261 and H.263.
Ph.D. Students: Jae-Beom Lee
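The allocation idea can be sketched as follows (Python; the proportional rule and the weight values are illustrative stand-ins for the balance equations mentioned above, not the project's actual formulation):

```python
def allocate_bits(frame_budget, regions):
    """Split a frame's bit budget across regions in proportion to area
    weighted by perceptual importance: detected face/eyes/mouth regions
    get a weight > 1, background a weight of 1."""
    total = sum(r["area"] * r["weight"] for r in regions)
    return {r["name"]: frame_budget * r["area"] * r["weight"] / total
            for r in regions}

# A face covering 20% of the frame, weighted 4x, receives as many bits
# as the remaining 80% of the frame.
budget = allocate_bits(
    10000,
    [{"name": "face", "area": 0.2, "weight": 4.0},
     {"name": "background", "area": 0.8, "weight": 1.0}],
)
```

Because only the encoder's rate controller changes, the resulting bitstream remains standard-compliant, which is why the approach carries over to MPEG-1/2/4, H.261, and H.263 unchanged.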
- Complexity Distortion Theory
- We are developing the foundations of a new theory for media
representation called "Complexity Distortion Theory". It combines the notions
of objects and programmable decoders by merging traditional Kolmogorov
Complexity theory and Rate Distortion theory. It thus completes the circle of
deterministic and stochastic approaches for information representation, by
providing the means to analyze algorithmic representation where distortions
are allowed. We have already proven that the bounds predicted by the new
theory for stochastic sources are identical to those provided by traditional
Rate Distortion theory, and are working towards practical applications of
these results. Of particular interest are problems with resource bounds, i.e.,
limited time or space (memory). The use of a Turing machine at the core of the
problem's formulation provides a natural framework to pose and attack such
problems.
Ph.D. Students: Daby Sow
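In rough outline (the notation below is illustrative, not taken from the project's papers): classical rate distortion theory characterizes the best achievable rate at distortion $D$ as

```latex
R(D) = \min_{p(\hat{x}\mid x)\,:\,\mathbb{E}[d(X,\hat{X})]\le D} I(X;\hat{X}),
```

while a complexity distortion function measures the length $\ell(p)$ of the shortest program $p$ that makes a universal Turing machine $U$ reproduce the source word within distortion $D$:

```latex
C_D(x) = \min\{\,\ell(p) : U(p) = \hat{x},\; d(x,\hat{x}) \le D\,\}.
```

The equivalence result mentioned above says that, for stochastic sources, the asymptotic per-symbol value of the complexity distortion function coincides with $R(D)$.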
- Video Segmentation using Semi-Automatic Techniques
- Segmentation of digital video using semi-automatic techniques.
Ph.D. Students: Huitao Luo
- Depth-Based Video Segmentation
- Segmentation of digital video using depth cameras. We treat the
visual component as a 4-dimensional signal: RGBD. The
inclusion of a high-resolution depth component can play a crucial role for
fine segmentation of visual content, suitable for the extraction of objects
and their subsequent shape coding (e.g., in MPEG-4). The work is facilitated
by recent advances in real-time depth cameras, by Columbia's CAVE
laboratory as well as Eastman Kodak.
Ph.D. Students: Mei Shi
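As a minimal illustration (Python; the fixed depth threshold is an assumption, and a real system would refine the mask using the RGB channels), a high-resolution depth channel makes even naive foreground extraction straightforward:

```python
def segment_by_depth(rgbd_pixels, max_depth):
    """Label each RGBD pixel as foreground (True) if its depth lies
    within max_depth of the camera; the RGB values are carried along
    untouched here, but would drive mask refinement in practice."""
    return [d <= max_depth for (_r, _g, _b, d) in rgbd_pixels]

# One near pixel (0.8 m) and one far pixel (3.5 m), threshold 1.5 m.
mask = segment_by_depth([(10, 20, 30, 0.8), (200, 180, 160, 3.5)],
                        max_depth=1.5)
```

The resulting binary mask is exactly the kind of object shape that MPEG-4 shape coding can then represent.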
- Internet Video [Completed]
- This project utilized a technique called Dynamic Rate Shaping,
which provides a fast procedure for on-the-fly modification of the bit rate of
compressed MPEG-1 and MPEG-2 video to meet prescribed bandwidth
constraints. This project combined rate shaping with intelligent network rate
estimation, to allow extremely smooth playback of video at high frame rates
without overloading IP-based networks. The rate controller utilized TCP flow
control (but without error control) in order to make the video traffic compete
fairly with other traffic sharing the network.
Ph.D. Students: Stephen Jacobs
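A greatly simplified sketch of the shaping step (Python; the greedy rule and the cost figures are illustrative only, not the actual Dynamic Rate Shaping algorithm, which operates directly on the compressed DCT coefficient data):

```python
def shape_rate(coeff_costs, target_bits):
    """Keep coefficient runs, most significant (low-frequency) first,
    until the bit budget estimated by the network rate controller is
    exhausted; the rest are dropped on the fly without re-encoding."""
    kept, used = [], 0
    for idx, bits in coeff_costs:
        if used + bits > target_bits:
            break
        kept.append(idx)
        used += bits
    return kept, used

# Four coefficient runs costing 10, 8, 6, and 5 bits; a 20-bit budget
# admits only the first two.
kept, used = shape_rate([(0, 10), (1, 8), (2, 6), (3, 5)], 20)
```

The budget itself would be refreshed continuously from the TCP-style flow-control estimate described above, so the shaped stream tracks the bandwidth the network can actually deliver.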
Doctoral Students
The following doctoral students are currently involved in these
projects. Several MS and undergraduate students also participate in our
research activities.
Aizaz Akhtar
Yihan Fang
Hiroshi Ito (Mitsubishi, on leave)
Hari Kalva
Jae-Beom Lee
Huitao Luo
Hong Shi (Bell Atlantic)
Mei Shi
Daby Sow
Recent Ph.D. Graduates
Steve Jacobs (Kodak Fellow), 5/1998.
A. Eleftheriadis,
[email protected]
03/04/99