In this project, we seek to develop and demonstrate a platform for personalized television news to replace the traditional one-broadcast-fits-all model. We forecast that next-generation video news consumption will be more personalized, device-agnostic, and drawn from many different information sources. The technology for our project represents a major step in this direction, providing each viewer with a personalized newscast built from the stories that matter most to them. We believe that such a model can deliver a vastly superior user experience while providing fine-grained analytics to content providers. While personalized viewing is increasingly popular for text-based news, personalized real-time video news streams remain a critical missing technology.
Our personalized news platform will analyze readily available user data, such as recent viewing history and social media profiles. Suppose the viewer has recently watched the Republican presidential candidates' debate held in Arizona, an interview with a candidate's campaign manager, and another interview with the candidate himself. The debate and the candidate's interview are "liked" by the viewer and several friends on Facebook. This evidence points to a high likelihood that a future video story about the Republican presidential race will interest the viewer. The viewer's personalized news stream will then feature high-quality, highly relevant stories from multiple channels covering the latest developments in the race.
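As a rough illustration of this kind of evidence combination (the function name, topic labels, and weights below are hypothetical, not our production algorithm), a candidate story could be scored by its topic overlap with the viewer's watch history, with an extra boost for topics the viewer or their friends have "liked":

```python
from collections import Counter

def interest_score(story_topics, watch_history, liked_topics,
                   history_weight=1.0, like_weight=2.0):
    """Score a candidate story by overlap with the viewer's history.

    story_topics:  topics detected in the candidate story
    watch_history: list of topic lists, one per recently watched video
    liked_topics:  topics of videos "liked" by the viewer or friends
    """
    # Count how often each topic appears across recently watched videos.
    history_counts = Counter(t for topics in watch_history for t in topics)
    score = 0.0
    for topic in story_topics:
        score += history_weight * history_counts[topic]
        if topic in liked_topics:
            score += like_weight
    return score

# The viewer from the example above: a debate, a campaign-manager
# interview, and a candidate interview; two of these were "liked".
history = [["gop-primary", "arizona-debate"],
           ["gop-primary", "campaign"],
           ["gop-primary", "interview"]]
likes = {"gop-primary", "arizona-debate", "interview"}

print(interest_score(["gop-primary", "polling"], history, likes))  # 5.0
print(interest_score(["weather"], history, likes))                 # 0.0
```

A real system would of course use richer signals and learned weights; the sketch only shows how viewing history and social "likes" can jointly raise a story's rank.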
Our framework is rooted in the goal of putting the "smart" behind "Smart TV." In our prototype, a computer and monitor simulate the combined hardware of a television and set-top box. A distributed cloud computing service platform deployed at both the Columbia and Stanford campuses serves as the backbone architecture, and arrays of TV tuners act as the video stream collection agents for the platform. The system's computing paradigm centers on integrating multi-modal data components as well as metadata: our content-informed framework leverages a wide range of data forms, including video, images, audio, text, closed captions, and social media. The final prototype will feature a recommendation and summarization system with near-real-time video analysis based on multi-modal topic segmentation and linking, multimedia indexing, and immersive user interfaces.
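To make the indexing idea above concrete, here is a minimal sketch (with hypothetical data structures and topic labels; the real system's representations are far richer) of how time-aligned story segments carrying evidence from several modalities might be indexed by topic for retrieval:

```python
from dataclasses import dataclass, field

@dataclass
class StorySegment:
    """A time-aligned news story with evidence from several modalities."""
    start: float                 # seconds into the broadcast
    end: float
    caption_text: str = ""       # closed-caption transcript for the segment
    visual_concepts: list = field(default_factory=list)  # detector outputs
    topics: list = field(default_factory=list)           # linked topic labels

def index_segments(segments):
    """Build a simple inverted index: topic -> segments about that topic."""
    index = {}
    for seg in segments:
        for topic in seg.topics:
            index.setdefault(topic, []).append(seg)
    return index

segments = [
    StorySegment(0, 95, "coverage of the Arizona debate",
                 ["podium", "crowd"], ["gop-primary"]),
    StorySegment(95, 180, "weather across the Northeast",
                 ["weather-map"], ["weather"]),
]
index = index_segments(segments)
print([(s.start, s.end) for s in index["gop-primary"]])  # [(0, 95)]
```

With such an index, the recommendation layer can pull every segment linked to a topic of interest across channels, which is what assembling a personalized newscast requires.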
To achieve this Personalized Television News system, our bicoastal team leverages extensive research experience in story segmentation, visual concept detection, topic linking, news video augmentation, and mobile visual search. Our team also possesses a solid record of building functional prototypes that demonstrate our research visions.
We are excited to be pushing the frontier of this emerging technology and grateful for the support of the Brown Institute of Media Innovation. Please check out the rest of the site for more information!