Predicting User Dissatisfaction with Internet Application Performance at End-Hosts
Start Time: 11:00am
End Time: 12:00pm
Speaker: Prof. Renata Teixeira
From: LIP6, CNRS
Location: CS Open Meeting Area (CSB 477)
Hosted by: Columbia Joint CS/EE Networking Seminar
Abstract: Have you ever had your calls dropped on Skype? Have you ever had a video on YouTube freeze up on you? Network disruptions and dynamic network conditions can adversely impact end-user experience. These issues can frustrate users who are oblivious to the underlying causes, but have to deal with the resulting degradations. In an ideal world, user devices would have the capability to automatically detect and troubleshoot network performance problems, or to provide contextual feedback to users when fixes are not available. There has been much recent interest in automated diagnosis on user laptops and desktops. One interesting aspect of performance diagnosis that has received little attention is the user perspective on performance.
In this talk, we present HostView, a measurement tool that collects network traffic headers and related information at end-hosts. Importantly, HostView includes mechanisms for users to rate their perceived network conditions, a key departure from previous work in the area. We discuss the design tradeoffs in building HostView (overhead, privacy, and user annoyance) and the challenges in collecting such data. Then, we present our efforts in developing predictors of user dissatisfaction with Internet application performance. We train these predictors using network performance data annotated with user feedback collected with HostView from the machines of 19 users. The main challenges of modeling user dissatisfaction with network performance come from the scarcity of user feedback and the fact that poor performance episodes are rare. We develop a methodology to build training sets in the face of these challenges. Then, we show that predictors based on non-linear support vector machines achieve higher true positive rates than predictors based on linear models. Our predictors consistently achieve true positive rates above 0.9. Finally, we quantify the benefits of building per-application predictors over building general predictors that try to anticipate user dissatisfaction across multiple applications.
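The setup described above, rare positive examples (poor-performance episodes) and a non-linear SVM classifier, can be illustrated with a minimal sketch. This is not the authors' pipeline or data: the features, class balance, and scikit-learn usage here are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the HostView study's pipeline):
# train non-linear vs. linear SVMs on a synthetic, imbalanced dataset
# mimicking rare "dissatisfied" episodes, and compare true positive rates.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical 2-D features (e.g. RTT, loss rate), 950 "satisfied" vs. 50 "dissatisfied"
X_good = rng.normal(loc=0.0, scale=1.0, size=(950, 2))
X_bad = rng.normal(loc=3.0, scale=1.0, size=(50, 2))
X = np.vstack([X_good, X_bad])
y = np.array([0] * 950 + [1] * 50)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

def true_positive_rate(model):
    """Fraction of true 'dissatisfied' test samples the model catches."""
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    positives = y_te == 1
    return float((pred[positives] == 1).mean())

# class_weight="balanced" reweights the loss to compensate for class rarity
tpr_rbf = true_positive_rate(SVC(kernel="rbf", class_weight="balanced"))
tpr_lin = true_positive_rate(SVC(kernel="linear", class_weight="balanced"))
print(f"non-linear SVM TPR: {tpr_rbf:.2f}, linear SVM TPR: {tpr_lin:.2f}")
```

The `class_weight="balanced"` option is one common way to handle the class imbalance the talk highlights; the talk's own methodology for constructing training sets may differ.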
Speaker Bio: Renata Teixeira is currently a researcher at the Laboratoire d'Informatique de Paris 6, the Computer Science Department of Université Pierre-et-Marie-Curie. She completed her Ph.D. at the University of California San Diego in 2005. During her doctoral studies, she split her time between San Diego and AT&T Labs - Research in New Jersey, where she analyzed the impact of intradomain routing on BGP. Previously, she worked on characterizing path diversity in IP networks. Prior to her studies in the US, Renata was a student at the Computer Network Research Group at the Federal University of Rio de Janeiro, Brazil (GTA/UFRJ - Grupo de Teleinformática e Automação da UFRJ), where she completed her B.Sc. in Computer Science in 1997 and her M.Sc. in Electrical Engineering in 1999.
Note: To get to the CS Open Area, enter Mudd on the 4th floor and enter the CS Department to the right of the elevators. Walk through the hall until you come to an intersection with a staircase. Turn left and go to the end of the hall (don't go upstairs).