Electrical Engineering and Computer Engineering Master’s Students Present Their Work at EE/CE Project Expo ’18
The Columbia EE/CE Master’s Student Projects Expo ’18 took place on December 6, in the Low Library Rotunda. Seventy-nine students and 46 teams presented projects ranging from smart bento boxes to real-time emotion monitoring. Dean Mary Cunningham Boyce stopped by the expo to view the projects.
Students developed their projects in the Internet of Things (IoT) course, taught by Professor Fred Jiang; the Heterogeneous Computing for Signal and Data Processing course, taught by Professor Zoran Kostic; and the Optical Systems course, taught by Professor Christine Hendon. Other MS students presented independent research projects. Hendon, Jiang, Kostic, and Professor James Teherani served as judges.
“The student project expo,” says Jiang, “gives students an opportunity to present their ideas and class projects to others in a structured manner, as well as to learn about the projects of other classes in the department. Among students in my IoT class, I’ve noticed a greater emphasis on intelligent devices and data analytics for the home and for health and fitness.”
“The MS student expo,” says Teherani, “showcases the incredible capabilities of our students. So many advanced technologies were squeezed into the projects that the expo in Low Library felt more like a tour of a futuristic tech incubator.”
Winning first prize was Spooony–Your Smart Spoon, by Jan-Felix Schneider, Tvisha Gangwani, and Sing Pou Lee. “Our motivation,” says Gangwani, “was the realization that many people who want to eat healthfully try to track what they eat. But this requires keying in food consumption on an app or website. We wanted to come up with a solution that was convenient and practical, as well as fun.”
Using a variety of sensors and image-recognition technology, Spooony detects what is placed on it, then sends the images to a cloud server for processing. Spooony can report the number of bites taken and the number of calories consumed. A personal dashboard, available on computer and cell phone, analyzes and displays data on the user’s eating habits.
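One piece of that pipeline, counting bites, can be approximated from load-cell weight readings alone: each sharp drop in the measured weight suggests a spoonful was eaten. The sketch below is illustrative only, with a made-up `count_bites` helper and an arbitrary fixed threshold, not Spooony's actual algorithm:

```python
def count_bites(weights, drop_threshold=5.0):
    """Count bites as sharp drops between successive load-cell
    readings (in grams): whenever the spoon's measured weight falls
    by more than drop_threshold from one sample to the next, assume
    a bite was taken. Threshold and logic are illustrative
    assumptions, not Spooony's actual method."""
    bites = 0
    for prev, cur in zip(weights, weights[1:]):
        if prev - cur > drop_threshold:
            bites += 1
    return bites
```

A real device would also debounce the signal and ignore drops caused by refilling the spoon, but the core idea is this simple threshold test.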
“As we raced against time,” says Lee, “we had to wrestle with the dilemma of whether to break something that was already working in order to enhance it further. We had to decide which calculated risks were worth taking.”
“The diversity of our backgrounds and viewpoints,” says Schneider, “made our ideas, decisions, and implementation more robust and complete. We learned to respect each other's opinions and ideas and to appreciate each other's strengths. This was as valuable a takeaway as the technical learning.”
The two second prize-winning projects were Disparity Map Creation on a GPU, by Abhyuday Puri and Rahul Subbiah, and Zoom Microscope Based on Focus-Variable Lens, by Rui Chen.
Disparity Map Creation on a GPU quantifies the shift between the two images of a stereo camera, then parallelizes the algorithm on a graphics processing unit (GPU), so the camera can run in real time. Potential applications include 3D model construction and robot navigation.
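The underlying computation, before any GPU parallelization, is classic block matching: for each pixel in the left image, search over horizontal shifts for the right-image patch with the lowest sum of absolute differences (SAD). A naive single-threaded NumPy sketch, purely illustrative of the technique (the team's GPU version would parallelize the per-pixel loops):

```python
import numpy as np

def disparity_map(left, right, block=5, max_disp=16):
    """Naive block-matching stereo: for each pixel in the left image,
    find the horizontal shift (disparity) whose right-image patch
    minimizes the sum of absolute differences (SAD)."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best, best_d = np.inf, 0
            for d in range(max_disp):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                sad = np.abs(patch.astype(np.int32)
                             - cand.astype(np.int32)).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp
```

Because every pixel's search is independent, the algorithm maps naturally onto a GPU, which is what makes real-time operation feasible.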
A microscope can magnify an object clearly only within a narrow depth of field, too shallow to accommodate a movable lens group. The Zoom Microscope Based on Focus-Variable Lens instead uses a lens filled with liquid and bounded by a thin film. When different voltages are applied to electrodes on the sides of the lens, the surface radius, and hence the focal length, changes; the image depth can thus be controlled by varying the voltage.
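The link between surface radius and focal length is the textbook lensmaker's relation. For a thin lens with one flat side and one curved side of radius $R$, filled with a liquid of refractive index $n$, it reduces to

\[
\frac{1}{f} = (n-1)\left(\frac{1}{R_1} - \frac{1}{R_2}\right)
\;\;\xrightarrow{\,R_2 \to \infty\,}\;\;
f \approx \frac{R}{n-1},
\]

so electrically decreasing $R$ (a more strongly curved film) shortens the focal length $f$. This is a standard thin-lens approximation, not a figure from Chen's design.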
The three third prize-winning projects were Smart Baby Crib, by Yiqi Sun, Yixin Man, and Bingyao Shi; Improvement for the Performance of Wide-FOV Lens System, by Haiqiu Yang; and Voice Pathology Audio Detection with Deep Learning, by Ylin Lyu and Zixiao Zhang.
Smart Baby Crib uses non-contact Wi-Fi sensing and a camera to monitor the amplitude and frequency of a sleeping baby’s breathing in real time. It then determines whether the breathing is normal and sends the data to a cell phone.
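One common way to estimate breathing frequency from such a chest-displacement-like signal is to find the dominant spectral peak in the respiration band. The NumPy sketch below illustrates the idea under that assumption; it is not the team's implementation:

```python
import numpy as np

def breathing_rate_bpm(signal, fs):
    """Estimate breathing rate (breaths per minute) by locating the
    dominant peak of the signal's spectrum within the typical
    respiration band of 0.1-1 Hz (6-60 breaths per minute)."""
    signal = signal - np.mean(signal)            # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 1.0)       # respiration band
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0
```

Restricting the search to the respiration band keeps motion artifacts and high-frequency noise from being mistaken for breaths; an alarm condition would then be a rate outside a safe range, or a peak amplitude near zero.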
Improvement for the Performance of Wide-FOV Lens System uses the optical-design program Zemax to simulate and optimize the performance of a 200-degree lens system.
Physicians and therapists who treat voice disorders must subjectively evaluate their patients’ voice quality. Voice Pathology Audio Detection with Deep Learning trains a neural network with data on various voice types. Once Lyu and Zhang have collected sufficient data, they hope to make the detector available on cell phones.
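The general shape of such a pipeline is: extract spectral features from the audio, then feed them to a trained classifier. The toy sketch below uses crude band-energy features and a tiny logistic-regression "network" on synthetic audio; it is a stand-in for the idea, not Lyu and Zhang's model, features, or data:

```python
import numpy as np

def spectral_features(audio, fs, n_bands=8):
    """Crude feature vector: log of the average spectral magnitude in
    n_bands equal-width frequency bands. A stand-in for the richer
    spectral features a real voice-pathology detector would use."""
    mag = np.abs(np.fft.rfft(audio))
    bands = np.array_split(mag, n_bands)
    return np.log1p(np.array([b.mean() for b in bands]))

def train_logistic(X, y, lr=0.1, steps=500):
    """Minimal logistic-regression classifier trained by batch
    gradient descent: the simplest 'neural network' there is."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        grad = p - y                             # cross-entropy gradient
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b
```

A deployed detector would replace both halves with stronger components (e.g., mel-spectrogram features and a deep network), but the train-on-labeled-voices, classify-new-recordings structure is the same.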
For more pictures, visit the EE Flickr page.
—By Ann Rae Jonas