Speaker: Prof. Sungjoo Yoo, Seoul National University
Bit-width needs to be minimized for efficient neural network design in terms of chip area, code size, and, most importantly, energy efficiency. In this talk, we first review state-of-the-art quantization methods in industry and academia and introduce our ideas of outlier quantization and precision highway. We also briefly report our current work on 4-bit linear weight/activation quantization of MobileNet v3.
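To make the talk's central notion concrete, the sketch below shows generic symmetric linear (uniform) quantization of a weight tensor to 4 bits. This is a textbook illustration of linear quantization, not the speaker's specific method; the per-tensor max-absolute-value scale is an assumption for illustration.

```python
import numpy as np

def linear_quantize(x, num_bits=4):
    """Symmetric linear (uniform) quantization to num_bits.

    A generic sketch, not the speaker's exact scheme: the scale is
    chosen per-tensor from the maximum absolute value.
    """
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 7 for 4-bit signed
    scale = np.max(np.abs(x)) / qmax        # one scale for the whole tensor
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map integer codes back to approximate float values."""
    return q.astype(np.float32) * scale

# Example: quantize a small weight vector and reconstruct it
w = np.array([0.9, -0.31, 0.02, 0.47], dtype=np.float32)
q, s = linear_quantize(w, num_bits=4)
w_hat = dequantize(q, s)
```

With 4 bits, every weight is stored in one of only 16 integer levels; the reconstruction error per element is bounded by half the scale, which is the kind of trade-off between bit-width and accuracy the talk addresses.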
Sungjoo Yoo (M'00) received the Ph.D. degree from Seoul National University, Seoul, South Korea, in 2000. From 2000 to 2004, he was a Researcher with the SLS Group, TIMA Laboratory, Grenoble, France. From 2004 to 2008, he was a Senior/Principal Engineer with System LSI, Samsung Electronics, Suwon, South Korea. From 2008 to 2015, he was an Assistant/Associate Professor with POSTECH, Pohang, South Korea. In 2015, he joined Seoul National University, Seoul, South Korea. His current research interests include memory/storage subsystems and software/hardware optimizations for deep neural networks, including network compression, quantization, and hardware accelerators.