Speaker: Prof. Siddharth Garg, New York University
Abstract:
How can users trust that their everyday computing platforms (CPUs, GPUs, and ASICs) are performing computations privately and correctly? Users have good reason to be skeptical of the trustworthiness of existing computing systems. For one, most computing hardware is increasingly manufactured off-shore at one of only a few advanced semiconductor foundries. A malicious foundry might pirate and black-market a chip, or even modify its functionality (i.e., insert a hardware Trojan). The first part of this talk will cover my work on how chips can be securely fabricated at untrusted off-shore foundries; the hope is to reap the benefits of advanced manufacturing technology (which might only be available off-shore) without compromising trust. Another reason for skepticism is the move towards edge and cloud computing. Expensive computations are increasingly outsourced to a (potentially untrusted) cloud, particularly modern machine learning computations that rely on deep learning. The second part of this talk will cover my work on how outsourced training of deep neural networks introduces new security vulnerabilities, in particular the threat of backdoored neural networks (or BadNets), and how these vulnerabilities can be mitigated. I will conclude by highlighting ongoing work in my lab on designing robust and secure deep learning accelerators.