EE Seminar: Fundamental limits of modern machine learning and how to get around them
(The talk will be given in English)
Speaker: Dr. Yair Carmon
Stanford University
Monday, December 23rd, 2019
15:00 - 16:00
Room 011, Kitot Bldg., Faculty of Engineering
Fundamental limits of modern machine learning and how to get around them
Abstract
This talk concerns two kinds of fundamental barriers in modern machine learning, one computational and one statistical, and techniques for circumventing them. The computational barriers concern nonconvex optimization: we prove unconditional lower bounds on the oracle complexity of finding stationary points using (stochastic) gradient methods. To bypass these lower bounds, we develop a “convex until proven guilty” principle that leverages additional smoothness available in many problems. Combining it with classical momentum techniques, we obtain an accelerated method for nonconvex optimization.
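As a concrete reference for the quantities in this paragraph, the sketch below runs classical (heavy-ball) momentum on a smooth nonconvex function and stops at an eps-stationary point, i.e. a point x with ||grad f(x)|| <= eps. It is a minimal illustration of the setting, with a hypothetical objective and hyperparameters; it is not the “convex until proven guilty” method described in the talk.

```python
# Minimal, hypothetical sketch: heavy-ball momentum on a smooth nonconvex
# function, stopping at an eps-stationary point. Illustrative only; NOT the
# accelerated method from the talk.
import numpy as np

def momentum_descent(grad, x0, lr=0.01, beta=0.9, eps=1e-3, max_iters=100_000):
    """Run heavy-ball gradient descent until ||grad f(x)|| <= eps."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for t in range(max_iters):
        g = grad(x)
        if np.linalg.norm(g) <= eps:   # eps-stationarity reached
            return x, t
        v = beta * v - lr * g          # classical momentum update
        x = x + v
    return x, max_iters

# Example: a smooth nonconvex objective f(x) = x1^2 + cos(x2)
grad_f = lambda x: np.array([2 * x[0], -np.sin(x[1])])
x_stat, iters = momentum_descent(grad_f, x0=[3.0, 1.0])
print(f"eps-stationary point {x_stat} after {iters} iterations")
```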
The statistical barriers concern the sample complexity of adversarially robust learning. We prove that unlabeled data allows us to circumvent an information-theoretic gap between robust and standard classification in a Gaussian model. Our analysis directly leads to a general robust self-training procedure; we use it to improve state-of-the-art performance on the challenging and extensively studied CIFAR-10 adversarial robustness benchmark by up to 7 points.
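For readers unfamiliar with self-training, the following is a minimal sketch of the generic robust self-training data flow. The training routines fit_standard and fit_robust are placeholders (e.g., standard training and some form of adversarial training); this illustrates the general idea only, not the exact procedure developed in the talk.

```python
# Minimal, hypothetical sketch of robust self-training: pseudo-label the
# unlabeled pool with a standard classifier, then train a robust model on
# the combined data. All components are placeholders.
import numpy as np

def robust_self_training(x_lab, y_lab, x_unlab, fit_standard, fit_robust):
    """fit_standard / fit_robust are assumed training routines returning a
    model with .predict(); only the data flow is shown here."""
    # Step 1: train a standard (non-robust) classifier on the labeled data.
    teacher = fit_standard(x_lab, y_lab)
    # Step 2: pseudo-label the unlabeled pool with the teacher.
    y_pseudo = teacher.predict(x_unlab)
    # Step 3: robustly train on labeled + pseudo-labeled data.
    x_all = np.concatenate([x_lab, x_unlab])
    y_all = np.concatenate([y_lab, y_pseudo])
    return fit_robust(x_all, y_all)
```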
Short Bio
Yair Carmon is a PhD student at Stanford University working with Prof. John Duchi and Prof. Aaron Sidford. His research focuses on the foundations of machine learning and draws on optimization, algorithm analysis, statistics and information theory. Yair received an M.Sc. from the Technion in 2015, under the supervision of Prof. Shlomo Shamai and Prof. Tsachy Weissman. Between 2009 and 2015 he held several positions as an algorithm engineer and research team leader, working on communications, signal processing and computer vision. His awards include the Stanford Graduate Fellowship, the Numerical Technologies Fellowship, and membership in the Technion Excellence Program.