EE ZOOM Seminar: Can Implicit Bias Explain Generalization?
+ Please enter your ID number in the chat
Join Zoom Meeting
https://zoom.us/j/94007495169
Meeting ID: 940 0749 5169
Speaker: Assaf Dauber
M.Sc. student under the supervision of Prof. Meir Feder
Wednesday, May 13th, 2020 at 15:00
Can Implicit Bias Explain Generalization?
Abstract
The notion of implicit bias, or implicit regularization, has been suggested as a means to explain the surprising generalization ability of modern-day overparameterized learning algorithms. It refers to the tendency of an optimization algorithm toward certain structured solutions that often generalize well. Recently, several papers have studied implicit regularization and identified this phenomenon in various scenarios.
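As a concrete illustration (a minimal sketch, not taken from the talk): the textbook example of implicit bias is that gradient descent initialized at zero on an underdetermined least-squares problem converges to the minimum-Euclidean-norm interpolating solution, even though infinitely many empirical minimizers exist. The dimensions, step size, and iteration count below are arbitrary choices for the demo.

    import numpy as np

    # Underdetermined least squares: more parameters (d) than samples (n),
    # so infinitely many w satisfy Xw = y exactly.
    rng = np.random.default_rng(0)
    n, d = 20, 100
    X = rng.standard_normal((n, d))
    y = rng.standard_normal(n)

    # Plain gradient descent on (1/2n)||Xw - y||^2, initialized at zero.
    w = np.zeros(d)
    lr = 1e-2
    for _ in range(20000):
        w -= lr * (X.T @ (X @ w - y)) / n

    # The minimum-l2-norm interpolating solution, in closed form.
    w_star = np.linalg.pinv(X) @ y

    print("training residual:", np.linalg.norm(X @ w - y))            # ~0: w interpolates
    print("distance to min-norm solution:", np.linalg.norm(w - w_star))  # ~0

Because the iterates never leave the row space of X when started at zero, gradient descent implicitly selects the minimum-norm interpolator; this is the kind of structured solution the notion of implicit bias refers to.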
In this seminar, we revisit this paradigm in arguably the simplest non-trivial setup, and study the implicit bias of Stochastic Gradient Descent (SGD) in the context of Stochastic Convex Optimization. As a first step, we provide a simple construction that rules out the existence of a distribution-independent implicit regularizer that governs the generalization ability of SGD.
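For orientation, one common way to formalize the "distribution-independent implicit regularizer" hypothesis (a paraphrase under standard conventions, not a definition quoted from the talk) is: there would exist a fixed function R such that, on every sample, SGD's output is approximately the empirical risk minimizer preferred by R.

    % Hypothetical formalization (paraphrase, not quoted from the talk);
    % \hat{F}_S is the empirical risk on the sample S = (z_1, \dots, z_n).
    \[
      \hat{F}_S(w) = \frac{1}{n}\sum_{i=1}^{n} f(w; z_i),
      \qquad
      w_{\mathrm{SGD}}(S) \;\approx\; \operatorname*{arg\,min}_{\,w \,\in\, \operatorname*{arg\,min}_{w'} \hat{F}_S(w')} R(w).
    \]

The construction mentioned above rules out the existence of any such sample-independent R that governs SGD's generalization in this setting.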
We then construct a learning problem that rules out a very general class of distribution-dependent implicit regularizers as an explanation of generalization; this class includes strongly convex regularizers as well as non-degenerate norm-based regularizers. Certain aspects of our constructions point to significant difficulties in providing a comprehensive explanation of an algorithm's generalization performance by arguing solely about its implicit regularization properties.