EE Seminar: Tackling Memorization in Generative Adversarial Machines

Monday, October 29, 2018, 15:00 - 16:00
Room 011, Kitot Bldg. (EE Classrooms), Faculty of Engineering

(The talk will be given in English)

 

Speaker:     Dr. Roi Livni
                   Department of Electrical Engineering - Systems, Tel Aviv University

 


 

Tackling Memorization in Generative Adversarial Machines

 

Abstract

 

Generative Adversarial Networks (GANs) are a recent algorithmic framework introduced by Goodfellow et al. '14. In a nutshell, a GAN algorithm receives as input a sample of examples drawn from some unknown distribution and, in turn, generates a synthetic distribution that resembles the true underlying one. For example, consider an algorithm that receives as input some tunes from a specific music genre (e.g., jazz, rock, pop) and then outputs a new, original tune from that genre. Such algorithms are theoretically challenging even to model, and the distinction between algorithms that genuinely generate original new examples and algorithms that perform naive manipulations (or even mere memorization) of the input examples is, in fact, elusive and not well defined.
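For reference, the standard GAN training objective from Goodfellow et al. '14 (notation added here, not part of the abstract) is the minimax game between a generator G and a discriminator D:

\[
\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
\]

Nothing in this objective itself rules out a generator that simply reproduces training examples, which is the memorization issue the talk addresses.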

 

We begin by studying a distribution-learning model, inspired by GANs, in which the learning algorithm has only indirect access to the training set. We then suggest the notion of differential privacy as a possible criterion for non-memorization (or originality) and introduce the notion of DP-original learning. More specifically, we suggest that if the learning algorithm is restricted to be differentially private then it cannot memorize; the intuition is that differential privacy makes it impossible to infer which samples the algorithm was trained on, even given a full description of the generating distribution it outputs. We then show how DP-originality can be obtained within our proposed distribution-learning model (whenever a class is DP-original learnable).
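To make the intuition concrete, here is a minimal sketch (ours, not from the talk) of a classic differentially private mechanism, the Laplace mechanism applied to a mean. The function names and parameters are illustrative; the point is that swapping any single training sample shifts the output distribution only slightly, so the released quantity cannot encode an individual example.

```python
import math
import random


def laplace_noise(scale):
    # Sample from Laplace(0, scale) via the inverse-CDF method.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))


def dp_mean(samples, epsilon, lower=0.0, upper=1.0):
    """Epsilon-differentially private mean of values clipped to [lower, upper].

    Changing one sample moves the clipped mean by at most
    (upper - lower) / n, so Laplace noise with scale
    sensitivity / epsilon gives epsilon-DP.
    """
    n = len(samples)
    clipped = [min(max(x, lower), upper) for x in samples]
    sensitivity = (upper - lower) / n
    return sum(clipped) / n + laplace_noise(sensitivity / epsilon)


# Smaller epsilon means more noise, hence less about any one sample
# can be inferred from the released value.
random.seed(0)
print(dp_mean([0.2, 0.4, 0.9, 0.5], epsilon=1.0))
```

The same bounded-influence argument, lifted from a single statistic to the full generating distribution an algorithm outputs, is what underlies using differential privacy as a non-memorization criterion.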

 

We also present an application in the context of differentially private PAC learning: we show that, for any class, learnability by a private Empirical Risk Minimizer (ERM) is equivalent to the existence of a private sanitizer for that class. This can be seen as a private analog of the equivalence between uniform convergence and learnability in classical PAC learning.

 

Joint work with Olivier Bousquet and Shay Moran.
