EE Seminar: Generative Low-Shot Network Expansion


 

Speaker: Adi Hayat

M.Sc. student under the supervision of Prof. Daniel Cohen-Or

 

Sunday, June 17th, 2018 at 15:30

Room 206, Wolfson Mechanical Bldg., Faculty of Engineering

 

Generative Low-Shot Network Expansion

 

Conventional deep learning classifiers are static in the sense that they are trained on a predefined set of classes, and learning to classify a novel class typically requires re-training. In our work, we address the problem of low-shot network-expansion learning. We introduce a learning framework that expands a pre-trained (base) deep network to classify novel classes when the number of examples for the novel classes is particularly small. We present a simple yet powerful distillation method in which the base network is augmented with additional weights to classify the novel classes, while the weights of the base network are kept unchanged. We term this hard distillation, since the response of the network on the old classes is preserved exactly: the base and the expanded network produce identical outputs on them. We show that, since only a small number of weights needs to be trained, hard distillation excels in low-shot training scenarios. Furthermore, hard distillation incurs no loss of classification performance on the base classes. Finally, we show that low-shot network expansion can be done with a very small memory footprint by using a compact generative model of the base classes' training data, with only negligible degradation relative to learning with the full training set.
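To make the abstract concrete, the sketch below illustrates the two ideas in PyTorch under my own assumptions; it is not the authors' implementation. It assumes the base network ends in a single linear classifier, and the helper names (expand_classifier, fit_class_gaussians, sample_pseudo_features) are hypothetical. New rows are appended to the classifier head for the novel classes while the copied base rows stay frozen, so the pre-softmax responses on the base classes are identical in the base and expanded networks; a per-class Gaussian over feature vectors stands in for the "compact generative model" of the base-class data.

import torch
import torch.nn as nn

def expand_classifier(base_fc: nn.Linear, n_novel: int) -> nn.Linear:
    # Build a head over (n_base + n_novel) classes whose first n_base
    # rows are copies of the base classifier, so base-class logits are
    # identical to the original network's (the feature extractor is
    # assumed frozen as well).
    n_base = base_fc.out_features
    expanded = nn.Linear(base_fc.in_features, n_base + n_novel)
    with torch.no_grad():
        expanded.weight[:n_base] = base_fc.weight
        expanded.bias[:n_base] = base_fc.bias

    def zero_base_rows(grad):
        # Zero the gradient on the copied rows; with plain SGD (no
        # weight decay or momentum) they are then never updated.
        g = grad.clone()
        g[:n_base] = 0
        return g

    expanded.weight.register_hook(zero_base_rows)
    expanded.bias.register_hook(zero_base_rows)
    return expanded

def fit_class_gaussians(features: torch.Tensor, labels: torch.Tensor, n_base: int):
    # A stand-in "compact generative memory" (an assumption; the talk's
    # generative model may differ): per-class mean and diagonal variance
    # of base-class feature vectors.
    return [(features[labels == c].mean(0), features[labels == c].var(0))
            for c in range(n_base)]

def sample_pseudo_features(stats, c: int, n: int) -> torch.Tensor:
    # Draw n pseudo-feature vectors for base class c, to be mixed with
    # the few novel examples during expansion training.
    mean, var = stats[c]
    return mean + var.sqrt() * torch.randn(n, mean.numel())

In use, one would freeze the feature extractor, replace the final layer with expand_classifier(model.fc, n_novel), and train only the new rows on the few novel examples mixed with pseudo-features sampled per base class, so the base classes stay represented without storing their full training set.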
