EE Seminar: Robustness in Deep Learning

Wednesday, May 15th, 2019 at 15:00
Room 011, Kitot Bldg., Faculty of Engineering

Speaker: Daniel Jakubovitz

M.Sc. student under the supervision of Dr. Raja Giryes

 


Robustness in Deep Learning

 

Abstract

Although deep neural networks achieve tremendous performance in a variety of tasks, they are known to suffer from robustness vulnerabilities: they are highly sensitive to changes in the training and test data. Our work examines two aspects of the robustness of deep neural networks and proposes practical techniques to mitigate the corresponding network vulnerabilities.

The first is the vulnerability of deep learning models to adversarial attacks: deep learning models can easily be fooled by small perturbations applied to their input. Our work suggests a novel method for promoting robustness to adversarial attacks in deep neural networks using Jacobian regularization. This method is shown to provide a substantial improvement in DNN robustness against adversarial attacks while maintaining good performance on the original task.
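To make the idea concrete, here is a minimal sketch of Jacobian regularization, not the authors' implementation: the Frobenius norm of the input-output Jacobian is added as a penalty to the classification loss, so the network is encouraged to be less sensitive to small input perturbations. The tiny linear-softmax model and the finite-difference Jacobian are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(W, x):
    # toy "network": a single linear layer followed by softmax
    return softmax(W @ x)

def jacobian_fro_sq(W, x, eps=1e-5):
    # squared Frobenius norm of the input-output Jacobian,
    # approximated here by forward finite differences
    y0 = forward(W, x)
    J = np.zeros((y0.size, x.size))
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps
        J[:, i] = (forward(W, xp) - y0) / eps
    return float((J ** 2).sum())

def regularized_loss(W, x, label, lam=0.1):
    # cross-entropy on the true label plus the Jacobian penalty
    y = forward(W, x)
    ce = -np.log(y[label] + 1e-12)
    return ce + lam * jacobian_fro_sq(W, x)
```

In a real training loop the penalty (or an efficient approximation of it) would be differentiated along with the loss; the sketch only shows what quantity is being penalized.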

The second robustness aspect examined in this work is related to the problem of semi-supervised transfer learning, which is addressed using an information theoretic approach. The cross-entropy loss, which is predominantly used in classification tasks, is decomposed into several information theoretic terms. This leads to a regularization technique called Lautum regularization, which is shown to provide a considerable improvement in performance on the target test set.
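As background for the decomposition above (based on the standard definition of Palomar and Verdú, not on details from the talk itself), Lautum information reverses the arguments of the KL divergence that defines mutual information:

$$
I(X;Y) = D\left(P_{X,Y} \,\middle\|\, P_X \otimes P_Y\right),
\qquad
L(X;Y) = D\left(P_X \otimes P_Y \,\middle\|\, P_{X,Y}\right).
$$

Regularizing with a term of this form encourages a desired statistical relationship between the learned representation and the labels, which is what the decomposition of the cross-entropy loss exposes.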
