EE Seminar: Stacking Neural Networks with Predictive Normalized Maximum Likelihood


 

Speaker: Ido Lublinsky

M.Sc. student under the supervision of Prof. Meir Feder

 

Wednesday, October 30th 2019 at 15:30

Room 011, Kitot Bldg., Faculty of Engineering

 

Stacking Neural Networks with Predictive Normalized Maximum Likelihood

 

Abstract

 

We consider an approach to ensemble learning called Stacked Generalization. Introduced by Wolpert in 1992, it learns an ensemble function by building a new dataset whose features are the outputs of all the learners we wish to combine, paired with the original labels. We present several variants of Stacked Generalization that offer superior performance both in terms of the logarithmic loss and in terms of computational complexity. We also modify the common classification scheme, usually measured by the zero-one loss, into a scheme that can be evaluated with the logarithmic loss, also known as log-loss. We then examine and compare an alternative classification scheme, recently suggested for universal learning of individual data, called Predictive Normalized Maximum Likelihood (pNML). The pNML scheme competes with a genie, a learner that has access to both the training and test data; the genie, however, is restricted to a given hypothesis class and does not know which of the data is the test data. The pNML solution is minimax optimal in the log-loss sense. We further examine the use of the pNML regret as a confidence, or learnability, measure.
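For readers who want a concrete picture of the stacking construction described above, the following is a minimal sketch (not material from the talk) of Stacked Generalization evaluated under the log-loss, using scikit-learn. The particular base learners, the logistic-regression meta-learner, and the Iris data are illustrative assumptions only.

# Minimal sketch of Stacked Generalization with a log-loss evaluation.
# Assumptions (not from the talk): two off-the-shelf base learners, a
# logistic-regression meta-learner, and the Iris dataset.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import cross_val_predict, train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

base_learners = [
    RandomForestClassifier(n_estimators=100, random_state=0),
    LogisticRegression(max_iter=1000),
]

# Level-0: out-of-fold predicted probabilities become the meta-features,
# paired with the original labels (this is the "stacked" dataset).
meta_train = np.hstack([
    cross_val_predict(clf, X_train, y_train, cv=5, method="predict_proba")
    for clf in base_learners
])

# Refit each base learner on the full training set to build test-time features.
for clf in base_learners:
    clf.fit(X_train, y_train)
meta_test = np.hstack([clf.predict_proba(X_test) for clf in base_learners])

# Level-1: the meta-learner combines the base learners' outputs.
meta_learner = LogisticRegression(max_iter=1000)
meta_learner.fit(meta_train, y_train)

# Evaluate the ensemble with the logarithmic loss, as in the abstract.
print("ensemble log-loss:", log_loss(y_test, meta_learner.predict_proba(meta_test)))

For the pNML part of the abstract, the standard form of the pNML probability assignment and its regret for a hypothesis class Theta, written here in LaTeX notation as a reference (the notation is ours, not necessarily the speaker's):

q_{\mathrm{pNML}}(y \mid x; z^N) = \frac{p_{\hat{\theta}(z^N, x, y)}(y \mid x)}{\sum_{y'} p_{\hat{\theta}(z^N, x, y')}(y' \mid x)},
\qquad
\Gamma(x; z^N) = \log \sum_{y'} p_{\hat{\theta}(z^N, x, y')}(y' \mid x),

where \hat{\theta}(z^N, x, y) is the maximum-likelihood hypothesis in the class given the training set z^N together with the candidate test pair (x, y). The regret \Gamma, the log of the normalizer, is the quantity the abstract proposes as a confidence or learnability measure.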
