Ieva Vebraite-Adereth - Ph.D. student under the supervision of Prof. Yael Hanein

Physical Electronics Department Seminar

 

February 6, 2024, 15:00
011 hall, Kitot Building

 

Dr. Gal Vardi - On Implicit Bias and Benign Overfitting in Neural Networks

EE Systems Seminar

January 15, 2024, 15:00
011 hall, Electrical Engineering-Kitot Building

 

(The talk will be given in English)

Speaker:     Dr. Gal Vardi

TTI-Chicago and the Hebrew University

011 hall, Electrical Engineering-Kitot Building

Monday, January 15th, 2024

15:00 - 16:00

On Implicit Bias and Benign Overfitting in Neural Networks

 

Abstract

When training large neural networks, there are typically many solutions that perfectly fit the training data. Nevertheless, gradient-based methods often tend to reach solutions that generalize well, i.e., perform well on test data, and understanding this “implicit bias” has been a subject of extensive research. Surprisingly, trained networks often generalize well even when perfectly fitting noisy training data (i.e., data with label noise), a phenomenon called “benign overfitting”. In the first part of the talk, I will discuss the implicit bias and its implications. I will show how the implicit bias can lead to good generalization performance, but also has negative implications in the context of susceptibility to adversarial examples and privacy attacks. In the second part of the talk, I will discuss benign overfitting and the settings in which it occurs in neural networks.
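A minimal numerical illustration of benign overfitting (my sketch, not material from the talk; all sizes and names below are arbitrary choices for the demo): a heavily overparameterized random-features model can fit noisy labels exactly through the minimum-norm least-squares solution, yet still predict reasonably on test data.

    import numpy as np

    rng = np.random.default_rng(0)

    # n samples in d dimensions, p >> n random ReLU features.
    n, d, p = 50, 10, 2000
    X = rng.standard_normal((n, d))
    w_true = rng.standard_normal(d)
    y = X @ w_true + 0.5 * rng.standard_normal(n)   # labels with noise

    W = rng.standard_normal((d, p)) / np.sqrt(d)    # random first layer
    Phi = np.maximum(X @ W, 0.0)                    # ReLU feature map
    theta = np.linalg.pinv(Phi) @ y                 # minimum-norm interpolator

    X_te = rng.standard_normal((5000, d))
    pred = np.maximum(X_te @ W, 0.0) @ theta

    print("train MSE:", np.mean((Phi @ theta - y) ** 2))       # ~0: fits the noise
    print("test MSE:", np.mean((pred - X_te @ w_true) ** 2))   # need not blow up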

Short Bio

Gal is a postdoctoral researcher at TTI-Chicago and the Hebrew University, hosted by Nati Srebro and Amit Daniely as part of the NSF/Simons Collaboration on the Theoretical Foundations of Deep Learning. Prior to that, he was a postdoc at the Weizmann Institute, hosted by Ohad Shamir, and a PhD student at the Hebrew University, advised by Orna Kupferman. His research focuses on theoretical machine learning, with an emphasis on deep-learning theory.

 

Seminar attendance grants listening credit, based on recording your full name and ID number on the attendance form that will be circulated in the hall during the seminar.

 

Dr. Shira Faigenbaum Golovin - The Power of Machine Learning: Theoretical and Practical Aspects

EE Systems Seminar

January 17, 2024, 15:00
Zoom

Join Zoom Meeting
https://tau-ac-il.zoom.us/j/89608190449?pwd=cGdES3AwNnlQbTBuRWcyQlkyUG50QT09
Meeting ID: 896 0819 0449
Passcode: 885703

(The talk will be given in English)

Speaker:     Dr. Shira Faigenbaum Golovin

Duke University

Wednesday, January 17th, 2024

15:00 - 16:00

The Power of Machine Learning: Theoretical and Practical Aspects

Abstract

Machine learning and deep learning have become indispensable tools in today's technological landscape, playing a pivotal role in revolutionizing various industries. The significance of these fields lies in their ability to compute similarities, learn patterns, and make decisions. In this talk, I will explore two key facets of these technologies. The first involves an analysis of the theoretical aspects of the success of deep neural networks. The second aspect will delve into the power of learning within data-driven applications.

In the effort to quantify the success of neural networks in deep learning and other applications, there is great interest in understanding which functions can be efficiently represented by the outputs of neural networks. By now, a variety of results show that a wide range of functions can be learned, sometimes with surprising accuracy, by these outputs. In this talk, we add to this class by showing that it also includes multiscale functions. Multiscale functions, which are the solutions of refinement equations, are the building blocks of many constructions, including subdivision schemes used in computer graphics, wavelets, and several fractals (some of which can represent parts of natural images). We prove that all multiscale functions can be implemented, up to arbitrarily high precision, by ReLU-based neural networks.
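A concrete special case (my illustration, not taken from the abstract): the piecewise-linear "hat" function both solves a two-scale refinement equation and is exactly a three-neuron ReLU network, so it is multiscale and ReLU-implementable at once. A short numpy check:

    import numpy as np

    def relu(x):
        return np.maximum(x, 0.0)

    def hat(x):
        # The hat function supported on [0, 2], as a 3-neuron ReLU network.
        return relu(x) - 2 * relu(x - 1) + relu(x - 2)

    x = np.linspace(-1.0, 3.0, 4001)
    # Refinement equation: hat(x) = 1/2 hat(2x) + hat(2x - 1) + 1/2 hat(2x - 2)
    rhs = 0.5 * hat(2 * x) + hat(2 * x - 1) + 0.5 * hat(2 * x - 2)
    print(np.max(np.abs(hat(x) - rhs)))   # ~0, up to float rounding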

In the second part of my talk, I will highlight the potential of learning from data acquired in data-driven applications. I will showcase the power of machine learning for acquiring multispectral images of ancient inscriptions from ca. 600 BCE while improving the legibility of the text, followed by the development of image-processing tools for segmenting and comparing the handwriting found on these documents.

Short Bio

Dr. Shira Faigenbaum-Golovin is an Assistant Research Professor at Duke University, working with Prof. Ingrid Daubechies. In 2021, Shira obtained her Ph.D. in Applied Mathematics from Tel Aviv University. While pursuing her Ph.D., Shira contributed to the development of the Image Signal Processor at Intel for approximately eight years. Shira’s research interests are in the areas of data science, image processing, and machine learning.

 

Seminar attendance grants listening credit, based on recording your full name and ID number in the chat.

 

 

Dr. Joachim Neu - Internet-Scale Consensus in the Blockchain Era

EE Systems Seminar

January 10, 2024, 15:00
011 hall, Electrical Engineering-Kitot Building

(The talk will be given in English)

 

Speaker:     Dr. Joachim Neu

Stanford University

011 hall, Electrical Engineering-Kitot Building

Wednesday, January 10th, 2024

15:00 - 16:00

Internet-Scale Consensus in the Blockchain Era

 

Abstract

Blockchains have ignited interest in Internet-scale consensus as a fundamental building block for decentralized applications and services, which promise egalitarian access and robustness to faults and abuse. While the study of consensus has a 40+ year tradition, the new Internet-scale setting requires a fundamental rethinking of models, desiderata, and protocols.

A key challenge is to simultaneously serve users with different security requirements. I will focus on two examples from my work: (1) Some users want uninterrupted service-availability even as protocol participants unexpectedly come and go, while other users demand that perpetrators can be held accountable in case of service-consistency breaches. We show that no traditional protocol can satisfy both simultaneously. To resolve this dilemma, we develop the ebb-and-flow consensus paradigm, which has been adopted in Ethereum (the largest freely-programmable blockchain). (2) Some users wish to harden consistency even at the expense of availability, while others prefer to stick to traditional consensus protocols' equal resilience to consistency and availability attacks. We present the first protocols that allow optimal availability-consistency tradeoff for every user simultaneously.
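To make the two-ledger idea in (1) concrete, here is a toy sketch (my illustration; the actual constructions are far more involved): each node maintains an always-live "available" ledger and an accountable "finalized" ledger that is kept a prefix of it, so safety-favoring users read the finalized ledger while availability-favoring users read the available one.

    # Hypothetical toy model of the two-ledger ebb-and-flow structure.
    class EbbAndFlowNode:
        def __init__(self):
            self.available = []   # e.g., output of a longest-chain protocol
            self.finalized = []   # e.g., output of a BFT finality gadget

        def on_new_block(self, block):
            self.available.append(block)

        def on_finality(self):
            # The gadget only finalizes blocks already in the available
            # ledger, preserving the invariant: finalized is a prefix.
            if len(self.finalized) < len(self.available):
                self.finalized.append(self.available[len(self.finalized)])

    node = EbbAndFlowNode()
    for b in ("b0", "b1", "b2"):
        node.on_new_block(b)
    node.on_finality()
    assert node.available[:len(node.finalized)] == node.finalized
    print(node.available, node.finalized)   # ['b0', 'b1', 'b2'] ['b0']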

Short Bio

Joachim Neu is a PhD candidate at Stanford University advised by David Tse. His research focuses on the science and engineering of Internet-scale consensus as a fundamental building block for decentralized systems, using tools from distributed systems, probabilistic systems analysis, networking and communications, and applied cryptography. While a Master's student at the Technical University of Munich, he worked in information and coding theory. Joachim has received the Protocol Labs PhD Fellowship and the Stanford Graduate Fellowship.

 

Seminar attendance grants listening credit, based on recording your full name and ID number on the attendance form that will be circulated in the hall during the seminar.

 

 

 

 

Ori Zitzer - Multi-Modal Multi-Objective Evolutionary Optimization with Solutions of Variable Length

EE Systems Seminar

January 10, 2024, 14:00
Zoom

Electrical Engineering Systems Zoom Seminar

 

Join Zoom Meeting

https://zoom.us/j/94236001944?pwd=RmI1TWZlWXo5cHdVd2tSZnRWRWdTdz09
Meeting ID: 942 3600 1944
Passcode: 6kzM4R

 

Speaker: Ori Zitzer

M.Sc. student under the supervision of Dr. Amiram Moshaiov

Wednesday, January 10th, 2024, at 14:00

Multi-Modal Multi-Objective Evolutionary Optimization with Solutions of Variable Length

Abstract

Multi-modal optimization aims to provide decision-makers with alternative solutions, possibly near-optimal, rather than just one optimal solution. In recent years, a need has emerged to solve a special kind of multi-modal multi-objective optimization problems (MMOPs) in which solutions belong to decision spaces of various dimensions (i.e., solutions of variable length). We propose a new evolutionary algorithm that solves such problems while saving computing resources compared to existing algorithms.
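To illustrate what "solutions of variable length" means operationally (a hypothetical sketch, not the proposed algorithm), the variation operators must handle parents whose decision vectors have different dimensions:

    import random

    random.seed(0)

    def crossover(p1, p2):
        # One-point crossover with independent cut points, so the child's
        # length need not match either parent's.
        c1 = random.randint(0, len(p1))
        c2 = random.randint(0, len(p2))
        return p1[:c1] + p2[c2:]

    def mutate(sol, sigma=0.1, p_grow=0.1, p_shrink=0.1):
        sol = [g + random.gauss(0.0, sigma) for g in sol]   # perturb genes
        if random.random() < p_grow:
            sol.append(random.gauss(0.0, 1.0))              # add a dimension
        if len(sol) > 1 and random.random() < p_shrink:
            sol.pop(random.randrange(len(sol)))             # drop a dimension
        return sol

    parent_a = [0.3, -1.2, 0.8]            # a 3-dimensional solution
    parent_b = [1.0, 0.5, -0.7, 0.2, 1.4]  # a 5-dimensional solution
    print(mutate(crossover(parent_a, parent_b)))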

 

Seminar attendance grants listening credit, based on recording your full name and ID number in the chat.

 

 

ASIC Digital IP Infrastructure student

Qualifications

  • 3rd-year Computer/Electrical Engineering student.
  • Knowledge of a Unix environment, csh/sh/bash, Python, and Perl - an advantage.

Itay Griniasty - Emergent complex functionality in microscopic machines and computational models

January 15, 2024, 14:00 - 15:00
Faculty of Engineering

 

SCHOOL OF MECHANICAL ENGINEERING SEMINAR
Monday, January 15th, 2024, at 14:00

Wolfson Building of Mechanical Engineering, Room 206

 

Emergent complex functionality in microscopic machines and computational models

Itay Griniasty

Itay Griniasty is a Schmidt AI in Science postdoctoral fellow at Cornell University

 

Systems composed of many interacting elements that collaboratively generate a function, such as meta-material robots, proteins, and neural networks, are notoriously difficult to design.

Such systems elude traditional explicit design methodologies, which rely on composing individual components with specific subfunctions, such as cogs, springs, and shafts, to achieve complex functionality. In part, the problem stems from the fact that there are few principled approaches to the design of emergent functionality. In this talk I will describe progress towards creating such paradigms for two canonical systems. I will first describe how bifurcations of the system dynamics can be used as an organizing principle for the design of functionality in protein-like machines with magnetic interactions. I will then introduce a computational microscope that we have developed to analyze emergent functionality, and its application to machine learning. There we uncovered compelling evidence that the training of neural networks is inherently low-dimensional, suggesting new paradigms for their design.

References

1. T. Yang et al. Bifurcation instructed design of multistate machines. Proceedings of the National Academy of Sciences, 120(34):e2300081120, 2023

2. J. Mao et al. The training process of many deep networks explores the same low-dimensional manifold. arXiv preprint arXiv:2305.01604, 2023.

3. R. Ramesh, et al. A picture of the space of typical learnable tasks. Proc. of International Conference of Machine Learning (ICML), 2023.


 

 

Short bio

Itay Griniasty is a Schmidt AI in Science postdoctoral fellow at Cornell University, studying the design of microscopic and soft machines, non-Newtonian fluids, and computational tools for analyzing deep neural networks and multiphysics simulations.

 

Itay was trained as a mechanical designer in a technological unit of the IDF Intelligence Corps.

He studied mathematics and physics at the Hebrew University for his B.Sc., where his minor thesis led to a long collaboration on developing novel mathematical tools for the integration of nonlinear partial differential equations. He went on to a Ph.D. in physics at the Weizmann Institute, studying the propagation of waves in inhomogeneous media.

 

Itay has been awarded an Amirim merit scholarship for his B.Sc., an Azrieli excellence scholarship for his Ph.D., the Chaim Mida Prize for an excellent Ph.D. student, a Fulbright postdoctoral fellowship (which he declined), and a Schmidt AI in Science fellowship towards his postdoc.

 

Dr. Tomer Galanti (MIT) - Fundamental Problems in AI: Transferability, Compressibility and Generalization

EE Systems Seminar

January 8, 2024, 15:00
Zoom

(The talk will be given in English)

Speaker:     Dr. Tomer Galanti

Postdoctoral Associate at the Center for Brains, Minds, and Machines at MIT

011 hall, Electrical Engineering-Kitot Building

 

Monday, January 8, 2024

15:00 - 16:00

 

Fundamental Problems in AI: Transferability, Compressibility and Generalization

 

Abstract

 

In this talk, we delve into several fundamental questions in deep learning. We start by addressing the question, "What are good representations of data?" Recent studies have shown that the representations learned by a single classifier over multiple classes can be easily adapted to new classes with very few samples. We offer a compelling explanation for this behavior by drawing a relationship between transferability and an emergent property known as neural collapse. Additionally, we explore why certain architectures, such as convolutional networks, outperform fully-connected networks, providing theoretical support for how their inherent sparsity aids learning with fewer samples. Lastly, I present recent findings on how training hyperparameters implicitly control the ranks of weight matrices, consequently affecting the model's compressibility and the dimensionality of the learned features.
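As a small illustration of the rank point above (my sketch, not the speaker's method; the matrix here is synthetic): the effective rank of a weight matrix can be read off its singular values, and a low effective rank is what makes a layer compressible by a thin low-rank factorization.

    import numpy as np

    def effective_rank(W, tol=0.01):
        # Count singular values above tol times the largest one.
        s = np.linalg.svd(W, compute_uv=False)
        return int(np.sum(s > tol * s[0]))

    rng = np.random.default_rng(0)
    U = rng.standard_normal((512, 4))
    V = rng.standard_normal((4, 512))
    W = U @ V + 1e-3 * rng.standard_normal((512, 512))  # nearly rank-4 matrix

    print(effective_rank(W))  # ~4: compressible to a thin factorization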
 
Additionally, I will describe how this research integrates into a broader research program where I aim to develop realistic models of contemporary learning settings to guide practices in deep learning and artificial intelligence. Utilizing both theory and experiments, I study fundamental questions in the field of deep learning, including why certain architectural choices improve performance or convergence rates, when transfer learning and self-supervised learning work, and what kinds of data representations are learned with Stochastic Gradient Descent.
 
Short Bio
 
Tomer Galanti is a Postdoctoral Associate at the Center for Brains, Minds, and Machines at MIT, where he focuses on the theoretical and algorithmic aspects of deep learning. He received his Ph.D. in Computer Science from Tel Aviv University and served as a Research Scientist Intern at Google DeepMind's Foundations team during his doctoral studies. He has published numerous papers in top-tier conferences and journals, including NeurIPS, ICML, ICLR, and JMLR. His work, titled "On the Modularity of Hypernetworks," was awarded an oral presentation at NeurIPS 2020.
 
Zoom Link:
 

 

