EE Seminar: Learning to see by listening

(The talk will be given in English)

Speaker:   Prof. William T. Freeman
                        Massachusetts Institute of Technology and Google

Monday, May 23rd, 2016
15:00 - 16:00
Room 011, Kitot Bldg., Faculty of Engineering

Learning to see by listening

Abstract
Children may learn about the world by pushing, banging, and manipulating things, watching and listening as materials make their distinctive sounds: dirt makes a thud; ceramic makes a clink. These sounds reveal physical properties of the objects, as well as the force and motion of the physical interaction. We've explored a toy version of such learning-through-interaction by recording audio and video while we hit many things with a drumstick.
We developed an algorithm to predict sounds from silent videos of the drumstick interactions. The algorithm uses a recurrent neural network to predict sound features from videos and then produces a waveform from these features with an example-based synthesis procedure. We demonstrate that the sounds generated by our model are realistic enough to fool participants in a "real or fake" psychophysical experiment, and that the task of predicting sounds allows our system to learn to visually distinguish different materials.
Joint work with Andrew Owens, Phillip Isola, Josh McDermott, Antonio Torralba, and Edward H. Adelson. Paper: http://arxiv.org/abs/1512.08512 (to appear in CVPR 2016).
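
(For readers curious about the example-based synthesis step mentioned in the abstract, here is a minimal sketch in Python/NumPy. It is not the authors' implementation: the feature dimensions, the nearest-neighbor matching, and the overlap-add stitching are simplifying assumptions made only for illustration.)

    import numpy as np

    # All shapes and names below are illustrative assumptions, not the paper's pipeline.
    rng = np.random.default_rng(0)
    n_train, n_frames, feat_dim, snip_len = 500, 30, 64, 2205   # 50 ms snippets at 44.1 kHz

    # "Training set": a sound-feature vector and the waveform snippet recorded with it.
    train_feats = rng.standard_normal((n_train, feat_dim))
    train_snips = 0.1 * rng.standard_normal((n_train, snip_len))

    # Stand-in for the features a recurrent network would predict from the silent video.
    predicted_feats = rng.standard_normal((n_frames, feat_dim))

    def example_based_synthesis(pred, feats, snips):
        """Copy, for each predicted feature vector, the snippet of its nearest
        training example, then overlap-add the snippets into one waveform."""
        hop = snips.shape[1] // 2
        out = np.zeros(hop * (len(pred) - 1) + snips.shape[1])
        window = np.hanning(snips.shape[1])
        for t, p in enumerate(pred):
            nearest = np.argmin(np.linalg.norm(feats - p, axis=1))
            out[t * hop : t * hop + snips.shape[1]] += window * snips[nearest]
        return out

    waveform = example_based_synthesis(predicted_feats, train_feats, train_snips)
    print(waveform.shape)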

Bio: 
William T. Freeman is the Thomas and Gerd Perkins Professor of Electrical Engineering and Computer Science at MIT, and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL) there. He was the Associate Department Head from 2011 to 2014. His current research interests include machine learning applied to computer vision, Bayesian models of visual perception, and computational photography. He received outstanding paper awards at computer vision or machine learning conferences in 1997, 2006, 2009, and 2012, and test-of-time awards for papers from 1990 and 1995. Previous research topics include steerable filters and pyramids, orientation histograms, the generic viewpoint assumption, color constancy, computer vision for computer games, and belief propagation in networks with loops. He is active in the program or organizing committees of computer vision, graphics, and machine learning conferences. He was the program co-chair for ICCV 2005 and for CVPR 2013.

 


EE Seminar: Shape Matching and Mapping using Semidefinite Programming

(The talk will be given in English)

Speaker:   Shahar Kovalsky
                      Department of Computer Science and Applied Mathematics, Weizmann Institute

Monday, May 9th, 2016
15:00 - 16:00
Room 011, Kitot Bldg., Faculty of Engineering

Shape Matching and Mapping using Semidefinite Programming

Abstract
Geometric problems - such as finding corresponding points over a collection of shapes, or computing shape deformation under geometric constraints - pose various computational challenges. I will show that despite the very different nature of these two highly non-convex problems, Semidefinite Programming (SDP) can be leveraged to provide a tight convex approximation in both cases. A different approach is used for each problem, demonstrating the versatility of SDP:
(i) For establishing point correspondences between shapes, we devise an SDP relaxation. I will show that it is a hybrid of the popular spectral and doubly-stochastic relaxations, and is in fact tighter than both (a toy version of the doubly-stochastic relaxation is sketched after this list).
(ii) For the computation of piecewise-linear mappings, we introduce a family of maximal SDP restrictions. Solving a sequence of such SDPs enables the optimization of functionals and constraints expressed in terms of singular values, which naturally model various geometry processing problems.
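
(To make the terminology concrete, the sketch below solves a toy doubly-stochastic relaxation of graph matching with a generic convex solver. It is only background for item (i): the data are random, cvxpy and SciPy are assumed to be available, and this is not the SDP hybrid developed in the talk.)

    import numpy as np
    import cvxpy as cp
    from scipy.optimize import linear_sum_assignment

    # Toy matching instance: B is A with its nodes relabeled by a hidden permutation.
    n = 8
    rng = np.random.default_rng(1)
    A = rng.standard_normal((n, n)); A = (A + A.T) / 2
    perm = rng.permutation(n)
    P_true = np.eye(n)[perm]
    B = P_true.T @ A @ P_true

    # Doubly-stochastic relaxation: replace the permutation matrix by the
    # Birkhoff polytope (nonnegative entries, rows and columns summing to one).
    X = cp.Variable((n, n), nonneg=True)
    constraints = [cp.sum(X, axis=0) == 1, cp.sum(X, axis=1) == 1]
    cp.Problem(cp.Minimize(cp.norm(A @ X - X @ B, "fro")), constraints).solve()

    # Round the relaxed solution back to a permutation (Hungarian algorithm).
    rows, cols = linear_sum_assignment(-X.value)
    print("recovered:", cols, "hidden:", perm)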

Bio: 
Shahar Kovalsky is a PhD student in the Department of Computer Science and Applied Mathematics at the Weizmann Institute of Science, Israel. His main research interests are in numerical optimization, computer graphics and vision, and in particular, applications of convex optimization to geometry processing. Shahar holds a B.Sc. in Mathematics and B.Sc. and M.Sc. in Electrical Engineering from Ben-Gurion University.

 


EE Seminar: Visual Perception through Hyper Graphs

(The talk will be given in English)

Speaker:   Prof. Nikos Paragios
                       CentraleSupelec, Inria, University of Paris-Saclay,  http://cvn.ecp.fr/personnel/nikos/

Wednesday, May 4th, 2016
15:00 - 16:00
Room 011, Kitot Bldg., Faculty of Engineering

Visual Perception through Hyper Graphs

Abstract
Computational vision, visual computing, and biomedical image analysis have made tremendous progress over the past decade. This is mostly due to the development of efficient learning and inference algorithms, which allow better and richer modeling of visual perception tasks.
Hyper-graph representations are among the most prominent tools for addressing such perception tasks, by casting perception as a graph optimization problem. In this talk, we briefly introduce these representations, discuss their strengths and limitations, provide appropriate strategies for their inference and learning, and present their application to a variety of visual computing problems.
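
(As a minimal illustration of "casting perception as a graph optimization problem", the sketch below solves a tiny chain-structured labeling problem exactly by min-sum dynamic programming. This is textbook material for context only; the talk concerns far richer hyper-graph models and their inference and learning.)

    import numpy as np

    def chain_min_sum(unary, pairwise):
        """Exact MAP labeling of a chain MRF by dynamic programming (min-sum).
        unary: (T, L) per-node label costs; pairwise: (L, L) transition costs."""
        T, L = unary.shape
        cost = unary[0].copy()
        backptr = np.zeros((T, L), dtype=int)
        for t in range(1, T):
            # total[i, j]: best cost of giving node t label j if node t-1 took label i
            total = cost[:, None] + pairwise + unary[t][None, :]
            backptr[t] = np.argmin(total, axis=0)
            cost = np.min(total, axis=0)
        labels = [int(np.argmin(cost))]
        for t in range(T - 1, 0, -1):
            labels.append(int(backptr[t, labels[-1]]))
        return labels[::-1]

    # Toy example: 5 nodes, 3 labels, Potts-like smoothness between neighbors.
    rng = np.random.default_rng(2)
    unary = rng.random((5, 3))
    pairwise = 0.5 * (1 - np.eye(3))   # penalize label changes between neighbors
    print(chain_min_sum(unary, pairwise))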

Bio:  Nikos Paragios is professor of Applied Mathematics and Computer Science and director of the Center for Visual Computing of CentraleSupelec. Prior to that he was professor/research scientist (2004-2005, 2011-2013) at the Ecole Nationale des Ponts et Chaussees, and was affiliated with Siemens Corporate Research (Princeton, NJ, 1999-2004) as a project manager, senior research scientist, and research scientist. In 2002 he was an adjunct professor at Rutgers University and in 2004 at New York University. N. Paragios was a visiting professor at Yale (2007) and at the University of Houston (2009). Professor Paragios is an IEEE Fellow, has co-edited four books, published more than two hundred papers in the most prestigious journals and conferences of medical imaging and computer vision (DBLP server), and holds twenty-one US patents. His work has approximately 15,750 citations according to Google Scholar and his h-index (03/2016) is 60. He is the Editor in Chief of the Computer Vision and Image Understanding Journal and serves as a member of the editorial board of the Medical Image Analysis Journal (MedIA) and the SIAM Journal on Imaging Sciences (SIIMS). He has served as an associate/area editor/member of the editorial board for the IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), the Computer Vision and Image Understanding Journal (CVIU), the International Journal of Computer Vision (IJCV), and the Journal of Mathematical Imaging and Vision (JMIV). He was one of the program chairs of the 11th European Conference on Computer Vision (ECCV'10, Heraklion, Crete) and serves regularly on the conference boards of the most prestigious events of his field (ICCV, CVPR, ECCV, MICCAI). Professor Paragios is a member of the scientific council of the SAFRAN conglomerate.

 


Rosenberg Ceremony

April 10, 2016, 14:00
 

Departmental Seminar: Materials Science and Engineering

Development of a routine for the solution of aluminide structures based on electron diffraction data

Prof. Louisa Meshi

Department of Materials Engineering and Ilse Katz Institute for Nanoscale Science and Technology, Ben-Gurion University of the Negev, Beer-Sheva

April 13, 2016, 16:00
Room 103, Engineering Class (Kitot) Building  

Departmental Seminar: Luigi Cavaleri

April 13, 2016, 15:00
Wolfson 206

 

EE Seminar: Subspace polynomials, cyclic subspace codes, and list-decoding of Gabidulin codes

(The talk will be given in English)

Speaker:  Netanel Raviv
                        Computer Science Department, Technion

Monday, April 18th, 2016
15:00 - 16:00
Room 011, Kitot Bldg., Faculty of Engineering

Subspace polynomials, cyclic subspace codes, and list-decoding of Gabidulin codes

Abstract
Subspace codes have received increasing interest recently due to their application in error correction for random network coding. In particular, cyclic subspace codes are possible candidates for large codes with efficient encoding and decoding algorithms. We introduce a new way of representing subspace codes by a class of polynomials called subspace polynomials. We present some constructions of such codes which are cyclic and analyze their parameters.
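
(For readers unfamiliar with the term, the following is the standard definition of a subspace polynomial, written in LaTeX with our own notation; it is background, not a result of the talk.)

    For a $k$-dimensional $\mathbb{F}_q$-subspace $V \subseteq \mathbb{F}_{q^n}$, the subspace polynomial of $V$ is
    \[
      P_V(x) \;=\; \prod_{v \in V} (x - v) \;=\; x^{q^k} + \sum_{i=0}^{k-1} a_i \, x^{q^i}, \qquad a_i \in \mathbb{F}_{q^n},
    \]
    a monic linearized ($q$-)polynomial whose set of roots is exactly $V$. A subspace code is a set of such subspaces, and it is cyclic when it is closed under multiplying each subspace by any nonzero element of $\mathbb{F}_{q^n}$.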

In addition, the subspace polynomials from one of these constructions are used to show the limits of list-decoding of Gabidulin codes, which may be seen as the rank-metric equivalent of Reed-Solomon codes. Our results show that, unlike Reed-Solomon codes, there exist certain Gabidulin codes that cannot be list-decoded efficiently beyond the unique decoding radius.

Bio: Netanel Raviv received a B.Sc. from the Department of Mathematics and an M.Sc. from the Department of Computer Science at the Technion, in 2010 and 2013, respectively. He is now a doctoral student in the Department of Computer Science at the Technion. He is an awardee of the IBM Ph.D. fellowship for the academic year 2015-2016, and of the Aharon and Ephraim Katzir study grant for 2015. His research interests include coding for distributed storage systems, algebraic coding theory, network coding, and algebraic structures.

 


EE Seminar: On the Stability of Deep Networks and its Relationship with Compressed Sensing and Metric Learning

Speaker:   Dr. Raja Giryes
                        EE, Tel Aviv University

Monday, April 11th, 2016
15:00 - 16:00
Room 011, Kitot Bldg., Faculty of Engineering

On the Stability of Deep Networks and its Relationship with Compressed Sensing and Metric Learning

Abstract
This lecture will address the fundamental question: What are deep neural networks doing to metrics in the data? We know that three important properties of classification machinery are: (i) the system preserves the important information of the input data; (ii) the training examples convey information about unseen data; and (iii) the system is able to treat differently points from different classes. We show that these fundamental properties are inherited by the architecture of deep neural networks. We formally prove that these networks with random Gaussian weights perform a distance-preserving embedding of the data, with a special treatment for in-class and out-of-class data. Similar points at the input of the network are likely to have the same output. The theoretical analysis of deep networks presented here exploits tools used in the compressed sensing and dictionary learning literature, thereby making a formal connection between these important topics. The derived results allow drawing conclusions on the metric learning properties of the network and their relation to its structure, and provide bounds on the required size of the training set such that the training examples faithfully represent the unseen data. The results are validated with state-of-the-art trained networks.
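
(A quick numerical illustration of the flavor of this claim, not the lecture's theorem or its constants: apply one layer with i.i.d. Gaussian weights followed by a ReLU to random points, and compare pairwise distances before and after. NumPy and SciPy are assumed.)

    import numpy as np
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(3)
    n_points, d_in, d_out = 200, 50, 2000

    # Random data points and one layer with i.i.d. Gaussian weights followed by a ReLU.
    X = rng.standard_normal((n_points, d_in))
    W = rng.standard_normal((d_in, d_out)) / np.sqrt(d_out)
    Y = np.maximum(X @ W, 0.0)

    # Ratio of output to input pairwise distances. For wide random layers the
    # spread of this ratio stays bounded (up to an angle-dependent distortion),
    # which is the kind of stable, distance-preserving behavior discussed above.
    ratio = pdist(Y) / pdist(X)
    print("mean ratio %.3f, std %.3f" % (ratio.mean(), ratio.std()))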

Bio: 
Raja Giryes is a faculty member in the school of electrical engineering at Tel Aviv University. His research interests lie at the intersection between signal and image processing and machine learning, and in particular, in deep learning, inverse problems, sparse representations, and signal and image modeling. More details in web.eng.tau.ac.il/~raja

 


Departmental Seminar: Prof. Imberger

May 18, 2016, 15:00
Wolfson 206

 

EE Seminar: Non-smooth manifold optimization with applications to machine learning and pattern recognition

(The talk will be given in English)

Speaker:   Prof. Michael Bronstein
                        University of Lugano, Switzerland / Perceptual Computing, Intel, Israel RAS, Moscow, Russia

Sunday, April 3rd, 2016
14:00 - 15:00
Room 011, Kitot Bldg., Faculty of Engineering

Non-smooth manifold optimization with applications to machine learning and pattern recognition

Abstract
Numerous problems in machine learning are formulated as optimization with manifold constraints, i.e., where the variables are restricted to a smooth submanifold of the search space. For example, optimization on the Grassmann manifold comes up in multi-view clustering and matrix completion; Stiefel manifolds arise in eigenvalue, assignment, and Procrustes problems, compressed sensing, shape correspondence, manifold learning, sensor localization, structural biology, and structure-from-motion recovery; manifolds of fixed-rank matrices appear in max-cut problems and sparse principal component analysis; and oblique manifolds are encountered in problems such as joint diagonalization and blind source separation.
In this talk, I will present an ADMM-like method for handling non-smooth manifold-constrained optimization. Our method is generic and not limited to a specific manifold, is very simple to implement, and does not require parameter tuning. I will show examples of applications from the domains of physics, computer graphics, and machine learning.
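
(A rough sketch of the splitting idea on a toy instance of our own choosing, not the method presented in the talk: minimize a smooth fit term plus a non-smooth L1 term over matrices with orthonormal columns, alternating a closed-form manifold projection, soft-thresholding, and a dual update.)

    import numpy as np

    def stiefel_projection(M):
        """Closest matrix with orthonormal columns to M (in Frobenius norm)."""
        U, _, Vt = np.linalg.svd(M, full_matrices=False)
        return U @ Vt

    def soft_threshold(M, tau):
        return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

    def admm_stiefel_l1(A, lam=0.1, rho=1.0, n_iter=200):
        """Toy ADMM-style splitting for
            min_X  0.5*||X - A||_F^2 + lam*||X||_1   s.t.  X has orthonormal columns.
        Split X = Z: the X-step is a closed-form Stiefel projection, the Z-step is
        soft-thresholding, followed by a dual update. (Non-convex, so this is a
        heuristic sketch of the splitting idea, not a guaranteed solver.)"""
        n, p = A.shape
        Z = np.zeros((n, p)); U = np.zeros((n, p))
        for _ in range(n_iter):
            X = stiefel_projection((A + rho * (Z - U)) / (1.0 + rho))
            Z = soft_threshold(X + U, lam / rho)
            U = U + X - Z
        return X

    A = np.random.default_rng(4).standard_normal((10, 3))
    X = admm_stiefel_l1(A)
    print(np.round(X.T @ X, 3))   # approximately the identity: orthonormal columns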

