Physical Electronics Seminar: Shachar Ben Dayan

March 27, 2019, 9:00
Faculty of Engineering, Kitot Building, Room 011


You are invited to attend a lecture

Light Field Refocusing based on Sparse
Angular Information

 By:

Shachar Ben Dayan
M.Sc. student under the supervision of Prof. David Mendlovic and Dr. Raja Giryes
Abstract
Light-field photography has attracted significant academic attention in recent years due to its unique capability to extract depth without active components. While 2D cameras only capture the total amount of light at each pixel on the sensor, namely, the projection of the light in the scene, light-field cameras also record the direction and intensity of each ray intersecting with the sensor in a single capture. Thus, light field images contain spatial and angular information of a given scene.
This property of light-field images is the anchor for many useful applications, such as depth estimation, super-resolution, and post-capture refocusing. The latter application, in its current implementations, relies on a dense light field, i.e., high angular resolution. However, since light-field cameras record both the spatial and the angular information on the same sensor, there is a trade-off between the two, known as the angular-spatial trade-off. Therefore, it is of great interest to perform the various light-field applications relying only on sparse angular data.
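For context, classical post-capture refocusing over a dense angular grid is typically done by shift-and-add: each sub-aperture view is shifted in proportion to its angular offset and a focus parameter, and the shifted views are averaged. The sketch below is a minimal NumPy illustration of that baseline (the array layout, the `alpha` parameter, and integer-pixel shifts via `np.roll` are simplifying assumptions, not the lecture's method):

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-add refocusing over a dense angular grid (illustrative sketch).

    light_field: array of shape (U, V, H, W) holding the sub-aperture views.
    alpha: focus parameter; each view (u, v) is shifted in proportion to its
    angular offset from the central view, then all views are averaged.
    Integer-pixel shifts are used here for simplicity; real implementations
    interpolate sub-pixel shifts.
    """
    U, V, H, W = light_field.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            # Shift the view according to its angular position, then accumulate.
            out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```

With `alpha = 0` the result is simply the mean of all views (the all-in-focus aperture average); nonzero `alpha` brings a different depth plane into focus. The point of the work described below is to obtain a comparable refocused image without the dense `U x V` grid this baseline requires.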
In this work, we propose a technique for post-capture refocusing that uses a very low angular resolution: only 2x2 sub-aperture views. We exploit the power of convolutional neural networks (CNNs) for this task. The four input views are fed into a CNN, which outputs a refocused image according to a given focus plane. This algorithm shows improved performance in terms of PSNR, SSIM, and computational time, compared to the current state of the art, which synthesizes a full light field from a small number of views in order to refocus a single image. Our algorithm is applicable to clean as well as noisy data, and does not depend on the device used to capture the light-field images.
 

On Wednesday, March 27, 2019 at 9:00
Room 011, Kitot Building
 

Tel Aviv University makes every effort to respect copyright. If you hold the copyright to content appearing here, and/or believe that the use made of such content infringes your rights, please contact us promptly at the address here >>