EE Seminar: Decoding Linear Codes With Deep Learning

 

Speaker: Eliya Nachmani,

M.Sc. student under the supervision of Profs. David Burshtein and Yair Be'ery

 

Wednesday, April 26th, 2017 at 15:00

Room 011, Kitot Bldg., Faculty of Engineering

 

Decoding Linear Codes With Deep Learning

 

Abstract

 

Designing a practical, low-complexity, close-to-optimal channel decoder for powerful algebraic codes of short to moderate block length is an open research problem. We propose a novel deep learning method, based on a feed-forward neural network, for improving the belief propagation algorithm. The method generalizes the standard belief propagation algorithm by assigning weights to the edges of the Tanner graph; these weights are then trained using deep learning techniques. A well-known property of the belief propagation algorithm is that its performance does not depend on the transmitted codeword. A crucial feature of our new method is that the decoder preserves this property, which allows us to train on a single codeword instead of an exponential number of codewords. Improvements over the belief propagation algorithm are demonstrated for various high-density parity-check (HDPC) codes.
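For a concrete picture of the weighted belief propagation idea described above, the following is a minimal sketch (not the authors' released code) of a single trainable sum-product iteration over the Tanner graph of a parity-check matrix H, with one learnable weight per edge and per channel LLR. It is written in PyTorch; the class name, the exact weight parameterization, and the Hamming(7,4) matrix in the usage lines are illustrative assumptions, not the codes or architecture from the talk.

import torch
import torch.nn as nn

class WeightedBPIteration(nn.Module):
    """One iteration of sum-product decoding with trainable multiplicative weights (sketch)."""

    def __init__(self, H):
        super().__init__()
        self.register_buffer("H", H.float())                  # (m checks) x (n variables), 0/1
        self.w_edge = nn.Parameter(torch.ones_like(self.H))   # one weight per Tanner-graph edge
        self.w_llr = nn.Parameter(torch.ones(H.shape[1]))     # one weight per channel LLR

    def forward(self, llr, c2v):
        # llr: (batch, n) channel LLRs; c2v: (batch, m, n) check-to-variable
        # messages from the previous iteration (zeros on the first iteration).
        H, eps = self.H, 1e-9

        # Variable-to-check: weighted channel LLR plus weighted messages from all
        # *other* checks attached to the same variable (leave-one-out sum).
        weighted = self.w_edge * c2v * H
        v_total = weighted.sum(dim=1, keepdim=True)
        v2c = (self.w_llr * llr).unsqueeze(1) + v_total - weighted
        v2c = v2c * H

        # Check-to-variable: 2*atanh of the product of tanh(v2c/2) over all *other*
        # variables on the same check; the leave-one-out product is computed in the
        # log/sign domain for numerical stability.
        t = torch.tanh(0.5 * v2c)
        t = torch.where(H.bool(), t, torch.ones_like(t))      # neutral element off-graph
        sign = torch.sign(t) + (t == 0).float()                # avoid zero signs
        log_abs = torch.log(t.abs().clamp_min(eps))
        loo_sign = sign.prod(dim=2, keepdim=True) * sign       # "divide" by own sign
        loo_log = log_abs.sum(dim=2, keepdim=True) - log_abs
        c2v_new = 2.0 * torch.atanh((loo_sign * loo_log.exp()).clamp(-1 + eps, 1 - eps))
        c2v_new = c2v_new * H

        # Marginal (soft) outputs: weighted LLR plus all weighted incoming messages.
        soft = self.w_llr * llr + (self.w_edge * c2v_new * H).sum(dim=1)
        return c2v_new, soft

# Usage sketch with a placeholder Hamming(7,4) parity-check matrix.
H = torch.tensor([[1, 1, 0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])
layer = WeightedBPIteration(H)
llr = torch.randn(4, 7)                     # fake channel LLRs, batch of 4
c2v = torch.zeros(4, 3, 7)                  # zero messages on the first iteration
c2v, soft = layer(llr, c2v)
hard_bits = (soft < 0).int()                # assuming positive LLR means bit = 0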

We also introduce a recurrent neural network (RNN) architecture for decoding linear block codes. Our method achieves bit error rate results comparable to the feed-forward neural network with significantly fewer parameters. We demonstrate improved performance over belief propagation on sparser Tanner graph representations of the codes. Furthermore, we show that the RNN decoder can be used to improve the performance, or alternatively reduce the computational complexity, of the mRRD algorithm for low-complexity, close-to-optimal decoding of short BCH codes.
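The recurrent variant can be pictured, under the same assumptions as the sketch above, as tying one set of weights across all decoding iterations rather than learning a separate set per iteration, so the parameter count no longer grows with decoding depth. The class name, iteration count, and loop structure below are illustrative placeholders, not the talk's actual architecture.

class RNNBPDecoder(nn.Module):
    """Sketch of a recurrent decoder: one weighted BP cell reused every iteration."""

    def __init__(self, H, num_iterations=5):
        super().__init__()
        self.cell = WeightedBPIteration(H)     # shared ("tied") weights across iterations
        self.num_iterations = num_iterations

    def forward(self, llr):
        batch, n = llr.shape
        m = self.cell.H.shape[0]
        c2v = torch.zeros(batch, m, n, device=llr.device)
        soft = None
        for _ in range(self.num_iterations):
            c2v, soft = self.cell(llr, c2v)    # same weights applied at every pass
        return soft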

 
