EE ZOOM Seminars- Nir Raviv & Yoav Blum
Seminar at 15:00: https://zoom.us/j/155811314
Seminar at 15:30: https://us04web.zoom.us/j/108282014
Attendance at both seminars will grant credit, based on the screenshot showing the participants' names in each seminar.
--------------------------------------------------------------------------------
Speaker: Nir Raviv
M.Sc. student under the supervision of Prof. Yair Be'ery
Monday, April 06, 2020 at 15:00
ZOOM
Active Learning and Self-Attention for Decoding of Error Correcting Codes
Abstract
Error correction codes are an integral part of communication systems, boosting the reliability of transmission. The optimal decoding rule for transmitted codewords is maximum likelihood, which is NP-hard in general, as its complexity grows exponentially with the code dimension. Practical realizations therefore employ suboptimal decoding algorithms; yet limited theoretical insight prevents one from exploiting the full potential of these algorithms.
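To make the intractability of maximum-likelihood decoding concrete, here is a minimal sketch (not part of the talk) of brute-force ML decoding for a toy (7,4) Hamming code over a BPSK/AWGN channel: the decoder enumerates all 2^k codewords and picks the closest one, an enumeration that explodes for any practical code dimension k.

```python
import numpy as np

# Generator matrix of a (7,4) Hamming code in systematic form (illustrative choice).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def all_codewords(G):
    """Enumerate all 2^k codewords -- the step that is exponential in k."""
    k = G.shape[0]
    msgs = np.array([[(i >> j) & 1 for j in range(k)] for i in range(2 ** k)])
    return msgs @ G % 2

def ml_decode(y, codebook):
    """ML decoding over an AWGN channel: nearest BPSK-modulated codeword."""
    s = 1 - 2 * codebook.astype(float)          # map bits 0/1 to symbols +1/-1
    idx = np.argmin(np.sum((s - y) ** 2, axis=1))
    return codebook[idx]

codebook = all_codewords(G)                      # 16 codewords for k = 4
c = codebook[5]
y = (1 - 2 * c.astype(float)) + 0.1 * np.random.randn(7)  # mildly noisy channel output
print(ml_decode(y, codebook))
```

For a code with k = 100 information bits the codebook would hold 2^100 entries, which is why suboptimal iterative decoders such as belief propagation are used in practice.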
Inspired by state-of-the-art deep learning advancements, two novel methods are introduced to improve error correction codes decoding algorithms.
The first method explores the informativeness of the training data. High-quality data is essential for training a robust deep learning model. While in other fields data is sparse and costly to collect, in error decoding it is free to generate and label, which allows the data to be exploited at will. Building on this fact and inspired by active learning, a method to improve Weighted Belief Propagation (WBP) decoding is introduced: by sampling the data intelligently, we achieve improved performance without increasing inference (decoding) complexity over the original WBP.
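The active-learning idea above can be sketched as follows. This is an illustrative toy, not the speaker's actual method: the sampling criterion used here (keeping the received words with the lowest minimum absolute log-likelihood ratio, i.e. the least confident samples) and the SNR-to-noise mapping are both assumptions.

```python
import numpy as np

def sample_training_batch(codeword, snr_db, batch, keep):
    """Generate `batch` noisy receptions of `codeword` and keep the `keep`
    least reliable ones -- a toy active-learning-style selection rule."""
    sigma = np.sqrt(0.5 * 10 ** (-snr_db / 10))          # assumed Eb/N0 convention
    x = 1 - 2 * codeword.astype(float)                   # BPSK: bits 0/1 -> +1/-1
    y = x + sigma * np.random.randn(batch, len(codeword))
    llr = 2 * y / sigma ** 2                             # per-bit LLRs for AWGN
    scores = np.min(np.abs(llr), axis=1)                 # low score = near a decision boundary
    idx = np.argsort(scores)[:keep]                      # most informative samples
    return llr[idx]

# Example: from 1000 noisy receptions of the all-zero codeword, keep the 50
# hardest ones as training data for a neural decoder.
hard_batch = sample_training_batch(np.zeros(7, dtype=int), 2.0, 1000, 50)
```

The point of such a filter is that confidently-decodable receptions contribute little gradient signal, so training on the hard samples can improve the learned decoder without changing its inference-time complexity.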
The second aspect we explore is the choice of permutation in permutation decoding. We present a data-driven framework for permutation selection that combines domain knowledge with machine learning concepts such as node embedding and self-attention. Significant and consistent bit-error-rate improvements over the baseline decoders are achieved for all simulated codes.
These two methods are examples of enhancing a deep learning model by incorporating domain knowledge from the error-correction field. To the best of our knowledge, this work is the first to leverage the benefits of both active learning and neural Transformer networks in the physical layer of communication systems.
--------------------------------------------------------------------------------
Speaker: Yoav Blum
M.Sc. student under the supervision of Prof. David Burshtein
Monday, April 06, 2020 at 15:30
ZOOM
Blind Vocoder Speech Reconstruction Using Generative Adversarial Networks
Abstract
The problem of reconstructing vocoder acoustic parameters from encoded bit-stream data alone is considered. Wasserstein generative adversarial networks (GANs) and CycleGANs, which map between two unpaired domains, are used. It is shown that key acoustic parameters such as linear predictive coefficients (LPCs) can be reconstructed when these parameters are encoded using scalar quantization. It is further shown that speech reconstruction is possible to some extent when the vocoder is known to belong to the family of code-excited linear prediction (CELP) models, but the coded bit-frame structure is unknown.