You are invited to the seminar of Or Rimoch, M.Sc. student: deep-learning-based ("end-to-end") estimation of synthesizer parameters.

January 6, 2021, 14:00–15:00
The seminar will be held on Zoom.


"ZOOM" SEMINAR
SCHOOL OF MECHANICAL ENGINEERING SEMINAR
Wednesday, January 6, 2021 at 14:00
InverSynth 2: Deep End-to-End Synthesizer Parameters Inference
Or Rimoch,
Department of Mechanical Engineering
Under the supervision of Dr. Noam Koengstein, Department of Industrial Engineering

 

Predicting a synthesizer's parameter configuration from its raw audio or spectrogram is a challenging task, with significant implications for both the music industry and professional recording studios. Current predictive methods rely mainly on auto-regressive and generative models, such as autoencoders and GANs. In this work, we use an encoder-decoder network architecture, termed InverSynth 2. Unlike previous work, we combine it with a unique multi-loss consisting of two terms: cross entropy (CE) and L2. The CE term solves a classification task, predicting the synthesizer parameters as the output of the encoder; the L2 term solves a regression task, predicting a 2D spectrogram (STFT) as the output of the decoder. Given a raw audio signal, the network predicts the synthesizer configuration in an end-to-end fashion: the ground-truth spectrogram is fed into the encoder to obtain the predicted synthesizer parameters, which are then fed as input to the decoder to obtain the predicted spectrogram. In this way, the decoder serves as a proxy for a "real" synthesizer, except that it outputs a spectrogram instead of raw audio. This technique provides a richer representation feedback loop while training the model.
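The combined objective described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the tensor shapes, the `alpha` weighting, and all function names are assumptions made for the example.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def multi_loss(param_logits, param_targets, pred_spec, true_spec, alpha=1.0):
    """Hypothetical multi-loss: CE on quantized synthesizer-parameter
    classes (encoder output) plus L2 between the decoder's predicted
    spectrogram and the ground-truth STFT.

    param_logits:  (batch, n_params, n_classes) encoder outputs
    param_targets: (batch, n_params) integer class per quantized parameter
    pred_spec, true_spec: (batch, freq, time) spectrograms
    """
    probs = softmax(param_logits)
    b, p = param_targets.shape
    # Cross entropy: -log p(correct class), averaged over batch and parameters
    picked = probs[np.arange(b)[:, None], np.arange(p)[None, :], param_targets]
    ce = -np.log(picked + 1e-12).mean()
    # L2: mean squared error between predicted and ground-truth spectrograms
    l2 = np.mean((pred_spec - true_spec) ** 2)
    return ce + alpha * l2

# Toy usage with random data (illustrative sizes only)
rng = np.random.default_rng(0)
logits = rng.normal(size=(2, 4, 8))        # 2 samples, 4 parameters, 8 classes
targets = rng.integers(0, 8, size=(2, 4))
spec_pred = rng.normal(size=(2, 16, 16))
spec_true = rng.normal(size=(2, 16, 16))
loss = multi_loss(logits, targets, spec_pred, spec_true)
```

In a real training loop both terms would be backpropagated jointly, so the decoder (the synthesizer proxy) shapes the gradients that reach the encoder's parameter predictions.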
We analyze, qualitatively and quantitatively, two synthesizer datasets: a "synthetic" dataset, based on Python code that emulates a synthesizer, and a "real" dataset, based on a real VST plugin called "TAL-NoiseMaker".
This is ongoing research; further improvements to the network architecture and evaluation metrics may follow.

Join Zoom Meeting

https://zoom.us/j/96584758181?pwd=WC9PMXdsYzJ3NFdEN2Q5ZUtOZEVjdz09

The meeting will be recorded and made available on the School’s site.

 
