Representation Learning for Single-Channel Source Separation and Bandwidth Extension

Matthias Zöhrer, Franz Pernkopf, Robert Peharz

Publication: Contribution to a journal › Article › peer review


In this paper, we use deep representation learning for model-based single-channel source separation (SCSS) and artificial bandwidth extension (ABE). Both tasks are ill-posed and require source-specific prior knowledge. In addition to well-known generative models such as restricted Boltzmann machines and higher-order contractive autoencoders, two recently introduced deep models, namely generative stochastic networks (GSNs) and sum-product networks (SPNs), are used for learning spectrogram representations. For SCSS, we evaluate the deep architectures on data of the 2nd CHiME speech separation challenge and provide results for a speaker-dependent, a speaker-independent, a matched-noise, and an unmatched-noise condition task. GSNs obtain the best PESQ and overall perceptual score on average in all four tasks. Similarly, frame-wise GSNs reconstruct the missing frequency bands in ABE best, measured by frequency-domain segmental SNR. They significantly outperform SPNs embedded in hidden Markov models and the other representation models.
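To make the SCSS setting concrete: model-based separation typically estimates the source magnitudes in the spectrogram domain and recombines them via a soft mask applied to the mixture. The sketch below illustrates this generic masking step only; it is not the paper's GSN or SPN model, and the function and variable names are illustrative assumptions.

```python
import numpy as np

def ratio_mask_separation(mix_mag, est_speech_mag, est_noise_mag, eps=1e-8):
    """Apply a soft (Wiener-like) ratio mask to a mixture magnitude spectrogram.

    mix_mag:        (freq, time) magnitude spectrogram of the noisy mixture
    est_speech_mag: a model's estimate of the clean-speech magnitudes
    est_noise_mag:  a model's estimate of the noise magnitudes
    """
    # Mask values lie in [0, 1]; large speech estimates keep the bin,
    # large noise estimates suppress it.
    mask = est_speech_mag / (est_speech_mag + est_noise_mag + eps)
    return mask * mix_mag

# Toy example with random nonnegative "spectrograms"
rng = np.random.default_rng(0)
speech = rng.random((257, 100))
noise = rng.random((257, 100))
mix = speech + noise  # magnitudes are only approximately additive in practice
sep = ratio_mask_separation(mix, speech, noise)
```

The separated magnitude `sep` would then be combined with the mixture phase and inverted back to the time domain, which is also where intrusive metrics such as PESQ are computed.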
Pages (from-to): 2398-2409
Journal: IEEE Transactions on Audio, Speech, and Language Processing
Publication status: Published - 2015

Fields of Expertise

  • Information, Communication & Computing


