Cauchy Multichannel Speech Enhancement with a Deep Speech Prior. Fontaine, M., Nugraha, A. A., Badeau, R., Yoshii, K., & Liutkus, A. In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, September 2019.
We propose a semi-supervised multichannel speech enhancement system based on a probabilistic model which assumes that both speech and noise follow the heavy-tailed multivariate complex Cauchy distribution. As we advocate, this allows handling strong and adverse noise conditions. Consequently, the model is parameterized by the source magnitude spectrograms and the source spatial scatter matrices. To deal with the non-additivity of scatter matrices, our first contribution is to perform the enhancement on a projected space. Then, our second contribution is to combine a latent variable model for speech, which is trained within the variational autoencoder framework, with a low-rank model for the noise source. At test time, an iterative inference algorithm is applied, which produces estimated parameters to use for separation. The speech latent variables are first estimated from the noisy speech and then updated by a gradient descent method, while a majorization-equalization strategy is used to update the noise parameters and the spatial parameters of both sources. Our experimental results show that the Cauchy model outperforms the state-of-the-art methods. The standard deviation scores also reveal that the proposed method is more robust against non-stationary noise.
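The sketch below is a minimal, single-channel simplification of the latent-variable update described in the abstract, not the paper's multichannel algorithm: it omits the spatial scatter matrices, the projection step, and the majorization-equalization updates. It assumes a hypothetical pretrained VAE decoder mapping a latent vector to a speech scale spectrogram, uses the isotropic complex Cauchy negative log-likelihood per time-frequency bin (up to a constant, -log γ + 1.5 log(γ² + |x|²)), and exploits the fact that Cauchy scale parameters add for independent sources. All names, shapes, and hyperparameters are illustrative assumptions.

```python
# Sketch: gradient-descent refinement of speech latent variables under a
# Cauchy likelihood (single-channel simplification; hypothetical shapes/names).
import torch

F_BINS, T_FRAMES, Z_DIM = 257, 100, 32

# Hypothetical pretrained VAE decoder: latent z -> nonnegative speech scale spectrogram.
decoder = torch.nn.Sequential(
    torch.nn.Linear(Z_DIM, 128), torch.nn.Tanh(),
    torch.nn.Linear(128, F_BINS), torch.nn.Softplus(),
)

def cauchy_nll(mix_mag, scale, eps=1e-8):
    """Isotropic complex Cauchy NLL per TF bin, up to an additive constant:
    -log(scale) + 1.5 * log(scale**2 + |x|**2), summed over all bins."""
    return (-torch.log(scale + eps)
            + 1.5 * torch.log(scale ** 2 + mix_mag ** 2)).sum()

# Toy observed mixture magnitude (stand-in for |STFT| of the noisy speech).
mix_mag = torch.rand(T_FRAMES, F_BINS)

# Low-rank (NMF-like) noise scale model, kept fixed here during the z update.
W = torch.rand(T_FRAMES, 4)
H = torch.rand(4, F_BINS)
noise_scale = W @ H

# Speech latent variables, e.g. initialised from an encoder pass on the mixture,
# then refined by gradient descent on the Cauchy NLL of the mixture.
z = torch.zeros(T_FRAMES, Z_DIM, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    speech_scale = decoder(z)               # speech Cauchy scale from the deep prior
    mix_scale = speech_scale + noise_scale  # Cauchy scales add for independent sources
    loss = cauchy_nll(mix_mag, mix_scale)
    loss.backward()
    opt.step()

# Illustrative magnitude mask from the fitted scales (not the paper's
# projection-based multichannel estimator).
with torch.no_grad():
    speech_scale = decoder(z)
    mask = speech_scale / (speech_scale + noise_scale)
```

In the paper the corresponding step operates on the projected multichannel observations and alternates with majorization-equalization updates of the noise and spatial parameters; the mask above merely illustrates how the fitted scales could drive a separation filter.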
