@InProceedings{7362669,
  author    = {D. Baby and H. {Van hamme}},
  booktitle = {2015 23rd European Signal Processing Conference (EUSIPCO)},
  title     = {Hybrid input spaces for exemplar-based noise robust speech recognition using coupled dictionaries},
  year      = {2015},
  pages     = {1676--1680},
  month     = aug,
  doi       = {10.1109/EUSIPCO.2015.7362669},
  issn      = {2076-1465},
  url       = {https://www.eurasip.org/proceedings/eusipco/eusipco2015/papers/1570101195.pdf},
  abstract  = {Exemplar-based feature enhancement successfully exploits a wide temporal signal context. We extend this technique with hybrid input spaces that are chosen for a more effective separation of speech from background noise. This work investigates the use of two different hybrid input spaces which are formed by incorporating the full-resolution and modulation envelope spectral representations with the Mel features. A coupled output dictionary containing Mel exemplars, which are jointly extracted with the hybrid space exemplars, is used to reconstruct the enhanced Mel features for the ASR back-end. When compared to the system which uses Mel features only as input exemplars, these hybrid input spaces are found to yield improved word error rates on the AURORA-2 database especially with unseen noise cases.},
  keywords  = {speech recognition; hybrid input spaces; exemplar-based noise robust speech recognition; coupled dictionaries; exemplar-based feature enhancement; temporal signal context; background noise; modulation envelope spectral representations; Mel features; Mel exemplars; ASR back-end; AURORA-2 database; word error rates; Dictionaries; Discrete Fourier transforms; Speech; Noise measurement; Feature extraction; Training data; Modulation; automatic speech recognition; modulation envelope; non-negative matrix factorization},
}