Multimodal speaker/speech recognition using lip motion, lip texture and audio. Cetingul, H. E., Erzin, E., Yemez, Y., & Tekalp, A. M. SIGNAL PROCESSING, 86(12):3549-3558, DEC, 2006.
We present a new multimodal speaker/speech recognition system that integrates audio, lip texture and lip motion modalities. Fusion of audio and face texture modalities has been investigated in the literature before. The emphasis of this work is to investigate the benefits of inclusion of lip motion modality for two distinct cases: speaker and speech recognition. The audio modality is represented by the well-known mel-frequency cepstral coefficients (MFCC) along with the first and second derivatives, whereas lip texture modality is represented by the 2D-DCT coefficients of the luminance component within a bounding box about the lip region. In this paper, we employ a new lip motion modality representation based on discriminative analysis of the dense motion vectors within the same bounding box for speaker/speech recognition. The fusion of audio, lip texture and lip motion modalities is performed by the so-called reliability weighted summation (RWS) decision rule. Experimental results show that inclusion of lip motion modality provides further performance gains over those which are obtained by fusion of audio and lip texture alone, in both speaker identification and isolated word recognition scenarios. (c) 2006 Published by Elsevier B.V.
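The abstract's fusion step, reliability weighted summation (RWS), combines per-modality classifier scores as a weighted sum, with weights reflecting each modality's estimated reliability. A minimal sketch follows; the function name, the dictionary-based interface, and the simple sum-to-one weight normalization are illustrative assumptions, since the paper's exact reliability-estimation procedure is not reproduced here.

```python
import numpy as np

def rws_fuse(scores, reliabilities):
    """Sketch of reliability weighted summation (RWS) fusion.

    scores: dict mapping modality name -> per-class score array
            (e.g. normalized log-likelihoods from that modality's classifier)
    reliabilities: dict mapping modality name -> scalar reliability estimate
    Returns the fused per-class score array; the recognized class is its argmax.
    """
    # Normalize reliabilities into weights that sum to one (an assumed scheme).
    total = sum(reliabilities.values())
    weights = {m: r / total for m, r in reliabilities.items()}
    # Weighted summation of the modality score vectors.
    return sum(weights[m] * np.asarray(s, dtype=float)
               for m, s in scores.items())

# Toy example with audio and lip-motion modalities (hypothetical numbers):
fused = rws_fuse(
    scores={"audio": [0.9, 0.1], "lip_motion": [0.4, 0.6]},
    reliabilities={"audio": 0.7, "lip_motion": 0.3},
)
best_class = int(np.argmax(fused))
```

A more reliable modality (here, audio) dominates the fused decision, which is the mechanism the abstract credits for the gains from adding lip motion on top of audio and lip texture.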
@article{ ISI:000242182700004,
Author = {Cetingul, H. E. and Erzin, E. and Yemez, Y. and Tekalp, A. M.},
Title = {{Multimodal speaker/speech recognition using lip motion, lip texture and
   audio}},
Journal = {{SIGNAL PROCESSING}},
Year = {{2006}},
Volume = {{86}},
Number = {{12}},
Pages = {{3549--3558}},
Month = {{DEC}},
Abstract = {{We present a new multimodal speaker/speech recognition system that
   integrates audio, lip texture and lip motion modalities. Fusion of audio
   and face texture modalities has been investigated in the literature
   before. The emphasis of this work is to investigate the benefits of
   inclusion of lip motion modality for two distinct cases: speaker and
   speech recognition. The audio modality is represented by the well-known
   mel-frequency cepstral coefficients (MFCC) along with the first and
   second derivatives, whereas lip texture modality is represented by the
   2D-DCT coefficients of the luminance component within a bounding box
   about the lip region. In this paper, we employ a new lip motion modality
   representation based on discriminative analysis of the dense motion
   vectors within the same bounding box for speaker/speech recognition. The
   fusion of audio, lip texture and lip motion modalities is performed by
   the so-called reliability weighted summation (RWS) decision rule.
   Experimental results show that inclusion of lip motion modality provides
   further performance gains over those which are obtained by fusion of
   audio and lip texture alone, in both speaker identification and isolated
   word recognition scenarios. (c) 2006 Published by Elsevier B.V.}},
DOI = {{10.1016/j.sigpro.2006.02.045}},
ISSN = {{0165-1684}},
EISSN = {{1879-2677}},
ResearcherID-Numbers = {{Erzin, Engin/H-1716-2011}},
ORCID-Numbers = {{Erzin, Engin/0000-0002-2715-2368}},
Unique-ID = {{ISI:000242182700004}},
}
