Discriminative analysis of lip motion features for speaker identification and speech-reading. Çetingül, H. E., Yemez, Y., Erzin, E., & Tekalp, A. M. IEEE TRANSACTIONS ON IMAGE PROCESSING, 15(10):2879-2891, OCT, 2006.
@article{ISI:000240776200003,
Author = {{\c{C}}eting{\"u}l, H. Ertan and Yemez, Y{\"u}cel and Erzin, Engin and Tekalp, A.
   Murat},
Title = {{Discriminative analysis of lip motion features for speaker
   identification and speech-reading}},
Journal = {{IEEE TRANSACTIONS ON IMAGE PROCESSING}},
Year = {{2006}},
Volume = {{15}},
Number = {{10}},
Pages = {{2879-2891}},
Month = {{OCT}},
Abstract = {{There have been several studies that jointly use audio, lip intensity,
   and lip geometry information for speaker identification and
   speech-reading applications. This paper proposes using explicit lip
   motion information, instead of or in addition to lip intensity and/or
   geometry information, for speaker identification and speech-reading
   within a unified feature selection and discrimination analysis
   framework, and addresses two important issues: 1) Is using explicit lip
   motion information useful, and 2) if so, what are the best lip motion
   features for these two applications? The best lip motion features for
   speaker identification are considered to be those that result in the
   highest discrimination of individual speakers in a population, whereas
   for speech-reading, the best features are those providing the highest
   phoneme/word/phrase recognition rate. Several lip motion feature
   candidates have been considered, including dense motion features within
   a bounding box about the lip, lip contour motion features, and
   combinations of these with lip shape features. Furthermore, a novel
   two-stage, spatial, and temporal discrimination analysis is introduced
   to select the best lip motion features for speaker identification and
   speech-reading applications. Experimental results using a
   hidden-Markov-model-based recognition system indicate that using
   explicit lip motion information provides additional performance gains in
   both applications, and lip motion features prove more valuable in the
   case of the speech-reading application.}},
DOI = {{10.1109/TIP.2006.877528}},
ISSN = {{1057-7149}},
EISSN = {{1941-0042}},
ResearcherID-Numbers = {{Erzin, Engin/H-1716-2011}},
ORCID-Numbers = {{Erzin, Engin/0000-0002-2715-2368}},
Unique-ID = {{ISI:000240776200003}},
}