Audio-visual anticipatory coarticulation modeling by human and machine. Terry, L. H., Livescu, K., Pierrehumbert, J. B., & Katsaggelos, A. K. In Interspeech 2010, pages 2682–2685, ISCA, September 2010.
The phenomenon of anticipatory coarticulation provides a basis for the observed asynchrony between the acoustic and visual onsets of phones in certain linguistic contexts. This type of asynchrony is typically not explicitly modeled in audio-visual speech models. In this work, we study within-word audiovisual asynchrony using manual labels of words in which theory suggests that audio-visual asynchrony should occur, and show that these hand labels confirm the theory. We then introduce a new statistical model of audio-visual speech, the asynchrony-dependent transition (ADT) model. This model allows asynchrony between audio and video states within word boundaries, where the audio and video state transitions depend not only on the state of that modality, but also on the instantaneous asynchrony. The ADT model outperforms a baseline synchronous model in mimicking the hand labels in a forced alignment task, and its behavior as parameters are changed conforms to our expectations about anticipatory coarticulation. The same model could be used for speech recognition, although here we consider it only for the task of forced alignment for linguistic analysis. © 2010 ISCA.
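
The abstract describes the core mechanism of the ADT model: two HMM-like streams (audio and video) traverse the same within-word state sequence, and each stream's transition probability depends on the instantaneous asynchrony between them. The sketch below illustrates that idea as a joint forced-alignment Viterbi in Python. It is a minimal toy, not the paper's dynamic-Bayesian-network implementation: the functional form of advance_prob, the D_MAX and COUPLING parameters, and the random per-frame scores are all assumptions made purely for illustration.

"""
Illustrative sketch of an asynchrony-dependent transition (ADT) style
forced alignment. All names and parameters here are assumptions; this
is NOT the authors' implementation.
"""
import numpy as np

D_MAX = 2        # assumed hard cap on within-word audio-video asynchrony (in states)
COUPLING = 0.3   # assumed strength of the soft pull back toward synchrony

def advance_prob(d, stream):
    """Probability that a stream advances to its next state, given the
    instantaneous asynchrony d = video_state - audio_state.
    A lagging stream is nudged to advance; a leading stream is held back.
    (This symmetric functional form is an assumption, not from the paper.)"""
    lead = d if stream == "video" else -d    # how far this stream is ahead
    p = 0.5 - COUPLING * lead / D_MAX        # further ahead -> less likely to advance
    return float(np.clip(p, 0.05, 0.95))

def adt_forced_alignment(audio_ll, video_ll):
    """Viterbi over joint states (audio state i, video state j), constrained
    to |j - i| <= D_MAX within the word. audio_ll, video_ll: (T, N) arrays of
    per-frame log-likelihoods for N phone states. Assumes both streams start
    in state 0 and finish in state N-1."""
    T, N = audio_ll.shape
    NEG = -np.inf
    delta = np.full((T, N, N), NEG)
    back = np.zeros((T, N, N, 2), dtype=int)
    delta[0, 0, 0] = audio_ll[0, 0] + video_ll[0, 0]
    for t in range(1, T):
        for i in range(N):
            for j in range(N):
                if abs(j - i) > D_MAX:
                    continue                     # asynchrony kept within bounds
                best, arg = NEG, (0, 0)
                for pi in (i - 1, i):            # audio stayed or advanced
                    for pj in (j - 1, j):        # video stayed or advanced
                        if pi < 0 or pj < 0 or delta[t - 1, pi, pj] == NEG:
                            continue
                        d = pj - pi              # asynchrony before the transition
                        pa = advance_prob(d, "audio")
                        pv = advance_prob(d, "video")
                        lp = (np.log(pa) if i > pi else np.log(1 - pa)) \
                           + (np.log(pv) if j > pj else np.log(1 - pv))
                        s = delta[t - 1, pi, pj] + lp
                        if s > best:
                            best, arg = s, (pi, pj)
                delta[t, i, j] = best + audio_ll[t, i] + video_ll[t, j]
                back[t, i, j] = arg
    # Backtrace from the final joint state (both streams completed the word).
    path = [(N - 1, N - 1)]
    for t in range(T - 1, 0, -1):
        path.append(tuple(back[t, path[-1][0], path[-1][1]]))
    return path[::-1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, N = 20, 4                       # 20 frames, 4 phone states in the word
    path = adt_forced_alignment(rng.normal(size=(T, N)),
                                rng.normal(size=(T, N)))
    for t, (i, j) in enumerate(path):
        print(f"t={t:2d}  audio state={i}  video state={j}  asynchrony={j-i}")

Under this toy coupling, a stream that falls behind becomes more likely to advance and a stream that pulls ahead is held back, so asynchrony can arise (as in anticipatory coarticulation, where the visual onset precedes the acoustic one) but stays bounded within the word, which is the qualitative behavior the abstract attributes to the ADT model.
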
@inproceedings{Louis2010,
abstract = {The phenomenon of anticipatory coarticulation provides a basis for the observed asynchrony between the acoustic and visual onsets of phones in certain linguistic contexts. This type of asynchrony is typically not explicitly modeled in audio-visual speech models. In this work, we study within-word audiovisual asynchrony using manual labels of words in which theory suggests that audio-visual asynchrony should occur, and show that these hand labels confirm the theory. We then introduce a new statistical model of audio-visual speech, the asynchrony-dependent transition (ADT) model. This model allows asynchrony between audio and video states within word boundaries, where the audio and video state transitions depend not only on the state of that modality, but also on the instantaneous asynchrony. The ADT model outperforms a baseline synchronous model in mimicking the hand labels in a forced alignment task, and its behavior as parameters are changed conforms to our expectations about anticipatory coarticulation. The same model could be used for speech recognition, although here we consider it only for the task of forced alignment for linguistic analysis. {\textcopyright} 2010 ISCA.},
address = {Makuhari, Japan},
author = {Terry, Louis H. and Livescu, Karen and Pierrehumbert, Janet B. and Katsaggelos, Aggelos K.},
booktitle = {Interspeech 2010},
doi = {10.21437/Interspeech.2010-711},
keywords = {Anticipatory coarticulation,Audio-visual asynchrony,Audio-visual speech recognition,Dynamic Bayesian networks},
month = {sep},
pages = {2682--2685},
publisher = {ISCA},
title = {{Audio-visual anticipatory coarticulation modeling by human and machine}},
url = {https://www.isca-speech.org/archive/interspeech_2010/terry10_interspeech.html},
year = {2010}
}
