Lip-Reading with Limited-Data Network. Fernandez-Lopez, A. & Sukno, F. M. In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, September 2019.
The development of Automatic Lip-Reading (ALR) systems is currently dominated by Deep Learning (DL) approaches. However, DL systems generally face two main issues related to the amount of data and the complexity of the model. To find a balance between the amount of available training data and the number of parameters of the model, in this work we introduce an end-to-end ALR system that combines CNNs and LSTMs and can be trained without large-scale databases. To this end, we propose to split the training by modules, automatically generating weak labels per frame, termed visual units. These weak visual units are representative enough to guide the CNN to extract meaningful features that, when combined with the context provided by the temporal module, are sufficiently informative to train an ALR system in a very short time and with no need for manual labeling. The system is evaluated on the well-known OuluVS2 database to perform sentence-level classification. We obtain an accuracy of 91.38%, which is comparable to state-of-the-art results but, differently from most previous approaches, we do not require the use of external training data.
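The sketch below is not the authors' code; it only illustrates, under assumed layer sizes, the two-stage idea summarized in the abstract: a frame-level CNN that can first be trained against weak "visual unit" labels, whose features are then fed to an LSTM for sentence-level classification. The number of visual units (50) and of sentence classes (10) are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn


class FrameCNN(nn.Module):
    """CNN over single mouth-region frames; the unit head is used only
    during the weakly supervised per-frame training stage."""

    def __init__(self, num_visual_units=50, feat_dim=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, feat_dim), nn.ReLU(),
        )
        self.unit_head = nn.Linear(feat_dim, num_visual_units)  # weak-label classifier

    def forward(self, frames):                # frames: (B, 1, H, W)
        return self.features(frames)          # (B, feat_dim)


class LipReader(nn.Module):
    """CNN features per frame + LSTM over time + sentence classifier."""

    def __init__(self, cnn, feat_dim=256, hidden=256, num_sentences=10):
        super().__init__()
        self.cnn = cnn
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_sentences)

    def forward(self, clips):                 # clips: (B, T, 1, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.lstm(feats)
        return self.classifier(h[-1])         # (B, num_sentences)


if __name__ == "__main__":
    model = LipReader(FrameCNN())
    logits = model(torch.randn(2, 30, 1, 64, 64))  # 2 clips of 30 frames
    print(logits.shape)                            # torch.Size([2, 10])
```

In this reading, the CNN would first be fit on per-frame visual-unit labels via `unit_head`, then frozen or fine-tuned while the LSTM and sentence classifier are trained on whole clips.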
