Adaptive facial expression recognition using inter-modal top-down context. Sarvadevabhatla, R. K., Benovoy, M., Musallam, S., & Ng-Thow-Hing, V. In Proceedings of the 13th International Conference on Multimodal Interfaces, ICMI '11, pages 27–34, New York, NY, USA, 2011. ACM.
The role of context in recognizing a person's affect is increasingly studied. In particular, context arising from the presence of multi-modal information such as faces, speech, and head pose has been used in recent studies to recognize facial expressions. In most approaches, the modalities are considered independently, and the effect of one modality on another, which we call inter-modal influence (e.g., speech or head pose modifying the facial appearance), is not modeled. In this paper, we describe a system that utilizes context from the presence of such inter-modal influences to recognize facial expressions. To do so, we use 2-D contextual masks which are activated within the facial expression recognition pipeline depending on the prevailing context. We also describe a framework called the Context Engine. The Context Engine offers a scalable mechanism for extending the current system to address additional modes of context that may arise during human-machine interactions. Results on standard data sets demonstrate the utility of modeling inter-modal contextual effects in recognizing facial expressions.
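To make the masking idea concrete, below is a minimal, illustrative sketch of how a 2-D contextual mask might gate facial regions before expression features are scored. It is not the paper's implementation: the mask shapes, weights, contexts, and all function names are hypothetical assumptions chosen only to show the mechanism of context-dependent down-weighting.

```python
import numpy as np

FACE_SHAPE = (64, 64)  # assumed resolution of an aligned, normalized face crop

def make_mask(context: str) -> np.ndarray:
    """Return a 2-D weight mask in [0, 1] selected by the prevailing context.

    The contexts and weights here are illustrative, not from the paper.
    """
    mask = np.ones(FACE_SHAPE)
    if context == "speaking":
        # Speech moves the mouth independently of expression, so suppress
        # the lower third of the face before feature extraction.
        mask[42:, :] = 0.2
    elif context == "head_turned":
        # A turned head self-occludes one side; suppress the far half.
        mask[:, 32:] = 0.2
    return mask

def masked_features(face: np.ndarray, context: str) -> np.ndarray:
    """Weight per-pixel appearance features by the active contextual mask."""
    return face * make_mask(context)

# Usage: the masked crop would then feed the usual expression classifier.
face_crop = np.random.rand(*FACE_SHAPE)  # stand-in for a real aligned face
features = masked_features(face_crop, "speaking")
```

The design point is simply that context selects which facial regions the recognizer trusts; the expression classifier downstream is unchanged.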
