Adaptive facial expression recognition using inter-modal top-down context. Sarvadevabhatla, R. K., Benovoy, M., Musallam, S., & Ng-Thow-Hing, V. In Proceedings of the 13th International Conference on Multimodal Interfaces (ICMI '11), pages 27–34, New York, NY, USA, 2011. ACM.

Abstract: The role of context in recognizing a person's affect is being increasingly studied. In particular, context arising from the presence of multi-modal information such as faces, speech and head pose has been used in recent studies to recognize facial expressions. In most approaches, the modalities are considered independently, and the effect of one modality on the other, which we call inter-modal influence (e.g. speech or head pose modifying the facial appearance), is not modeled. In this paper, we describe a system that utilizes context from the presence of such inter-modal influences to recognize facial expressions. To do so, we use 2-D contextual masks which are activated within the facial expression recognition pipeline depending on the prevailing context. We also describe a framework called the Context Engine. The Context Engine offers a scalable mechanism for extending the current system to address additional modes of context that may arise during human-machine interactions. Results on standard data sets demonstrate the utility of modeling inter-modal contextual effects in recognizing facial expressions.
@inproceedings{Sarvadevabhatla:2011:AFE:2070481.2070488,
author = {Sarvadevabhatla, Ravi Kiran and Benovoy, Mitchel and Musallam, Sam and Ng-Thow-Hing, Victor},
title = {Adaptive facial expression recognition using inter-modal top-down context},
booktitle = {Proceedings of the 13th International Conference on Multimodal Interfaces},
series = {ICMI '11},
year = {2011},
isbn = {978-1-4503-0641-6},
location = {Alicante, Spain},
pages = {27--34},
numpages = {8},
doi = {10.1145/2070481.2070488},
acmid = {2070488},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {context, facial expression recognition, human-computer interaction, mask, multi-modal},
url = {http://npl.mcgill.ca/Papers/Adaptive Facial Expression Recognition Using Inter-modal top-down context.pdf},
abstract = {The role of context in recognizing a person's affect is being increasingly studied. In particular, context arising from the presence of multi-modal information such as faces, speech and head pose has been used in recent studies to recognize facial expressions. In most approaches, the modalities are considered independently, and the effect of one modality on the other, which we call inter-modal influence (e.g. speech or head pose modifying the facial appearance), is not modeled. In this paper, we describe a system that utilizes context from the presence of such inter-modal influences to recognize facial expressions. To do so, we use 2-D contextual masks which are activated within the facial expression recognition pipeline depending on the prevailing context. We also describe a framework called the Context Engine. The Context Engine offers a scalable mechanism for extending the current system to address additional modes of context that may arise during human-machine interactions. Results on standard data sets demonstrate the utility of modeling inter-modal contextual effects in recognizing facial expressions.},
}
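
The abstract is the only technical description available here, but its core mechanism can be sketched: a context-to-mask registry (standing in for the Context Engine) selects 2-D masks that modulate a facial feature map before it reaches the expression classifier, and new context modes extend the system simply by registering more masks. The Python below is a minimal, hypothetical illustration of that idea; every name, shape, and weight is an assumption, not the authors' implementation.

import numpy as np

FEATURE_SHAPE = (64, 64)  # assumed size of the 2-D facial feature map

def make_mask(region_rows: slice, weight: float) -> np.ndarray:
    """Down-weight a horizontal band of the feature map, e.g. the mouth
    region while the subject is speaking (illustrative assumption)."""
    mask = np.ones(FEATURE_SHAPE)
    mask[region_rows, :] = weight
    return mask

# Context -> mask registry; adding a new context mode means registering
# another mask, mirroring the scalability the Context Engine provides.
CONTEXT_MASKS = {
    "speaking": make_mask(slice(40, 64), 0.2),   # suppress mouth region
    "head_tilt": make_mask(slice(0, 20), 0.5),   # down-weight upper face
}

def apply_context(features: np.ndarray, active_contexts: list[str]) -> np.ndarray:
    """Combine the masks for all currently active contexts and use the
    result to modulate the facial features element-wise."""
    combined = np.ones(FEATURE_SHAPE)
    for ctx in active_contexts:
        combined *= CONTEXT_MASKS.get(ctx, np.ones(FEATURE_SHAPE))
    return features * combined

# Usage: the masked features would then feed the expression classifier.
features = np.random.rand(*FEATURE_SHAPE)
masked = apply_context(features, ["speaking"])

In this reading, the masks encode inter-modal influence (speech or head pose altering facial appearance) as a reweighting of facial evidence, so regions made unreliable by another modality contribute less to the recognized expression.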
{"_id":"mjS8y5zQrgJXawiB6","bibbaseid":"sarvadevabhatla-benovoy-musallam-ngthowhing-adaptivefacialexpressionrecognitionusingintermodaltopdowncontext-2011","downloads":0,"creationDate":"2015-12-06T14:27:16.112Z","title":"Adaptive facial expression recognition using inter-modal top-down context","author_short":["Sarvadevabhatla, R. K.","Benovoy, M.","Musallam, S.","Ng-Thow-Hing, V."],"year":2011,"bibtype":"inproceedings","biburl":"http://npl.mcgill.ca/npl_pub.bib","bibdata":{"bibtype":"inproceedings","type":"inproceedings","author":[{"propositions":[],"lastnames":["Sarvadevabhatla"],"firstnames":["Ravi","Kiran"],"suffixes":[]},{"propositions":[],"lastnames":["Benovoy"],"firstnames":["Mitchel"],"suffixes":[]},{"propositions":[],"lastnames":["Musallam"],"firstnames":["Sam"],"suffixes":[]},{"propositions":[],"lastnames":["Ng-Thow-Hing"],"firstnames":["Victor"],"suffixes":[]}],"title":"Adaptive facial expression recognition using inter-modal top-down context","booktitle":"Proceedings of the 13th international conference on multimodal interfaces","series":"ICMI '11","year":"2011","isbn":"978-1-4503-0641-6","location":"Alicante, Spain","pages":"27–34","numpages":"8","url":"http://npl.mcgill.ca/Papers/Adaptive Facial Expression Recognition Using Inter-modal top-down context.pdf","doi":"10.1145/2070481.2070488","acmid":"2070488","publisher":"ACM","address":"New York, NY, USA","keywords":"context, facial expression recognition, human-computer interaction, mask, multi-modal","abstract":"The role of context in recognizing a person's affect is being increasingly studied. In particular, context arising from the presence of multi-modal information such as faces, speech and head pose has been used in recent studies to recognize facial expressions. In most approaches, the modalities are independently considered and the effect of one modality on the other, which we call inter-modal influence (e.g. speech or head pose modifying the facial appearance) is not modeled. In this paper, we describe a system that utilizes context from the presence of such inter-modal influences to recognize facial expressions. To do so, we use 2-D contextual masks which are activated within the facial expression recognition pipeline depending on the prevailing context. We also describe a framework called the Context Engine. The Context Engine offers a scalable mechanism for extending the current system to address additional modes of context that may arise during human-machine interactions. 
Results on standard data sets demonstrate the utility of modeling inter-modal contextual effects in recognizing facial expressions.","bibtex":"@inproceedings{Sarvadevabhatla:2011:AFE:2070481.2070488,\r\n author = {Sarvadevabhatla, Ravi Kiran and Benovoy, Mitchel and Musallam, Sam and Ng-Thow-Hing, Victor},\r\n title = {Adaptive facial expression recognition using inter-modal top-down context},\r\n booktitle = {Proceedings of the 13th international conference on multimodal interfaces},\r\n series = {ICMI '11},\r\n year = {2011},\r\n isbn = {978-1-4503-0641-6},\r\n location = {Alicante, Spain},\r\n pages = {27--34},\r\n numpages = {8},\r\n url = {http://doi.acm.org/10.1145/2070481.2070488},\r\n doi = {10.1145/2070481.2070488},\r\n acmid = {2070488},\r\n publisher = {ACM},\r\n address = {New York, NY, USA},\r\n keywords = {context, facial expression recognition, human-computer interaction, mask, multi-modal},\r\n URL = {http://npl.mcgill.ca/Papers/Adaptive Facial Expression Recognition Using Inter-modal top-down context.pdf},\r\n abstract = {The role of context in recognizing a person's affect is being increasingly studied. In particular, context arising from the presence of multi-modal information such as faces, speech and head pose has been used in recent studies to recognize facial expressions. In most approaches, the modalities are independently considered and the effect of one modality on the other, which we call inter-modal influence (e.g. speech or head pose modifying the facial appearance) is not modeled. In this paper, we describe a system that utilizes context from the presence of such inter-modal influences to recognize facial expressions. To do so, we use 2-D contextual masks which are activated within the facial expression recognition pipeline depending on the prevailing context. We also describe a framework called the Context Engine. The Context Engine offers a scalable mechanism for extending the current system to address additional modes of context that may arise during human-machine interactions. Results on standard data sets demonstrate the utility of modeling inter-modal contextual effects in recognizing facial expressions.},\r\n} \r\n\r\n","author_short":["Sarvadevabhatla, R. 
K.","Benovoy, M.","Musallam, S.","Ng-Thow-Hing, V."],"key":"Sarvadevabhatla:2011:AFE:2070481.2070488","id":"Sarvadevabhatla:2011:AFE:2070481.2070488","bibbaseid":"sarvadevabhatla-benovoy-musallam-ngthowhing-adaptivefacialexpressionrecognitionusingintermodaltopdowncontext-2011","role":"author","urls":{"Paper":"http://npl.mcgill.ca/Papers/Adaptive Facial Expression Recognition Using Inter-modal top-down context.pdf"},"keyword":["context","facial expression recognition","human-computer interaction","mask","multi-modal"],"downloads":0,"html":""},"search_terms":["adaptive","facial","expression","recognition","using","inter","modal","top","down","context","sarvadevabhatla","benovoy","musallam","ng-thow-hing"],"keywords":["context","facial expression recognition","human-computer interaction","mask","multi-modal"],"authorIDs":["3APtrXiH9D6Sugg5S","5456e6c28b01c8193000003d","5de6ee04eab4b7de010000b8","5df85284a0ca62df01000155","5dfbfd85b371afde01000035","5e038868d6cccbdf010000a9","5e03f786ee776fdf01000092","5e044647705486df01000072","5e044c4c705486df010000a8","5e0d4034ae5827df01000034","5e16c634dc7739de0100018b","5e1f735de8f5ddde01000218","5e282ccce6485dde0100005e","5e2fdca34e91a9df01000040","5e3248d5e45eb5df01000098","5e38fec9dc5b8ade01000075","5e3cce3e5cd237de01000015","5e449674ec14b3de0100013e","5e47c30a6c5c7ede010000c5","5e4c85cf5cc521f20100002b","5e4f26b3aa67a8de0100001d","5e5519660096f9de0100004d","5e5aaa3c6ec9eadf010000fc","5e65ab90d92058de01000012","5uANKngwQNgDBJk48","63rnshvzHwPkccxCf","67Qb7nkpgYzGmWAw6","7XMCNhjvTzuQqoZDu","DfNkEqhogCcLbR5RZ","J72QG3kPqnsB5TNzn","KZXG3SLgMFcG2YjRt","N3CLXkjiZrQL56j2a","N4Zn2xfmMtrdXCWqi","NjdfgtcJmCo8qxJAu","QyiqKQyXiaQD8k8Zw","RZc9oSG6uEs9KJzBq","S3snNRagphP9JnhgM","Sh2NJyfRbYhDGzm5L","X8T8fzH4bgF6Paikt","X8mHogWqn7f4r9Fn3","YB8dtNxzhSvqQdPQN","aAB2maSnND2hLoWsS","aNsvE4ywwuxv3qKYG","dnHFuQZPga8cmyBgQ","imSFWwhPTSKo9C7qC","jBerWPDjJqaKYgAbF","nL6XK8skXzQuJx74W","qFxLc4SHtkrxviF4m","qNnPZGcibfD4WfsvM","qpkyYRmkY9AAZQ7jp","sWf3nFEZ3ueLQQEou","sgkCP3aEkHRYDEgic","tCaQuQGCLsWwe5Yaw","uE9esaoKnfDqXKvnq","uKcmsym9Db5Rg5k7x","urAPRxM9nDukL2izo"],"dataSources":["2uX35iRShYyWRpd2E"]}