Adaptive facial expression recognition using inter-modal top-down context. Sarvadevabhatla, R. K., Benovoy, M., Musallam, S., & Ng-Thow-Hing, V. In Proceedings of the 13th International Conference on Multimodal Interfaces (ICMI '11), pages 27--34, New York, NY, USA, 2011. ACM. Abstract: The role of context in recognizing a person's affect is being increasingly studied. In particular, context arising from the presence of multi-modal information such as faces, speech and head pose has been used in recent studies to recognize facial expressions. In most approaches, the modalities are independently considered and the effect of one modality on the other, which we call inter-modal influence (e.g. speech or head pose modifying the facial appearance), is not modeled. In this paper, we describe a system that utilizes context from the presence of such inter-modal influences to recognize facial expressions. To do so, we use 2-D contextual masks which are activated within the facial expression recognition pipeline depending on the prevailing context. We also describe a framework called the Context Engine. The Context Engine offers a scalable mechanism for extending the current system to address additional modes of context that may arise during human-machine interactions. Results on standard data sets demonstrate the utility of modeling inter-modal contextual effects in recognizing facial expressions.
@inproceedings{ Sarvadevabhatla:2011:AFE:2070481.2070488,
author = {Sarvadevabhatla, Ravi Kiran and Benovoy, Mitchel and Musallam, Sam and Ng-Thow-Hing, Victor},
title = {Adaptive facial expression recognition using inter-modal top-down context},
booktitle = {Proceedings of the 13th international conference on multimodal interfaces},
series = {ICMI '11},
year = {2011},
isbn = {978-1-4503-0641-6},
location = {Alicante, Spain},
pages = {27--34},
numpages = {8},
doi = {10.1145/2070481.2070488},
acmid = {2070488},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {context, facial expression recognition, human-computer interaction, mask, multi-modal},
url = {http://npl.mcgill.ca/Papers/Adaptive Facial Expression Recognition Using Inter-modal top-down context.pdf},
abstract = {The role of context in recognizing a person's affect is being increasingly studied. In particular, context arising from the presence of multi-modal information such as faces, speech and head pose has been used in recent studies to recognize facial expressions. In most approaches, the modalities are independently considered and the effect of one modality on the other, which we call inter-modal influence (e.g. speech or head pose modifying the facial appearance) is not modeled. In this paper, we describe a system that utilizes context from the presence of such inter-modal influences to recognize facial expressions. To do so, we use 2-D contextual masks which are activated within the facial expression recognition pipeline depending on the prevailing context. We also describe a framework called the Context Engine. The Context Engine offers a scalable mechanism for extending the current system to address additional modes of context that may arise during human-machine interactions. Results on standard data sets demonstrate the utility of modeling inter-modal contextual effects in recognizing facial expressions.}
}
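The core idea in the abstract, that a 2-D contextual mask is activated within the recognition pipeline to re-weight facial regions affected by another modality, can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the mask values, the coarse 4x4 region grid, and the context labels ("speaking", "head_turned") are all hypothetical choices made for the example.

```python
# Illustrative sketch of inter-modal contextual masking (hypothetical
# values; not the paper's actual code). A 2-D mask, chosen by the
# prevailing context, modulates per-region expression features before
# classification.
import numpy as np

def make_mask(context, shape=(4, 4)):
    """Return a 2-D weight mask over a coarse grid of facial regions."""
    mask = np.ones(shape)
    if context == "speaking":
        # Down-weight the mouth region (bottom rows), whose appearance
        # is driven by speech rather than by the expression itself.
        mask[2:, :] = 0.3
    elif context == "head_turned":
        # Down-weight the half of the face occluded by the head turn.
        mask[:, 2:] = 0.3
    return mask

def apply_context(features, context):
    """Modulate per-region expression features by the contextual mask."""
    return features * make_mask(context, features.shape)

# Usage: per-region feature responses, re-weighted for a speaking subject.
features = np.random.rand(4, 4)
weighted = apply_context(features, "speaking")
```

The masked features would then feed the expression classifier as usual; the paper's Context Engine generalizes the context-to-mask selection step so that new context modes can be added without changing the pipeline.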
Downloads: 10