Model for predicting perception of facial action unit activation using virtual humans. McDonnell, R., Zibrek, K., Carrigan, E., & Dahyot, R. Computers & Graphics, 100:81-92, 2021. Winner of the 2022 Graphics Replicability Stamp Initiative (GRSI) best paper award; GitHub: https://github.com/Roznn/facial-blendshapes
Blendshape facial rigs are used extensively in the industry for facial animation of virtual humans. However, storing and manipulating large numbers of facial meshes (blendshapes) is costly in terms of memory and computation for gaming applications. Blendshape rigs are comprised of sets of semantically-meaningful expressions, which govern how expressive the character will be, often based on Action Units from the Facial Action Coding System (FACS). However, the relative perceptual importance of blendshapes has not yet been investigated. Research in Psychology and Neuroscience has shown that our brains process faces differently than other objects, so we postulate that the perception of facial expressions will be feature-dependent rather than based purely on the amount of movement required to make the expression. Therefore, we believe that perception of blendshape visibility will not be reliably predicted by numerical calculations of the difference between the expression and the neutral mesh. In this paper, we explore the noticeability of blendshapes under different activation levels, and present new perceptually-based models to predict the perceptual importance of blendshapes. The models predict visibility based on commonly-used geometry and image-based metrics.
@article{McDonnell2021,
title = {Model for predicting perception of facial action unit activation using virtual humans},
journal = {Computers \& Graphics},
doi = {10.1016/j.cag.2021.07.022},
volume = {100},
pages = {81-92},
year = {2021},
note = {Winner of the 2022 Graphics Replicability Stamp Initiative (GRSI) best paper award; GitHub: https://github.com/Roznn/facial-blendshapes},
issn = {0097-8493},
url = {https://roznn.github.io/facial-blendshapes/CAG2021.pdf},
author = {Rachel McDonnell and Katja Zibrek and Emma Carrigan and Rozenn Dahyot},
keywords = {facial action unit, perception, virtual character},
abstract = {Blendshape facial rigs are used extensively in the industry for facial animation of
virtual humans. However, storing and manipulating large numbers of facial meshes
(blendshapes) is costly in terms of memory and computation for gaming applications.
Blendshape rigs are comprised of sets of semantically-meaningful expressions, which
govern how expressive the character will be, often based on Action Units from the Facial
Action Coding System (FACS). However, the relative perceptual importance of blendshapes has not yet been investigated. Research in Psychology and Neuroscience has
shown that our brains process faces differently than other objects so we postulate that
the perception of facial expressions will be feature-dependent rather than based purely
on the amount of movement required to make the expression. Therefore, we believe that
perception of blendshape visibility will not be reliably predicted by numerical calculations of the difference between the expression and the neutral mesh. In this paper, we
explore the noticeability of blendshapes under different activation levels, and present
new perceptually-based models to predict perceptual importance of blendshapes. The
models predict visibility based on commonly-used geometry and image-based metrics.}
}
{"_id":"jbwSZ79LawRgduD8w","bibbaseid":"mcdonnell-zibrek-carrigan-dahyot-modelforpredictingperceptionoffacialactionunitactivationusingvirtualhumans-2021","author_short":["McDonnell, R.","Zibrek, K.","Carrigan, E.","Dahyot, R."],"bibdata":{"bibtype":"article","type":"article","title":"Model for predicting perception of facial action unit activation using virtual humans","journal":"Computers & Graphics ","doi":"10.1016/j.cag.2021.07.022","volume":"100","pages":"81-92","year":"2021","note":"Winner 2022 Graphics Replicability Stamp Initiative (GRSI) best paper award; Github: https://github.com/Roznn/facial-blendshapes","issn":"0097-8493","url":"https://roznn.github.io/facial-blendshapes/CAG2021.pdf","author":[{"firstnames":["Rachel"],"propositions":[],"lastnames":["McDonnell"],"suffixes":[]},{"firstnames":["Katja"],"propositions":[],"lastnames":["Zibrek"],"suffixes":[]},{"firstnames":["Emma"],"propositions":[],"lastnames":["Carrigan"],"suffixes":[]},{"firstnames":["Rozenn"],"propositions":[],"lastnames":["Dahyot"],"suffixes":[]}],"keywords":"facial action unit, perception, virtual character","abstract":"Blendshape facial rigs are used extensively in the industry for facial animation of virtual humans. However, storing and manipulating large numbers of facial meshes (blendshapes) is costly in terms of memory and computation for gaming applications. Blendshape rigs are comprised of sets of semantically-meaningful expressions, which govern how expressive the character will be, often based on Action Units from the Facial Action Coding System (FACS). However, the relative perceptual importance of blendshapes has not yet been investigated. Research in Psychology and Neuroscience has shown that our brains process faces differently than other objects so we postulate that the perception of facial expressions will be feature-dependent rather than based purely on the amount of movement required to make the expression. Therefore, we believe that perception of blendshape visibility will not be reliably predicted by numerical calculations of the difference between the expression and the neutral mesh. In this paper, we explore the noticeability of blendshapes under different activation levels, and present new perceptually-based models to predict perceptual importance of blendshapes. The models predict visibility based on commonly-used geometry and image-based metrics.","bibtex":"@article{McDonnell2021,\ntitle = {Model for predicting perception of facial action unit activation using virtual humans},\njournal = {Computers \\& Graphics }, \ndoi = {10.1016/j.cag.2021.07.022},\nvolume = {100}, \npages = {81-92}, \nyear = {2021}, \nnote = {Winner 2022 Graphics Replicability Stamp Initiative (GRSI) best paper award; Github: https://github.com/Roznn/facial-blendshapes}, \nissn = {0097-8493},\nurl = {https://roznn.github.io/facial-blendshapes/CAG2021.pdf}, \nauthor = {Rachel McDonnell and Katja Zibrek and Emma Carrigan and Rozenn Dahyot}, \nkeywords = {facial action unit, perception, virtual character},\nabstract = {Blendshape facial rigs are used extensively in the industry for facial animation of\nvirtual humans. However, storing and manipulating large numbers of facial meshes\n(blendshapes) is costly in terms of memory and computation for gaming applications.\nBlendshape rigs are comprised of sets of semantically-meaningful expressions, which\ngovern how expressive the character will be, often based on Action Units from the Facial\nAction Coding System (FACS). 
However, the relative perceptual importance of blendshapes has not yet been investigated. Research in Psychology and Neuroscience has\nshown that our brains process faces differently than other objects so we postulate that\nthe perception of facial expressions will be feature-dependent rather than based purely\non the amount of movement required to make the expression. Therefore, we believe that\nperception of blendshape visibility will not be reliably predicted by numerical calculations of the difference between the expression and the neutral mesh. In this paper, we\nexplore the noticeability of blendshapes under different activation levels, and present\nnew perceptually-based models to predict perceptual importance of blendshapes. The\nmodels predict visibility based on commonly-used geometry and image-based metrics.}\n}","author_short":["McDonnell, R.","Zibrek, K.","Carrigan, E.","Dahyot, R."],"key":"McDonnell2021","id":"McDonnell2021","bibbaseid":"mcdonnell-zibrek-carrigan-dahyot-modelforpredictingperceptionoffacialactionunitactivationusingvirtualhumans-2021","role":"author","urls":{"Paper":"https://roznn.github.io/facial-blendshapes/CAG2021.pdf"},"keyword":["facial action unit","perception","virtual character"],"metadata":{"authorlinks":{}},"downloads":5},"bibtype":"article","biburl":"https://raw.githubusercontent.com/Roznn/Roznn.github.io/master/works.bib","dataSources":["LyGNQYGRFw8k9gPrK","dtJ7afty6nTMHhqAE","HtFtxsuf6j4Ensd9G","Jspzw8BckMpqxabqN"],"keywords":["facial action unit","perception","virtual character"],"search_terms":["model","predicting","perception","facial","action","unit","activation","using","virtual","humans","mcdonnell","zibrek","carrigan","dahyot"],"title":"Model for predicting perception of facial action unit activation using virtual humans","year":2021,"downloads":5}