Model for predicting perception of facial action unit activation using virtual humans. McDonnell, R., Zibrek, K., Carrigan, E., & Dahyot, R. Computers & Graphics, 100:81–92, 2021. Winner of the 2022 Graphics Replicability Stamp Initiative (GRSI) best paper award.
Blendshape facial rigs are used extensively in industry for facial animation of virtual humans. However, storing and manipulating large numbers of facial meshes (blendshapes) is costly in terms of memory and computation for gaming applications. Blendshape rigs comprise sets of semantically meaningful expressions, often based on Action Units from the Facial Action Coding System (FACS), which govern how expressive the character will be. However, the relative perceptual importance of blendshapes has not yet been investigated. Research in psychology and neuroscience has shown that our brains process faces differently from other objects, so we postulate that the perception of facial expressions will be feature-dependent rather than based purely on the amount of movement required to make the expression. Therefore, we believe that the perceived visibility of a blendshape will not be reliably predicted by numerical calculations of the difference between the expression mesh and the neutral mesh. In this paper, we explore the noticeability of blendshapes under different activation levels, and present new perceptually-based models to predict the perceptual importance of blendshapes. The models predict visibility from commonly-used geometry- and image-based metrics.
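To illustrate the kind of geometry-based metric the abstract refers to, a minimal sketch follows: the mean per-vertex Euclidean displacement between an expression mesh and the neutral mesh. This is a generic baseline metric, not the paper's actual model or code; the function name and toy meshes are hypothetical, and the paper's point is precisely that such raw numerical differences alone do not reliably predict perceived visibility.

```python
import numpy as np

def mean_vertex_displacement(neutral, expression):
    """Mean Euclidean distance between corresponding vertices of two meshes.

    A simple geometry-based difference metric of the kind the paper
    compares perceptual data against (hypothetical illustration, not the
    authors' implementation). Both inputs are (N, 3) vertex arrays with
    identical topology, as is standard for blendshape rigs.
    """
    neutral = np.asarray(neutral, dtype=float)
    expression = np.asarray(expression, dtype=float)
    # Per-vertex displacement vectors, their lengths, then the mean.
    return np.linalg.norm(expression - neutral, axis=1).mean()

# Toy 3-vertex mesh: one vertex moves 0.3 units along y, the rest are static.
neutral = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
expression = [[0.0, 0.0, 0.0], [1.0, 0.3, 0.0], [0.0, 1.0, 0.0]]
print(mean_vertex_displacement(neutral, expression))  # 0.1
```

A perceptual model, by contrast, would weight such displacements by facial region (e.g. eyes and mouth) rather than treating all vertices equally.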
