Decoding Driver Intention Cues: Exploring Non-verbal Communication for Human-Centered Automotive Interfaces. Faramarzian, M., Pardo, J., Mandel, I., Rakotonirainy, A., Ju, W., & Schroeter, R. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI '25), New York, NY, USA, 2025. Association for Computing Machinery.
@inproceedings{10.1145/3706598.3713635,
author = {Faramarzian, Mohammad and Pardo, Jorge and Mandel, Ilan and Rakotonirainy, Andry and Ju, Wendy and Schroeter, Ronald},
title = {Decoding Driver Intention Cues: Exploring Non-verbal Communication for Human-Centered Automotive Interfaces},
year = {2025},
isbn = {9798400713941},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3706598.3713635},
doi = {10.1145/3706598.3713635},
abstract = {In emerging "driver-less" automated vehicles (AVs), the intuitive communication that exists between human drivers and passengers no longer exists, which can lead to reduced trust and acceptance in passengers if they are unclear about what the AV intends to do. This paper contributes the foundational understanding of how passengers naturally decode drivers’ non-verbal cues about their intended action to inform intuitive Human-Machine Interface (HMI) designs that try to emulate those cues. Our study investigates what cues passengers perceive, their saliency, and interpretation through a mixed-method approach combining field observations, experience sampling, and auto-confrontation interviews with 30 driver-passenger pairs. Analysis of posture, head/eye movements, and vestibular sensations revealed four categories of intention cues: awareness, interaction, vestibular, and habitual. These findings provide empirical foundations for designing AV interfaces that mirror natural human communication patterns. We discuss implications for designing anthropomorphic HMIs that could enhance trust, predictability, and user experience in AVs.},
booktitle = {Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems},
articleno = {782},
numpages = {13},
keywords = {Intention Cues, HMIs, Human-centric Design, Implicit Interaction, Anthropomorphism},
series = {CHI '25}
}