Gaze-contingent Auditory Displays for Improved Spatial Attention. Vinnikov, M., Allison, R. S., & Fernandes, S. ACM TOCHI, 24(3):19.1-19.38, 2017. doi: 10.1145/3067822

Abstract: Virtual reality simulations of group social interactions are important for many applications, including the virtual treatment of social phobias, crowd and group simulation, collaborative virtual environments, and entertainment. In such scenarios, audio cues are often impoverished compared to the real world. As a result, users cannot rely on the subtle spatial audio-visual cues that guide attention and enable effective social interactions in real-world situations. We explored whether gaze-contingent audio enhancement techniques, driven by inferring audio-visual attention in virtual displays, could be used to enable effective communication in cluttered auditory virtual environments. In all of our experiments, we hypothesized that visual attention could be used to modulate the quality and intensity of sounds from multiple sources so as to select spatial sound sources efficiently and naturally. For this purpose, we built a gaze-contingent display that tracked a user's gaze in real time and modified the volume of the speakers' voices contingent on the current region of overt attention. We compared six different techniques for sound modulation against a base condition providing no attentional modulation of sound, evaluating them in terms of source recognition and preference in a set of user studies. Overall, users liked the ability to control the sounds with their eyes. They felt that a rapid change in attenuation with attention, but not the elimination of competing sounds (partial rather than absolute selection), was most natural. In conclusion, gaze-contingent auditory displays offer potential for simulating rich, natural social and other interactions in virtual environments. They should be considered for improving both performance and fidelity in applications involving social-behaviour scenarios or multiple audio sources of information.
@article{Vinnikov:sf,
abstract = {Virtual reality simulations of group social interactions are important for many applications, including the virtual treatment of social phobias, crowd and group simulation, collaborative virtual environments, and entertainment. In such scenarios, audio cues are often impoverished compared to the real world. As a result, users cannot rely on the subtle spatial audio-visual cues that guide attention and enable effective social interactions in real-world situations. We explored whether gaze-contingent audio enhancement techniques, driven by inferring audio-visual attention in virtual displays, could be used to enable effective communication in cluttered auditory virtual environments. In all of our experiments, we hypothesized that visual attention could be used to modulate the quality and intensity of sounds from multiple sources so as to select spatial sound sources efficiently and naturally. For this purpose, we built a gaze-contingent display that tracked a user's gaze in real time and modified the volume of the speakers' voices contingent on the current region of overt attention. We compared six different techniques for sound modulation against a base condition providing no attentional modulation of sound, evaluating them in terms of source recognition and preference in a set of user studies. Overall, users liked the ability to control the sounds with their eyes. They felt that a rapid change in attenuation with attention, but not the elimination of competing sounds (partial rather than absolute selection), was most natural. In conclusion, gaze-contingent auditory displays offer potential for simulating rich, natural social and other interactions in virtual environments. They should be considered for improving both performance and fidelity in applications involving social-behaviour scenarios or multiple audio sources of information.},
author = {Vinnikov, M. and Allison, R. S. and Fernandes, S.},
date-added = {2016-12-30 04:23:15 +0000},
date-modified = {2017-05-08 02:05:25 +0000},
doi = {10.1145/3067822},
journal = {{ACM TOCHI}},
keywords = {Eye Movements \& Tracking},
number = {3},
pages = {19.1--19.38},
title = {Gaze-contingent Auditory Displays for Improved Spatial Attention},
url-1 = {https://doi.org/10.1145/3067822},
volume = {24},
year = {2017}}
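
To make the described technique concrete, here is a minimal sketch of gaze-contingent volume attenuation in Python. It is an assumed design based only on the abstract, not the authors' implementation: every constant, name, and the linear roll-off shape is an illustrative assumption, and the paper itself compared six different modulation techniques rather than specifying one.

# A minimal sketch of gaze-contingent partial attenuation (assumed design,
# not the paper's implementation). Sources near the gaze direction play at
# full volume; others roll off toward a nonzero floor ("partial" rather than
# "absolute" selection), and gains move rapidly toward their targets each frame.

FOCUS_RADIUS_DEG = 10.0   # assumed: full volume within this gaze angle
ROLLOFF_RANGE_DEG = 30.0  # assumed: angular width of the attenuation ramp
ATTENUATION_FLOOR = 0.3   # partial selection; absolute selection would use 0.0
SMOOTHING = 0.5           # per-frame interpolation factor; higher = faster change

def target_gain(gaze_angle_deg: float) -> float:
    """Map the angle between gaze and a sound source to a gain in [floor, 1]."""
    if gaze_angle_deg <= FOCUS_RADIUS_DEG:
        return 1.0
    # Linear roll-off toward the floor outside the focus region.
    rolloff = max(0.0, 1.0 - (gaze_angle_deg - FOCUS_RADIUS_DEG) / ROLLOFF_RANGE_DEG)
    return ATTENUATION_FLOOR + (1.0 - ATTENUATION_FLOOR) * rolloff

def update_gains(gains: dict[str, float], gaze_angles: dict[str, float]) -> dict[str, float]:
    """One display-frame update: move each source's gain toward its target."""
    return {src: g + SMOOTHING * (target_gain(gaze_angles[src]) - g)
            for src, g in gains.items()}

# Example: gaze rests on "alice"; "bob" and "carol" are attenuated, not muted.
gains = {"alice": 1.0, "bob": 1.0, "carol": 1.0}
angles = {"alice": 2.0, "bob": 25.0, "carol": 60.0}
for _ in range(5):  # a few display frames
    gains = update_gains(gains, angles)
print({k: round(v, 2) for k, v in gains.items()})
# -> {'alice': 1.0, 'bob': 0.66, 'carol': 0.32}

The nonzero ATTENUATION_FLOOR is what distinguishes the partial selection that participants reportedly preferred from absolute selection, which would silence competing sources entirely.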
{"_id":"7i6apNZ72gLgnnsYS","bibbaseid":"vinnikov-allison-fernandes-gazecontingentauditorydisplaysforimprovedspatialattention-2017","downloads":0,"creationDate":"2018-11-16T15:32:55.636Z","title":"Gaze-contingent Auditory Displays for Improved Spatial Attention","author_short":["Vinnikov, M.","Allison, R. S.","Fernandes, S."],"year":2017,"bibtype":"article","biburl":"https://bibbase.org/network/files/ibWG96BS4w7ibooE9","bibdata":{"bibtype":"article","type":"article","abstract":"Virtual reality simulations of group social interactions are important for many applications including the virtual treatment of social phobias, crowd and group simulation, collaborative virtual environments and entertainment. In such scenarios, when compared to the real world audio cues are often impoverished. As a result, users cannot rely on subtle spatial audio-visual cues that guide attention and enable effective social interactions in real world situations. We explored whether gaze-contingent audio enhancement techniques driven by inferring audio-visual attention in virtual displays could be used to enable effective communication in cluttered audio virtual environments. In all of our experiments, we hypothesized that visual attention could be used as a tool to modulate the quality and intensity of sounds from multiple sources to efficiently and naturally select spatial sound sources. For this purpose, we built a gaze-contingent display that allowed tracking of a user's gaze in real-time and modifying the volume of the speakers' voices contingent on the current region of overt attention. We compared six different techniques for sound modulation with a base condition providing no attentional modulation of sound. The techniques were compared in terms of source recognition and preference in a set of user studies. Overall, we observed that users liked the ability to control the sounds with their eyes. They felt that a rapid change in attenuation with attention but not the elimination of competing sounds (partial rather than absolute selection) was most natural. In conclusion, audio gaze-contingent displays offer potential for simulating rich, natural social and other interactions in virtual environments. They should be considered for improving both performance and fidelity in applications related to social behaviour scenarios or when the user needs to work with multiple audio sources of information.","author":[{"propositions":[],"lastnames":["Vinnikov"],"firstnames":["M."],"suffixes":[]},{"propositions":[],"lastnames":["Allison"],"firstnames":["R.","S."],"suffixes":[]},{"propositions":[],"lastnames":["Fernandes"],"firstnames":["S."],"suffixes":[]}],"booktitle":"ACM CHI","date-added":"2016-12-30 04:23:15 +0000","date-modified":"2017-05-08 02:05:25 +0000","doi":"10.1145/3067822","journal":"ACM TOCHI","keywords":"Eye Movements & Tracking","number":"3","pages":"19.1-19.38","title":"Gaze-contingent Auditory Displays for Improved Spatial Attention","url-1":"https://doi.org/10.1145/3067822","volume":"24","year":"2017","bibtex":"@article{Vinnikov:sf,\n\tabstract = {Virtual reality simulations of group social interactions are important for many applications including the virtual treatment of social phobias, crowd and group simulation, collaborative virtual environments and entertainment. In such scenarios, when compared to the real world audio cues are often impoverished. As a result, users cannot rely on subtle spatial audio-visual cues that guide attention and enable effective social interactions in real world situations. 
We explored whether gaze-contingent audio enhancement techniques driven by inferring audio-visual attention in virtual displays could be used to enable effective communication in cluttered audio virtual environments. In all of our experiments, we hypothesized that visual attention could be used as a tool to modulate the quality and intensity of sounds from multiple sources to efficiently and naturally select spatial sound sources. For this purpose, we built a gaze-contingent display that allowed tracking of a user's gaze in real-time and modifying the volume of the speakers' voices contingent on the current region of overt attention. We compared six different techniques for sound modulation with a base condition providing no attentional modulation of sound. The techniques were compared in terms of source recognition and preference in a set of user studies. Overall, we observed that users liked the ability to control the sounds with their eyes. They felt that a rapid change in attenuation with attention but not the elimination of competing sounds (partial rather than absolute selection) was most natural. In conclusion, audio gaze-contingent displays offer potential for simulating rich, natural social and other interactions in virtual environments. They should be considered for improving both performance and fidelity in applications related to social behaviour scenarios or when the user needs to work with multiple audio sources of information.},\n\tauthor = {Vinnikov, M. and Allison, R. S. and Fernandes, S.},\n\tbooktitle = {ACM CHI},\n\tdate-added = {2016-12-30 04:23:15 +0000},\n\tdate-modified = {2017-05-08 02:05:25 +0000},\n\tdoi = {10.1145/3067822},\n\tjournal = {{ACM TOCHI}},\n\tkeywords = {Eye Movements & Tracking},\n\tnumber = {3},\n\tpages = {19.1-19.38},\n\ttitle = {Gaze-contingent Auditory Displays for Improved Spatial Attention},\n\turl-1 = {http://dx.doi.org/10.1145/3067822},\n\tvolume = {24},\n\tyear = {2017},\n\turl-1 = {https://doi.org/10.1145/3067822}}\n\n\n\n","author_short":["Vinnikov, M.","Allison, R. S.","Fernandes, S."],"key":"Vinnikov:sf","id":"Vinnikov:sf","bibbaseid":"vinnikov-allison-fernandes-gazecontingentauditorydisplaysforimprovedspatialattention-2017","role":"author","urls":{"-1":"https://doi.org/10.1145/3067822"},"keyword":["Eye Movements & Tracking"],"metadata":{"authorlinks":{"allison, r":"https://percept.eecs.yorku.ca/bibase%20pubs.shtml"}},"downloads":0},"search_terms":["gaze","contingent","auditory","displays","improved","spatial","attention","vinnikov","allison","fernandes"],"keywords":["eye movements & tracking"],"authorIDs":["vnY8GQ5AKXHNi7dqd"],"dataSources":["kmmXSosvtyJQxBtzs","BPKPSXjrbMGteC59J","MpMK4SvZzj5Fww5vJ","YbBWRH5Fc7xRr8ghk","szZaibkmSiiQBFQG8","DoyrDTpJ7HHCtki3q","JaoxzeTFRfvwgLoCW","XKwRm5Lx8Z9bzSzaP","AELuRZBpnp7nRDaqw"]}