Flow parsing gain depends on self- and object-motion directions. Guo, H. & Allison, R. S. In Visually Induced Motion Sensations (VIMS 2024), pages 18, 2024.
@incollection{Guo:2024xw,
abstract = {When we move through the environment, the egocentric direction of objects changes, producing optic flow. Optic flow must be decomposed to perceive object motion during self-motion (Rushton & Warren, Current Biology, 15(14), 542--543, 2005), a process called flow parsing. Most real and realistic VR environments contain abundant depth and distance cues, including size and binocular disparities. Little is known about their roles in flow parsing. We designed two experiments with our wide-field stereoscopic environment. Participants observed target motions during visually simulated self-motion and indicated the direction of target motion with respect to the scene depicting a large room (Experiment 1) or a cluster of 3D objects (Experiment 2). Both sagittal and lateral target motions and self-motions were simulated. During lateral locomotion through both environments, flow parsing gains were significantly lower for laterally compared to sagittally moving targets. However, during sagittal locomotion, laterally moving targets had much higher flow parsing gains than sagittally moving targets. Geometrically, collision with an eccentric target is only possible if it moves orthogonally to the direction of self-motion, which might explain the sensitivity toward these motions. The perception of possible contact stems from the visual cues of distance and depth, such as binocular disparity and object size, and the change in these signals (e.g. looming, change in disparity, interocular velocity difference). This means that depth and distance cues such as binocular disparity and object size may have important roles in perceiving world-relative object motion during self-motion.},
annote = {VIMS 2024 Oct 20-22, 2024 Toronto},
author = {Guo, H. and Allison, R. S.},
booktitle = {Visually Induced Motion Sensations (VIMS 2024)},
date-added = {2024-11-20 10:32:02 -0500},
date-modified = {2024-11-20 10:32:02 -0500},
keywords = {Optic flow & Self Motion (also Locomotion & Aviation)},
pages = {18},
title = {Flow parsing gain depends on self- and object-motion directions},
url-1 = {https://www.vims2024.com/_files/ugd/a6c816_fbf66ebe0b61456fb193da233c8ebd40.pdf},
year = {2024}}