Publication list generated by bibbase.org from www.cse.yorku.ca/percept/papers/self.bib
You can create a new website with this list, or embed it in an existing web page by copying and pasting any of the following snippets.
JavaScript (easiest):

<script src="https://bibbase.org/show?bib=www.cse.yorku.ca%2Fpercept%2Fpapers%2Fself.bib&group0=year&group1=type&folding=1&commas=true&noBootstrap=1&jsonp=1"></script>
PHP:

<?php
$contents = file_get_contents("https://bibbase.org/show?bib=www.cse.yorku.ca%2Fpercept%2Fpapers%2Fself.bib&group0=year&group1=type&folding=1&commas=true&noBootstrap=1&jsonp=1");
print_r($contents);
?>
iFrame (not recommended):

<iframe src="https://bibbase.org/show?bib=www.cse.yorku.ca%2Fpercept%2Fpapers%2Fself.bib&group0=year&group1=type&folding=1&commas=true&noBootstrap=1&jsonp=1"></iframe>
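If you would rather load the list asynchronously than use a blocking script include, the rendered HTML can also be fetched at page load. The following is a minimal sketch, assuming the same BibBase URL returns an HTML fragment when the jsonp=1 parameter is omitted (check the BibBase documentation to confirm); the bibbase-list container id is an arbitrary placeholder.

<div id="bibbase-list">Loading...</div>
<script>
// Sketch: fetch the rendered publication list and inject it into the page.
// Assumption: the show endpoint returns plain HTML when jsonp=1 is omitted.
var url = "https://bibbase.org/show?bib=www.cse.yorku.ca%2Fpercept%2Fpapers%2Fself.bib" +
          "&group0=year&group1=type&folding=1&commas=true&noBootstrap=1";
fetch(url)
  .then(function (res) {
    if (!res.ok) throw new Error("BibBase request failed: " + res.status);
    return res.text();
  })
  .then(function (html) {
    document.getElementById("bibbase-list").innerHTML = html;
  })
  .catch(function (err) {
    document.getElementById("bibbase-list").textContent = "Could not load publications.";
    console.error(err);
  });
</script>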
For more details, see the documentation.
2024 (1)

  article (1)

Testing 'differences in virtual and physical head pose' and 'subjective vertical conflict' accounts of cybersickness.
Palmisano, S., Stephenson, L., Davies, R. G., Kim, J., & Allison, R. S.
Virtual Reality, 28(22): 22.1-22.28. 2024. doi:10.1007/s10055-023-00909-6
When we move our head while in virtual reality, the display lag will generate differences in virtual and physical head pose (known as DVP). While DVP are a major trigger for cybersickness, theories differ as to exactly how they constitute a provocative sensory conflict. Here we test two competing theories: the subjective vertical conflict theory and the DVP hypothesis. Thirty-two HMD users made continuous, oscillatory head rotations in either pitch or yaw while viewing a large virtual room. Additional display lag was applied selectively to the simulation about the same, or an orthogonal, axis to the instructed head rotation (generating Yaw-Lag+Yaw-Move, Yaw-Lag+Pitch-Move, Pitch-Lag+Yaw-Move, and Pitch-Lag+Pitch-Move conditions). At the end of each trial: 1) participants rated their sickness severity and scene instability; and 2) their head tracking data was used to estimate DVP throughout the trial. Consistent with our DVP hypothesis, but contrary to subjective vertical conflict theory, Yaw-Lag+Yaw-Move conditions induced significant cybersickness, which was similar in magnitude to that in the Pitch-Lag+Pitch-Move conditions. When extra lag was added along the same axis as the instructed head movement, DVP was found to predict 73 to 76% of the variance in sickness severity (with measures of the spatial magnitude and the temporal dynamics of the DVP both contributing significantly). Ratings of scene instability were also found to predict sickness severity. Taken together, these findings suggest that: 1) cybersickness can be predicted from objective estimates of the DVP; and 2) provocative stimuli for this sickness can be identified from subjective reports of scene instability.

2023 (3)

  article (4)

Effects of constant and time-varying display lag on DVP and cybersickness when making head-movements in virtual reality.
Palmisano, S., Allison, R. S., Davies, R. G., Wagner, P., & Kim, J.
International Journal of Human-Computer Interaction. 2023. doi:10.1080/10447318.2023.2291613

Perceiving depth and motion in depth from successive occlusion.
Lee, A. R. I., Wilcox, L. M., & Allison, R. S.
Journal of Vision, 23(12): Article 2. 2023. doi:10.1167/jov.23.12.2

Vection underwater illustrates the limitations of neutral buoyancy as a microgravity analog.
Bury, N., Jenkin, M. R. M., Allison, R. S., Herpers, R., & Harris, L. R.
NPJ Microgravity, 9: 42.1-42.10. 2023. doi:10.1038/s41526-023-00282-3

Differences in virtual and physical head orientation predict sickness during active head-mounted display-based virtual reality.
Palmisano, S., Allison, R. S., Teixeira, J., & Kim, J.
Virtual Reality, 27(2). 2023. doi:10.1007/s10055-022-00732-5
During head-mounted display (HMD)-based virtual reality (VR), head movements and motion-to-photon-based display lag generate differences in our virtual and physical head pose (referred to as DVP). We propose that large-amplitude, time-varying patterns of DVP serve as the primary trigger for cybersickness under such conditions. We test this hypothesis by measuring the sickness and estimating the DVP experienced under different levels of experimentally imposed display lag (ranging from 0 to 222 ms on top of the VR system's 4 ms baseline lag). On each trial, seated participants made continuous, oscillatory head rotations in yaw, pitch or roll while viewing a large virtual room with an Oculus Rift CV1 HMD (head movements were timed to a computer-generated metronome set at either 1.0 or 0.5 Hz). After the experiment, their head-tracking data were used to objectively estimate the DVP during each trial. The mean, peak, and standard deviation of these DVP data were then compared to the participant's cybersickness ratings for that trial. Irrespective of the axis, or the speed, of the participant's head movements, the severity of their cybersickness was found to increase with each of these three DVP summary measures. In line with our DVP hypothesis, cybersickness consistently increased with the amplitude and the variability of our participants' DVP. DVP similarly predicted their conscious experiences during HMD VR—such as the strength of their feelings of spatial presence and their perception of the virtual scene's stability.

  incollection (8)

The Effect of Gravity on Human Self-Motion Perception: Implications for Space Mission Safety and Training.
Bury, N., Harris, L. R., Jenkin, M. R. M., Allison, R. S., Felsner, S., & Herpers, R.
In Deutscher Luft- und Raumfahrtkongress. 2023.

Increasing parallactic change compresses depth and perceived distance.
Teng, X., Wilcox, L. M., & Allison, R. S.
In Proceedings of the Scottish Vision Group Meeting. 2023. https://psyresearch.abertay.ac.uk/SVG2023/Abstracts.htm
Motion parallax provides information for both absolute distance and relative depth judgments. For a given head motion and given depth interval, the parallactic change is inversely proportional to the square of egocentric distance. In this presentation we will discuss analysis of a subset of data from a larger study. On each trial, monocularly-viewing observers made left-right swaying head motions at 1.0 Hz to induce the corresponding virtual motion shown on a head mounted display. A gain distortion was applied to the virtual motion, ranging from half to twice the physical motion. While moving, observers adjusted the angle of a vertical fold stimulus presented at distances from 1.3 to 6.0 m so it appeared to be at 90 deg. After the adjustment was made another virtual environment was presented. While standing stationary observers matched a pole to the apparent distance of the peak of the previously seen fold. On average observers adjusted the folds to have smaller depth as gain increased or distance decreased. Estimates of target distance also declined with increasing gain. As both distance and gain affect the amount of parallactic change we analysed to what extent our results could be explained by this variable alone. Our analysis confirmed that both of these measures varied consistently with parallactic change. We will discuss the implications of these findings for depth cue scaling, and for anticipated tolerance to tracking errors in virtual reality systems.

Vection does not facilitate flow parsing.
Guo, H., & Allison, R. S.
In Journal of Vision (VSS Abstracts), volume 23, pages 4721. 2023. doi:10.1167/jov.23.9.4721
The perception of self-motion can be induced or enhanced by exposure to visual stimuli such as optic flow. It has also been shown that consistent stereoscopic information enhances visually-induced self-motion perception (vection). Conversely, does vection affect the observer's ability to parse the flow? And if so, how does it interact with binocular stereopsis? To investigate, we presented participants with a scene including a target, a fixation cross, floor, ceiling, and pillars to provide optic flow using a wide-field, stereoscopic, immersive display. Participants virtually moved forward or backward at 1.4 m/s, either while continuously viewing the scene to produce vection or when it was only displayed during the 500 ms trial (the no-vection condition). The target was presented initially at eye level, and moved obliquely upward in a sagittal-parallel plane. The target's velocity in depth was adjusted by adaptive staircases to obtain the bias and sensitivity. The task was to indicate whether the target moved obliquely forward or backward in the scene. The stimuli were presented in three viewing conditions: stereoscopic, synoptic, and monocular, to explore the possible interaction between vection and stereoscopic information. While all participants verbally reported that they experienced more vection in the vection condition, the results showed that the bias was slightly but significantly (F(1,127)=5.0217, p<.027) higher with vection (1.279 m/s) than without vection (1.219 m/s). This means vection did not help to reduce the flow parsing bias. Furthermore, we did not find any significant interaction effect between vection conditions and viewing conditions.

Conflicting ordinal depth information interferes with visually-guided reaching.
Au, D., Allison, R. S., & Wilcox, L. M.
In Journal of Vision (VSS Abstracts), volume 23, pages 5154. 2023. doi:10.1167/jov.23.9.5154
Normally we integrate ordinal (occlusion) and metric (e.g., binocular disparity) depth information to obtain a unified percept of 3D layout. Further, quantitative depth must be available to the proprioceptive and motor systems to support interaction with nearby objects. Here we take a step towards understanding how occlusion and binocular disparity combine in the control of visually-guided reaching. We developed a novel conflict paradigm set in a real-world environment in which participants placed a virtual ring around a post positioned at one of several distances (34, 41.5, and 49 cm). The ring was fixed to the index fingertip (with lateral and vertical offsets to avoid finger collisions with the post). If the ring collided with the post, the ring changed colour and the trial restarted. We assessed performance with monocular and binocular viewing using both virtual and physical posts (N=10). The consistency of the occlusion was manipulated such that when the post was physical it never occluded the ring, even when correctly positioned around the post. This resulted in conflicting disparity and occlusion information between the post and further portion of the ring. Conversely, in virtual post conditions, occlusion of the ring by the post was consistent. We found that ring placement was less precise when occlusion and disparity information were inconsistent. Participants also required more attempts to complete the task under the conflict compared to consistent conditions. While this pattern of results was similar for binocular and monocular viewing, observers performed worse and required more attempts when doing the task with one eye. Our results underscore the importance of binocular depth information in performing visuo-motor tasks. However, even when such precise quantitative depth information is available, ordinal depth cues can significantly impact both perception and action, despite these latter cues only providing binary signals to the success of visually-guided action.

Increasing motion parallax gain compresses space and 3D object shape.
Teng, X., Allison, R. S., & Wilcox, L. M.
In Journal of Vision (VSS Abstracts), volume 23, pages 5015. 2023. doi:10.1167/jov.23.9.5015
When moving about the world, humans rely on visual, proprioceptive and vestibular cues to perceive depth and distance. Normally, these sources of information are consistent. However, what happens if we receive conflicting information about how far we have moved? A previous study reported that at distances of 1.3 to 1.5 m, portrayed binocular 3D shape was not affected by motion gain; however, apparent distance and monocular depth settings were influenced. In our study, we extended the range of distances to 1.5 to 6 m. A VR headset was used to display gain distortions binocularly and monocularly to one eye. Observers swayed from side to side through 20 cm at 0.5 Hz to the beat of a metronome. The simulated virtual motion was varied by a gain of 0.5 to 2.0 times the physical motion. Observers first adjusted a vertical fold until its sides appeared to form a 90-degree angle. The fold then disappeared and they indicated its remembered distance by adjusting the position of a virtual pole. In the monocular condition as gain increased, observers provided increasingly compressed fold depth settings at 1.5 and 3 but not at 6 m. Under binocular viewing, increasing gain compressed distance but not object shape settings. To ensure that the weak binocular effects were not due to failure to perceive the gain, we separately assessed gain discrimination thresholds using the fold stimulus. We found that observers were sensitive to the manipulation over this range and tended to perceive a gain of 1.1 as having no motion distortion under both viewing conditions. It is clear from our data that monocular viewing of kinesthetic/visual mismatch results in significant variations in portrayed depth of the fold. These effects can be somewhat mitigated by increasing viewing distance, but even more so by viewing with both eyes.

The Illusion of Tilt: Does Your Sex Define Your Perception of Upright?
Bury, N., Harris, L. R., Jenkin, M., Allison, R. S., Frett, T., Felsner, S., Schellen, E., & Herpers, R.
In International Multisensory Research Forum, pages 171. 2023.

The effect of postural orientation around the pitch axis on the haptic perception of vertical.
Schellen, E., Ark, E., Jenkin, M., Allison, R. S., Bury, N., Herpers, R., & Harris, L. R.
In International Multisensory Research Forum, pages 093. 2023.

Precision and Bias in the Perception of Object Size in Microgravity.
Jorges, B., Bury, N., McManus, M., Bansal, A., Allison, R. S., Jenkin, M. R. M., & Harris, L. R.
In International Multisensory Research Forum, pages 066. 2023.
Gravity influences the perception of size although the mechanism remains unclear. Some authors have suggested that gravity might serve as a reference frame for visual judgements. If so, then in the absence of this persistent frame of reference size judgements should be less precise in microgravity. Twelve astronauts (6 women and 6 men) were tested before space flight, within 6 days of arrival on the ISS, approximately 90 days after arrival, within 6 days of return to Earth, and more than 60 days after return. They judged the height of a visually fronto-parallel square presented in VR at 6, 12 and 18 m relative to a bar held in their hands aligned with the long axis of the body. The cube's height was varied trial to trial via an adaptive staircase. We found no significant differences in precision or bias between any of the space sessions and before they flew. However, when collapsing across test sessions, astronauts perceived the cube to be significantly larger in space than when upright (p = 0.01) or supine (p = 0.017) on Earth, which was mainly driven by the cube being perceived as smaller (p = 0.002) after having been back on Earth for 60 days compared to their first session. The lack of effect of microgravity on precision makes it unlikely that the gravity-as-reference-frame hypothesis can explain posture-related perceptual size changes observed on Earth. However, space exposure does seem to create lasting changes in perceptual processing.

  inproceedings (6)

Quantifying display lag and its effects during Head-Mounted Display based Virtual Reality.
Wagner, P., Kim, J., Allison, R. S., & Palmisano, S.
In SA '23: SIGGRAPH Asia 2023 Posters, volume Article 29, pages 1-3. 2023. doi:10.1145/3610542.3626139

Recreating the Water-Level Task in Augmented Reality.
Abadi, R., Wilcox, L. M., & Allison, R. S.
In ICMI '23: 25th ACM International Conference on Multimodal Interaction, pages 622-630. 2023. doi:10.1145/3577190.3614107

Modelling the relationship between the objective measures of car sickness.
Shodipe, O. E., & Allison, R. S.
In 2023 IEEE Canadian Conference on Electrical and Computer Engineering (CCECE), pages 570-575. 2023. doi:10.1109/CCECE58730.2023.1028900

The Subjective Quality of Stereoscopic 3D Video Following Display Stream Compression.
Mohona, S. S., Au, D., Wilcox, L. M., & Allison, R. S.
In IEEE Workshop on Multimedia Signal Processing, pages 1-6. 2023. doi:10.1109/MMSP59012.2023.10337720

Exploring the Impact of Immersion on Situational Awareness and Trust in Remotely Monitored Maritime Autonomous Surface Ships.
Gregor, A., Allison, R. S., & Heffner, K.
In IEEE OCEANS Conference, pages 1-10. 2023. doi:10.1109/OCEANSLimerick52467.2023.10244249
Consistent with the International Maritime Organisation's roadmap for regulating the operation of autonomous surface ships, most concepts of operations for crewed and uncrewed autonomous shipping rely on monitoring and operation from a Remote Control Centre (RCC). The successful execution of such activities requires that operators have adequate Situational Awareness (SA), while avoiding situations of information overload, and the right amount of, or calibrated, Trust in the system. In this study, we examined how operator SA and Trust were affected by different levels of Immersion of the human-machine interface. Simulated RCC interfaces were constructed for a scenario where an autonomous container ship traversed the Arctic escorted by robotic aids. SA, Trust, and Motion Sickness (MS) were tracked over time. Different Virtual Reality (VR) technologies were used to represent three levels of Immersion: Non-Immersive VR (NVR), Semi-Immersive VR (SVR), and Immersive VR (IVR). The results illustrated various trade-offs, with NVR shown to be less taxing, SVR showing several potential benefits for SA, and IVR showing a strong relationship between Trust and SA accuracy, but increased MS. These results suggest that Immersion is an important factor in Situational Awareness and Trust in automation; future research should consider both the extent of Immersion, potential for MS, and the format of delivery (e.g. head-mounted displays versus immersive projection displays). Understanding these trade-offs between levels of Immersion is a requisite step for designing RCCs.

Manipulation of Motion Parallax Gain Distorts Perceived Distance and Object Depth in Virtual Reality.
Teng, X., Allison, R. S., & Wilcox, L. M.
In Proceedings IEEE Virtual Reality 2023, pages 398-408. IEEE VR, 2023. doi:10.1109/VR55154.2023.00055
Virtual reality (VR) is distinguished by the rich, multimodal, immersive sensory information and affordances provided to the user. However, when moving about an immersive virtual world the visual display often conflicts with other sensory cues due to design, the nature of the simulation, or to system limitations (for example impoverished vestibular motion cues during acceleration in racing games). Given that conflicts between sensory cues have been associated with disorientation or discomfort, and theoretically could distort spatial perception, it is important that we understand how and when they are manifested in the user experience. To this end, this set of experiments investigates the impact of mismatch between physical and virtual motion parallax on the perception of the depth of an apparently perpendicular dihedral angle (a fold) and its distance. We applied gain distortions between visual and kinesthetic head motion during lateral sway movements and measured the effect of gain on depth, distance and lateral space compression. We found that under monocular viewing, observers made smaller object depth and distance settings especially when the gain was greater than 1. Estimates of target distance declined with increasing gain under monocular viewing. Similarly, mean set depth decreased with increasing gain under monocular viewing, except at 6.0 m. The effect of gain was minimal when observers viewed the stimulus binocularly. Further, binocular viewing (stereopsis) improved the precision but not necessarily the accuracy of gain perception. Overall, the lateral compression of space was similar in the stereoscopic and monocular test conditions. Taken together, our results show that the use of large presentation distances (at 6 m) combined with binocular cues to depth and distance enhanced humans' tolerance to visual and kinesthetic mismatch.

2022 (3)

  article (2)

The impacts of lens and stereo camera separation on perceived slant in Virtual Reality head-mounted displays.
Tong, J., Wilcox, L. M., & Allison, R. S.
IEEE Transactions on Visualization and Computer Graphics, 28(11): 3759-3766. 2022. doi:10.1109/TVCG.2022.3203098

Shape judgements in natural scenes: Convexity biases vs. stereopsis.
Hartle, B., Sudhama-Joseph, A., Irving, E. L., Allison, R. S., Glaholt, M., & Wilcox, L. M.
Journal of Vision, 22(8): 6.1-6.13. 2022. doi:10.1167/jov.22.8.6

  incollection (21)

Stereoscopic Distortions When Viewing Geometry Does Not Match Inter-Pupillary Distance.
Tong, J., Wilcox, L. M., & Allison, R. S.
In From Picture to Reality, from Observer to Agent. Vision Research Conference, Second Student Centre, York University, June 6-9, 2022, pages 92. 2022. doi:10.25071/10315/39491

Binocular Depth And Distance Cues Enhance Tolerance To Virtual Motion Gain.
Teng, X., Wilcox, L. M., & Allison, R. S.
In From Picture to Reality, from Observer to Agent. Vision Research Conference, Second Student Centre, York University, June 6-9, 2022, pages 91. 2022. doi:10.25071/10315/39491
\n\n\n\n \n \n \"Binocular-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Teng:2022nq,\n\tauthor = {Teng, X. and Wilcox, L. M. and Allison, Robert S},\n\tbooktitle = {From Picture to Reality, from Observer to Agent. Vision Research Conference, Second Student Centre, York University, June 6-9, 2022},\n\tdate-added = {2023-10-14 15:33:50 -0400},\n\tdate-modified = {2023-10-14 15:34:21 -0400},\n\tdoi = {10.25071/10315/39491},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tpages = {91},\n\ttitle = {Binocular Depth And Distance Cues Enhance Tolerance To Virtual Motion Gain},\n\tyear = {2022},\n\turl-1 = {https://doi.org/10.25071/10315/39491}}\n\n
Objective And Subjective Impact Of Chromatic Aberration Compensation On Compression Artifacts. Mohona, S., Au, D., Wilcox, L. M., & Allison, R. S. In From Picture to Reality, from Observer to Agent. Vision Research Conference, Second Student Centre, York University, June 6-9, 2022, pages 77. 2022.
@incollection{Mohona:2022dd,\n\tauthor = {Mohona, S.S. and Au, D. and Wilcox, L. M. and Allison, Robert S},\n\tbooktitle = {From Picture to Reality, from Observer to Agent. Vision Research Conference, Second Student Centre, York University, June 6-9, 2022},\n\tdate-added = {2023-10-14 15:32:45 -0400},\n\tdate-modified = {2023-10-14 15:33:37 -0400},\n\tdoi = {10.25071/10315/39491},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tpages = {77},\n\ttitle = {Objective And Subjective Impact Of Chromatic Aberration Compensation On Compression Artifacts},\n\tyear = {2022},\n\turl-1 = {https://doi.org/10.25071/10315/39491}}\n\n
Simulated Motion In Virtual Environments Affects Cognitive Task Performance. Kio, O. G., & Allison, R. S. In From Picture to Reality, from Observer to Agent. Vision Research Conference, Second Student Centre, York University, June 6-9, 2022, pages 68. 2022.
@incollection{Kio:2022lp,\n\tauthor = {Kio, O. G. and Allison, Robert S},\n\tbooktitle = {From Picture to Reality, from Observer to Agent. Vision Research Conference, Second Student Centre, York University, June 6-9, 2022},\n\tdate-added = {2023-10-14 15:31:49 -0400},\n\tdate-modified = {2023-10-14 15:32:25 -0400},\n\tdoi = {10.25071/10315/39491},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tpages = {68},\n\ttitle = {Simulated Motion In Virtual Environments Affects Cognitive Task Performance},\n\tyear = {2022},\n\turl-1 = {https://doi.org/10.25071/10315/39491}}\n\n
Detectability Of Image Transformations During Eye And Head Movements. Keyvanara, M., & Allison, R. S. In From Picture to Reality, from Observer to Agent. Vision Research Conference, Second Student Centre, York University, June 6-9, 2022, pages 66. 2022.
@incollection{Keyvanara:2022xz,\n\tauthor = {Keyvanara, M. and Allison, Robert S},\n\tbooktitle = {From Picture to Reality, from Observer to Agent. Vision Research Conference, Second Student Centre, York University, June 6-9, 2022},\n\tdate-added = {2023-10-14 15:30:16 -0400},\n\tdate-modified = {2023-10-14 15:30:55 -0400},\n\tdoi = {10.25071/10315/39491},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tpages = {66},\n\ttitle = {Detectability Of Image Transformations During Eye And Head Movements},\n\tyear = {2022},\n\turl-1 = {https://doi.org/10.25071/10315/39491}}\n\n
Effect Of Binocular Disparity On Flow Parsing. Guo, H., & Allison, R. S. In From Picture to Reality, from Observer to Agent. Vision Research Conference, Second Student Centre, York University, June 6-9, 2022, pages 62. 2022.
@incollection{Guo:2022px,\n\tauthor = {Guo, H. and Allison, Robert S},\n\tbooktitle = {From Picture to Reality, from Observer to Agent. Vision Research Conference, Second Student Centre, York University, June 6-9, 2022},\n\tdate-added = {2023-10-14 15:29:03 -0400},\n\tdate-modified = {2023-10-14 15:31:04 -0400},\n\tdoi = {10.25071/10315/39491},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tpages = {62},\n\ttitle = {Effect Of Binocular Disparity On Flow Parsing},\n\tyear = {2022},\n\turl-1 = {https://doi.org/10.25071/10315/39491}}\n\n
Exploring The Impact Of Immersion On Situational Awareness And Trust In Teleoperated Maritime Autonomous Surface Ship. Gregor, A., Allison, R. S., Kio, O. G., & Heffner, K. In From Picture to Reality, from Observer to Agent. Vision Research Conference, Second Student Centre, York University, June 6-9, 2022, pages 61. 2022.
@incollection{Gregor:2022wu,\n\tauthor = {Gregor, A. and Allison, Robert S and Kio, O. G. and Heffner, K.},\n\tbooktitle = {From Picture to Reality, from Observer to Agent. Vision Research Conference, Second Student Centre, York University, June 6-9, 2022},\n\tdate-added = {2023-10-14 15:27:54 -0400},\n\tdate-modified = {2023-10-14 15:28:47 -0400},\n\tdoi = {10.25071/10315/39491},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tpages = {61},\n\ttitle = {Exploring The Impact Of Immersion On Situational Awareness And Trust In Teleoperated Maritime Autonomous Surface Ship},\n\tyear = {2022},\n\turl-1 = {https://doi.org/10.25071/10315/39491}}\n\n
Incompatible Occlusion And Binocular Disparity Cause Systematic Localization Errors In Augmented Reality. Au, D., Tong, J., Allison, R. S., & Wilcox, L. M. In From Picture to Reality, from Observer to Agent. Vision Research Conference, Second Student Centre, York University, June 6-9, 2022, pages 50. 2022.
@incollection{Au:2022ye,\n\tauthor = {Au, D. and Tong, J. and Allison, Robert S and Wilcox, L. M.},\n\tbooktitle = {From Picture to Reality, from Observer to Agent. Vision Research Conference, Second Student Centre, York University, June 6-9, 2022},\n\tdate-added = {2023-10-14 15:26:02 -0400},\n\tdate-modified = {2023-10-14 15:27:06 -0400},\n\tdoi = {10.25071/10315/39491},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tpages = {50},\n\ttitle = {Incompatible Occlusion And Binocular Disparity Cause Systematic Localization Errors In Augmented Reality},\n\tyear = {2022},\n\turl-1 = {https://doi.org/10.25071/10315/39491}}\n\n
Implementing The Water Level Task In Augmented Reality. Abadi, R., & Allison, R. S. In From Picture to Reality, from Observer to Agent. Vision Research Conference, Second Student Centre, York University, June 6-9, 2022, pages 47. 2022.
@incollection{Abadi:2022qo,\n\tauthor = {Abadi, R. and Allison, Robert S},\n\tbooktitle = {From Picture to Reality, from Observer to Agent. Vision Research Conference, Second Student Centre, York University, June 6-9, 2022},\n\tdate-added = {2023-10-14 15:24:36 -0400},\n\tdate-modified = {2023-10-14 15:57:28 -0400},\n\tdoi = {10.25071/10315/39491},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tpages = {47},\n\ttitle = {Implementing The Water Level Task In Augmented Reality},\n\tyear = {2022},\n\turl-1 = {https://doi.org/10.25071/10315/39491}}\n\n
The Influence of Gravity on Perceived Travel Distance in Virtual Reality. Bury, N., Harris, L. R., Jenkin, M. R. M., Allison, R. S., Felsner, S., & Herpers, R. In TeaP 2022 (64th Tagung experimentell arbeitender Psychologinnen; Conference of Experimental Psychologists), pages 241-242. 2022.
@incollection{Bury:2022ij,\n\tannote = {TeaP 2022 (Tagung experimentell arbeitender Psycholog:innen; Conference of Experimental Psychologists) will take place in Cologne from 20-23 of March 2022},\n\tauthor = {Bury, N. and Harris, L. R. and Jenkin, M. R. M. and Allison, R. S. and Felsner, S. and Herpers, R.},\n\tbooktitle = {TeaP 2022 (64th Tagung experimentell arbeitender Psychologinnen; Conference of Experimental Psychologists)},\n\tdate-added = {2023-08-30 11:04:00 -0400},\n\tdate-modified = {2023-08-30 11:04:00 -0400},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tpages = {241-242},\n\ttitle = {The Influence of Gravity on Perceived Travel Distance in Virtual Reality},\n\turl = {https://teap2022.uni-koeln.de/sites/teap2022/user_upload/TeaP2022_AbstractBooklet.pdf},\n\tyear = {2022},\n\turl-1 = {https://teap2022.uni-koeln.de/sites/teap2022/user_upload/TeaP2022_AbstractBooklet.pdf}}\n\n
Binocular cues to depth and distance enhance tolerance to visual and kinesthetic mismatch. Teng, X., Wilcox, L. M., & Allison, R. S. In Journal of Vision (VSS Abstracts), volume 22, pages 3312. 2022.
@incollection{Teng:2022hh,\n\tabstract = {In natural environments, motion parallax (from visual direction and optic flow) supports both depth and distance perception. What happens if we do not know how far we have moved or receive conflicting information? We manipulated motion gain using a VR headset and a two-phase task to assess perceived depth and distance. Observers first viewed a ``fold'' stimulus, a wall-oriented dihedral angle covered in Voronoi texture. The task was to adjust the dihedral angle until it appeared to be 90 degrees (perpendicular). We occluded the top and bottom edges of the fold and varied the width to make the edges of the fold uninformative. On each trial, following the angle adjustment, a second scene appeared which contained a pole that extended from a ground plane. In this phase, the task was to match the position of the pole to the remembered position of the apex of the previously seen fold. We tested observers binocularly and monocularly in two motion conditions (stationary and moving). When moving, observers swayed laterally through 20 cm in time to a 0.5 Hz metronome; the motion gain varied from 0.5 to 2.0 times the actual self-motion. We found that increased gain caused an increase in the adjusted angle or equivalently a decrease in associated depth of the fold, especially when viewed monocularly. In addition, perceived distance decreased with increasing gain, irrespective of viewing condition. That is, the fold was perceived as smaller and closer when gain was larger than 1. The effect of the gain manipulation was much weaker under binocular viewing. These data show that perceptual distortions due to differences between actual and virtual head motion are compensated for by binocular, and to a lesser extent monocular, depth and distance cues. These flexible compensatory mechanisms make the human visual system highly tolerant of visual/kinesthetic mismatch.},\n\tauthor = {Teng, X. and Wilcox, L. M. and Allison, R. S.},\n\tbooktitle = {Journal of Vision (VSS Abstracts)},\n\tdate-added = {2022-12-15 18:24:50 -0500},\n\tdate-modified = {2022-12-15 18:25:24 -0500},\n\tdoi = {10.1167/jov.22.14.3312},\n\tkeywords = {Stereopsis},\n\tpages = {3312},\n\ttitle = {Binocular cues to depth and distance enhance tolerance to visual and kinesthetic mismatch},\n\tvolume = {22},\n\tyear = {2022},\n\turl-1 = {https://doi.org/10.1167/jov.22.14.3312}}\n\n
\n In natural environments, motion parallax (from visual direction and optic flow) supports both depth and distance perception. What happens if we do not know how far we have moved or receive conflicting information? We manipulated motion gain using a VR headset and a two-phase task to assess perceived depth and distance. Observers first viewed a ``fold'' stimulus, a wall-oriented dihedral angle covered in Voronoi texture. The task was to adjust the dihedral angle until it appeared to be 90 degrees (perpendicular). We occluded the top and bottom edges of the fold and varied the width to make the edges of the fold uninformative. On each trial, following the angle adjustment, a second scene appeared which contained a pole that extended from a ground plane. In this phase, the task was to match the position of the pole to the remembered position of the apex of the previously seen fold. We tested observers binocularly and monocularly in two motion conditions (stationary and moving). When moving, observers swayed laterally through 20 cm in time to a 0.5 Hz metronome; the motion gain varied from 0.5 to 2.0 times the actual self-motion. We found that increased gain caused an increase in the adjusted angle or equivalently a decrease in associated depth of the fold, especially when viewed monocularly. In addition, perceived distance decreased with increasing gain, irrespective of viewing condition. That is, the fold was perceived as smaller and closer when gain was larger than 1. The effect of the gain manipulation was much weaker under binocular viewing. These data show that perceptual distortions due to differences between actual and virtual head motion are compensated for by binocular, and to a lesser extent monocular, depth and distance cues. These flexible compensatory mechanisms make the human visual system highly tolerant of visual/kinesthetic mismatch.\n
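A minimal geometric sketch of the gain manipulation described above, assuming a small-angle motion-parallax model (values and names are illustrative; this is not the authors' analysis):

# Parallax bookkeeping for a fold of depth dz viewed at distance D while the
# head translates T; the displayed parallax scales with the applied motion gain.
def depth_from_parallax(parallax, assumed_translation_m, assumed_distance_m):
    # Invert parallax ~ T * dz / D**2 to recover a depth estimate.
    return parallax * assumed_distance_m ** 2 / assumed_translation_m

true_dz, D, T = 0.10, 1.0, 0.20          # 10 cm fold at 1 m; 20 cm lateral sway
for gain in (0.5, 1.0, 2.0):
    displayed_parallax = (gain * T) * true_dz / D ** 2
    dz_hat = depth_from_parallax(displayed_parallax, T, D)
    print(f"gain {gain}: parallax-specified depth {100 * dz_hat:.0f} cm")

Taken alone, this bookkeeping predicts that higher gain should inflate parallax-specified depth; the decrease in perceived depth and distance actually reported suggests observers also rescaled perceived distance, which enters the inversion quadratically and can reverse the net effect.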
Stereoscopic distortions when viewing geometry does not match inter-pupillary distance. Tong, J., Allison, R. S., & Wilcox, L. M. In Journal of Vision (VSS Abstracts), volume 22, pages 3564. 2022.
@incollection{Tong:2022ca,\n\tabstract = {The relationship between depth and binocular cues (disparity and convergence) is defined by the distance separating the two eyes, also known as the inter-pupillary distance (IPD). This relationship is mapped in the visual system through experience and feedback, and adaptively recalibrated as IPD gradually increases during development. However, with the advent of stereoscopic-3D displays, situations may arise in which the visual system views content that is captured or rendered with a camera separation that differs from the viewer's own IPD; without feedback, this will likely result in a systematic and persistent misperception of depth. We tested this prediction using a VR headset in which the inter-axial separation of virtual cameras and the separation between the optics are coupled. Observers (n=15) were asked to adjust the angle between two intersecting textured-surfaces until it appeared to be 90$\\,^{\\circ}$, at each of three viewing distances. In the baseline condition the lens and camera separations matched each observer's IPD. In two `mismatch' conditions (tested in separate blocks) the lens and camera separations were set to the maximum (71 mm) and minimum (59 mm) allowed by the headset. We found that when the lens and camera separation were less than the viewer's IPD they exhibited compression of space; the adjusted angle was smaller than their baseline setting. The reverse pattern was seen when the lens and camera separation were larger than the viewer's IPD. Linear regression analysis supported these conclusions with a significant correlation between the magnitude of IPD mismatch and the deviation of angle adjustment relative to the baseline condition. We show that these results are well explained by a geometric model that considers the scaling of disparity and convergence due to shifts in virtual camera and optical inter-axial separations relative to an observer's IPD.},\n\tauthor = {Tong, J. and Allison, R. S. and Wilcox, L. M.},\n\tbooktitle = {Journal of Vision (VSS Abstracts)},\n\tdate-added = {2022-12-15 18:24:50 -0500},\n\tdate-modified = {2022-12-15 18:25:27 -0500},\n\tdoi = {10.1167/jov.22.14.3564},\n\tkeywords = {Stereopsis},\n\tpages = {3564},\n\ttitle = {Stereoscopic distortions when viewing geometry does not match inter-pupillary distance},\n\tvolume = {22},\n\tyear = {2022},\n\turl-1 = {https://doi.org/10.1167/jov.22.14.3564}}\n\n
The relationship between depth and binocular cues (disparity and convergence) is defined by the distance separating the two eyes, also known as the inter-pupillary distance (IPD). This relationship is mapped in the visual system through experience and feedback, and adaptively recalibrated as IPD gradually increases during development. However, with the advent of stereoscopic-3D displays, situations may arise in which the visual system views content that is captured or rendered with a camera separation that differs from the viewer's own IPD; without feedback, this will likely result in a systematic and persistent misperception of depth. We tested this prediction using a VR headset in which the inter-axial separation of virtual cameras and the separation between the optics are coupled. Observers (n=15) were asked to adjust the angle between two intersecting textured surfaces until it appeared to be 90°, at each of three viewing distances. In the baseline condition the lens and camera separations matched each observer's IPD. In two 'mismatch' conditions (tested in separate blocks) the lens and camera separations were set to the maximum (71 mm) and minimum (59 mm) allowed by the headset. We found that when the lens and camera separation were less than the viewer's IPD they exhibited compression of space; the adjusted angle was smaller than their baseline setting. The reverse pattern was seen when the lens and camera separation were larger than the viewer's IPD. Linear regression analysis supported these conclusions with a significant correlation between the magnitude of IPD mismatch and the deviation of angle adjustment relative to the baseline condition. We show that these results are well explained by a geometric model that considers the scaling of disparity and convergence due to shifts in virtual camera and optical inter-axial separations relative to an observer's IPD.
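The direction of these distortions follows from first-order disparity scaling; a minimal sketch (an illustration only: the authors' geometric model additionally accounts for convergence changes from the coupled optics):

# Rendered disparity scales with the camera separation, but the visual system
# decodes it with the viewer's own IPD, so depth is rescaled by their ratio.
def depth_scale_factor(camera_separation_mm, viewer_ipd_mm):
    # disparity ~ separation * dz / D**2  =>  dz_hat / dz = separation / IPD
    return camera_separation_mm / viewer_ipd_mm

viewer_ipd = 65.0                        # illustrative viewer IPD (mm)
for sep in (59.0, 65.0, 71.0):           # headset minimum, matched, maximum
    print(f"separation {sep} mm -> depth scaled x{depth_scale_factor(sep, viewer_ipd):.2f}")

Separations below the viewer's IPD predict compressed depth, and separations above it expanded depth, the pattern reported above.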
Effect of Binocular Disparity on Detecting Target Motion during Locomotion. Guo, H., & Allison, R. S. In Journal of Vision (VSS Abstracts), volume 22, pages 3575. 2022.
@incollection{Guo:2022qc,\n\tabstract = {During locomotion, optic flow provides important information for detection, estimation and navigation. On the other hand, binocular disparity, which carries compelling depth information, can potentially aid optic flow parsing. We explored the effect of binocular disparity on observers' ability to detect object motion during simulated locomotion. Twelve participants were recruited and tested on our wide-field stereoscopic environment (WISE). The stimulus consisted of four spherical targets hovering in a pillar hallway, and it was presented in stereoscopic, synoptic (binocular but without disparity), and monocular viewing conditions. In each trial, one of the four targets moved either in depth (approaching or receding) or a direction parallel to the frontal plane (contracting or expanding). Participants detected the moving target during a simulated forward walking locomotion in a 4-alternative forced choice task. The locomotion speed was 1.4 m/s, and therefore the target motion was superimposed upon this optic flow. Adaptive staircases were adopted to obtain the thresholds of the target motion speed in each viewing condition. The results to date showed that participants' thresholds in the stereoscopic condition were 20 - 40\\% lower (better) than those in the synoptic condition when detecting approaching targets, t(7) = 3.85, p = .006, receding targets, t(7) = 2.83,p = .025, and contracting targets, t(7) = 2.57, p = .036. Furthermore, only when detecting expanding targets, threshold performance was significantly better in the synoptic condition than that in the monocular condition, t(7) = 2.67, p = .032. These results suggested that during locomotion, binocular disparity facilitates optic flow parsing.},\n\tauthor = {Guo, H. and Allison, R. S.},\n\tbooktitle = {Journal of Vision (VSS Abstracts)},\n\tdate-added = {2022-12-15 18:24:50 -0500},\n\tdate-modified = {2022-12-15 18:31:28 -0500},\n\tdoi = {10.1167/jov.22.14.3575},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tpages = {3575},\n\ttitle = {Effect of Binocular Disparity on Detecting Target Motion during Locomotion},\n\tvolume = {22},\n\tyear = {2022},\n\turl-1 = {https://doi.org/10.1167/jov.22.14.3575}}\n\n
During locomotion, optic flow provides important information for detection, estimation and navigation. On the other hand, binocular disparity, which carries compelling depth information, can potentially aid optic flow parsing. We explored the effect of binocular disparity on observers' ability to detect object motion during simulated locomotion. Twelve participants were recruited and tested on our wide-field stereoscopic environment (WISE). The stimulus consisted of four spherical targets hovering in a pillar hallway, and it was presented in stereoscopic, synoptic (binocular but without disparity), and monocular viewing conditions. In each trial, one of the four targets moved either in depth (approaching or receding) or in a direction parallel to the frontal plane (contracting or expanding). Participants detected the moving target during simulated forward walking locomotion in a 4-alternative forced choice task. The locomotion speed was 1.4 m/s, and therefore the target motion was superimposed upon this optic flow. Adaptive staircases were adopted to obtain thresholds for the target motion speed in each viewing condition. The results to date showed that participants' thresholds in the stereoscopic condition were 20-40% lower (better) than those in the synoptic condition when detecting approaching targets, t(7) = 3.85, p = .006, receding targets, t(7) = 2.83, p = .025, and contracting targets, t(7) = 2.57, p = .036. Furthermore, only when detecting expanding targets was threshold performance significantly better in the synoptic condition than in the monocular condition, t(7) = 2.67, p = .032. These results suggest that during locomotion, binocular disparity facilitates optic flow parsing.
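The abstract does not specify the staircase rule; the sketch below shows a generic 2-down/1-up procedure of the kind described, driving an invented simulated observer (every parameter here is illustrative):

import random

def simulated_observer(speed, threshold):
    # 4AFC detection: 25% guessing rate, approaching 100% well above threshold.
    return random.random() < 0.25 + 0.75 / (1.0 + (threshold / speed) ** 3)

def two_down_one_up(start=1.0, factor=0.8, threshold=0.3, n_reversals=8):
    speed, streak, direction, reversals = start, 0, 0, []
    while len(reversals) < n_reversals:
        if simulated_observer(speed, threshold):
            streak += 1
            if streak == 2:                # two correct in a row: make harder
                streak = 0
                if direction == +1:
                    reversals.append(speed)
                direction = -1
                speed *= factor            # slower target motion is harder
        else:                              # any error: make easier
            streak = 0
            if direction == -1:
                reversals.append(speed)
            direction = +1
            speed /= factor
    return sum(reversals[2:]) / len(reversals[2:])   # mean of late reversals

print(f"estimated motion-speed threshold: {two_down_one_up():.2f} m/s")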
Effects of simulated and perceived motion on cognitive task performance. Kio, G., & Allison, R. S. In Journal of Vision (VSS Abstracts), volume 22, pages 3627. 2022.
@incollection{Kio:2022bx,\n\tabstract = {Compelling simulated motion in virtual environments can induce the sensation of self motion (or vection) in stationary observers. While the usefulness and functional significance of vection is still debated, the literature has shown that perceived magnitude of vection is lower when observers perform attentionally demanding cognitive tasks than when attentional demands are absent. Could simulated motion and the resulting vection experienced in virtual environments in turn affect how observers perform various attention demanding tasks? In this study therefore, we investigated how accurately and rapidly observers could perform attention-demanding aural and visual tasks while experiencing levels of vection-inducing motion in a virtual environment. Seventeen adult observers were exposed to different levels of simulated motion at virtual camera speeds of 0 (stationary), 5, 10 and 15 m/s in a straight virtual corridor rendered through a Vive-Pro Virtual Reality headset. During these simulations, they performed aural or visual discrimination tasks, or no task at all. We recorded the accuracy, the time observers took to respond to each task, and the intensity of vection they reported. Repeated Measures ANOVA showed that levels of simulated motion did not significantly affect accuracy on either task (F(3,48) = 1.469, p = .235 aural; F(3,48) = 1.504, p = .226 visual), but significantly affected the response times on aural tasks (F(3,48) = 4.320, p = .009 aural; F(3,48) = 0.916, p = .440 visual). Observers generally perceived less vection at all levels of motion when they performed visual discrimination tasks compared to when they had no task to perform (F(2,32) = 13.784, p = .038). This suggests that perceived intensities of vection are significantly reduced when people perform attentionally demanding tasks related to visual processing. Conversely, vection intensity or simulated motion speed can affect performance on aural tasks.\n},\n\tauthor = {Kio, G. and Allison, R. S.},\n\tbooktitle = {Journal of Vision (VSS Abstracts)},\n\tdate-added = {2022-12-15 18:24:50 -0500},\n\tdate-modified = {2022-12-15 18:25:21 -0500},\n\tdoi = {10.1167/jov.22.14.3627},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tpages = {3627},\n\ttitle = {Effects of simulated and perceived motion on cognitive task performance},\n\tvolume = {22},\n\tyear = {2022},\n\turl-1 = {https://doi.org/10.1167/jov.22.14.3627}}\n\n
\n Compelling simulated motion in virtual environments can induce the sensation of self motion (or vection) in stationary observers. While the usefulness and functional significance of vection is still debated, the literature has shown that perceived magnitude of vection is lower when observers perform attentionally demanding cognitive tasks than when attentional demands are absent. Could simulated motion and the resulting vection experienced in virtual environments in turn affect how observers perform various attention demanding tasks? In this study therefore, we investigated how accurately and rapidly observers could perform attention-demanding aural and visual tasks while experiencing levels of vection-inducing motion in a virtual environment. Seventeen adult observers were exposed to different levels of simulated motion at virtual camera speeds of 0 (stationary), 5, 10 and 15 m/s in a straight virtual corridor rendered through a Vive-Pro Virtual Reality headset. During these simulations, they performed aural or visual discrimination tasks, or no task at all. We recorded the accuracy, the time observers took to respond to each task, and the intensity of vection they reported. Repeated Measures ANOVA showed that levels of simulated motion did not significantly affect accuracy on either task (F(3,48) = 1.469, p = .235 aural; F(3,48) = 1.504, p = .226 visual), but significantly affected the response times on aural tasks (F(3,48) = 4.320, p = .009 aural; F(3,48) = 0.916, p = .440 visual). Observers generally perceived less vection at all levels of motion when they performed visual discrimination tasks compared to when they had no task to perform (F(2,32) = 13.784, p = .038). This suggests that perceived intensities of vection are significantly reduced when people perform attentionally demanding tasks related to visual processing. Conversely, vection intensity or simulated motion speed can affect performance on aural tasks. \n
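For reference, a repeated-measures ANOVA of this design can be reproduced in a few lines with statsmodels; the file and column names below are hypothetical (one mean response time per subject per simulated-motion speed, in long format):

import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("aural_task_rt.csv")    # columns: subject, speed, rt
result = AnovaRM(df, depvar="rt", subject="subject", within=["speed"]).fit()
print(result)    # F and p for the within-subject speed effect, F(3, 48)-style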
The impact of conflicting ordinal and metric depth information on depth matching. Au, D., Tong, J., Allison, R. S., & Wilcox, L. M. In Journal of Vision (VSS Abstracts), volume 22, pages 3739. 2022.
@incollection{Au:2022ng,\n\tabstract = {Under natural viewing conditions binocular disparity can provide metric depth information; many of the monocular depth cues, such as occlusion, provide depth order only. Nonetheless, when put in conflict there is evidence that occlusion can influence the direction and magnitude of perceived depth from stereopsis. Here we explored the integration of depth information from occlusion and binocular disparity in complex real-world environments using a depth matching paradigm. The virtual stimulus was a green letter `A' presented using a Microsoft HoloLens augmented reality (AR) display and superimposed on a real frontoparallel surface at 1.2 m. The letter was placed at one of eight positions -- between 0.9 and 1.6 m, including the surface location. Observers matched the distance of a probe to the perceived distance of the letter by moving it with a sliding pole. For comparison, observers performed the same task without the physical surface. Our results show that when the surface was absent or the letter was rendered in front of the surface the letter was accurately localized. However, when the letter was rendered beyond the surface, observers progressively underestimated the letter's distance, even though the relative disparity between the probe and the target should have been equally informative at all locations. This pattern of results suggests that 1) observers are unable to ignore conflicts between occlusion and binocular disparity and 2) the occlusion conflict biases the perceived position of the target in the direction of the occluder. Our results are well modelled using a Bayesian ideal observer with an asymmetric likelihood for an occlusion cue representing letter positions in front of vs beyond the surface. In addition to providing insight into the integration of ordinal and metric depth information, these results speak to the impact of such errors in AR on user interactions.\n},\n\tauthor = {Au, D. and Tong, J. and Allison, R. S. and Wilcox, L. M.},\n\tbooktitle = {Journal of Vision (VSS Abstracts)},\n\tdate-added = {2022-12-15 18:24:50 -0500},\n\tdate-modified = {2022-12-15 18:25:17 -0500},\n\tdoi = {10.1167/jov.22.14.3739},\n\tkeywords = {Stereopsis},\n\tpages = {3739},\n\ttitle = {The impact of conflicting ordinal and metric depth information on depth matching},\n\tvolume = {22},\n\tyear = {2022},\n\turl-1 = {https://doi.org/10.1167/jov.22.14.3739}}\n\n
\n Under natural viewing conditions binocular disparity can provide metric depth information; many of the monocular depth cues, such as occlusion, provide depth order only. Nonetheless, when put in conflict there is evidence that occlusion can influence the direction and magnitude of perceived depth from stereopsis. Here we explored the integration of depth information from occlusion and binocular disparity in complex real-world environments using a depth matching paradigm. The virtual stimulus was a green letter `A' presented using a Microsoft HoloLens augmented reality (AR) display and superimposed on a real frontoparallel surface at 1.2 m. The letter was placed at one of eight positions – between 0.9 and 1.6 m, including the surface location. Observers matched the distance of a probe to the perceived distance of the letter by moving it with a sliding pole. For comparison, observers performed the same task without the physical surface. Our results show that when the surface was absent or the letter was rendered in front of the surface the letter was accurately localized. However, when the letter was rendered beyond the surface, observers progressively underestimated the letter's distance, even though the relative disparity between the probe and the target should have been equally informative at all locations. This pattern of results suggests that 1) observers are unable to ignore conflicts between occlusion and binocular disparity and 2) the occlusion conflict biases the perceived position of the target in the direction of the occluder. Our results are well modelled using a Bayesian ideal observer with an asymmetric likelihood for an occlusion cue representing letter positions in front of vs beyond the surface. In addition to providing insight into the integration of ordinal and metric depth information, these results speak to the impact of such errors in AR on user interactions. \n
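A toy version of the asymmetric-likelihood account (invented parameters, not the paper's fitted model): the occlusion cue says little about targets in front of the surface but penalizes positions beyond it, pulling the posterior toward the occluder only on the far side.

import numpy as np

z = np.linspace(0.8, 1.7, 901)                  # candidate target distances (m)
surface, target, sigma = 1.2, 1.45, 0.05        # occluder at 1.2 m; toy values

disparity_like = np.exp(-0.5 * ((z - target) / sigma) ** 2)   # Gaussian disparity cue
occlusion_like = np.where(z <= surface, 1.0,                  # flat in front...
                          np.exp(-(z - surface) / 0.1))       # ...decaying beyond

posterior = disparity_like * occlusion_like
posterior /= np.trapz(posterior, z)
print(f"posterior mean {np.trapz(z * posterior, z):.2f} m for a target at {target} m")

The posterior mean lands short of the true 1.45 m, mimicking the underestimation reported for targets rendered beyond the surface, while targets in front of it would be essentially unbiased.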
Locomotor decision-making altered by different walking interfaces in virtual reality. Kuo, C., & Allison, R. S. In Journal of Vision (VSS Abstracts), volume 22, pages 3826. 2022.
@incollection{Kuo:2022fb,\n\tabstract = {Walking interfaces for Virtual Reality often produce proprioceptive, vestibular and somatosensory signals which conflict with the visual presentation of terrain conditions in virtual environments. We compared locomotion decisions made using a dual joystick gamepad with a walking-in-place metaphor. Each trial presented two choices where the visual path condition differed in one of the following aspects: (a) incline, (b) friction, (c) texture, and (d) width. Users chose one of these paths by using the locomotion interface to walk to a goal. Their decisions were recorded and analyzed as a generalized linear mixed model. The results suggest that the walking-in-place interface produces choices of visual conditions that more often reflect expectations of walking in the real world: decisions that minimize energy expended or risk of injury. Because of this, we can infer that different walking interfaces can produce different results in virtual reality experiments. Therefore, behavioral scientists should be wary that sensory discrepancies between visual presentation and other modalities can negatively affect the ecological validity of studies using virtual reality. Consideration should be taken designing these studies to ensure that sensory inputs are as natural and consistent between modalities as possible.},\n\tauthor = {Kuo, C. and Allison, R. S.},\n\tbooktitle = {Journal of Vision (VSS Abstracts)},\n\tdate-added = {2022-12-15 18:24:50 -0500},\n\tdate-modified = {2022-12-15 18:25:34 -0500},\n\tdoi = {10.1167/jov.22.14.3826},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tpages = {3826},\n\ttitle = {Locomotor decision-making altered by different walking interfaces in virtual reality},\n\tvolume = {22},\n\tyear = {2022},\n\turl-1 = {https://doi.org/10.1167/jov.22.14.3826}}\n\n
\n Walking interfaces for Virtual Reality often produce proprioceptive, vestibular and somatosensory signals which conflict with the visual presentation of terrain conditions in virtual environments. We compared locomotion decisions made using a dual joystick gamepad with a walking-in-place metaphor. Each trial presented two choices where the visual path condition differed in one of the following aspects: (a) incline, (b) friction, (c) texture, and (d) width. Users chose one of these paths by using the locomotion interface to walk to a goal. Their decisions were recorded and analyzed as a generalized linear mixed model. The results suggest that the walking-in-place interface produces choices of visual conditions that more often reflect expectations of walking in the real world: decisions that minimize energy expended or risk of injury. Because of this, we can infer that different walking interfaces can produce different results in virtual reality experiments. Therefore, behavioral scientists should be wary that sensory discrepancies between visual presentation and other modalities can negatively affect the ecological validity of studies using virtual reality. Consideration should be taken designing these studies to ensure that sensory inputs are as natural and consistent between modalities as possible.\n
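A sketch of how such a generalized linear mixed model could be specified for binary path choices (hypothetical data file and column names; the abstract does not give the exact model). statsmodels' Bayesian binomial mixed GLM is one way to include a per-subject random intercept:

import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# One row per trial; chose_easy = 1 if the visually less costly path was taken.
df = pd.read_csv("path_choices.csv")     # subject, interface, attribute, chose_easy
model = BinomialBayesMixedGLM.from_formula(
    "chose_easy ~ interface * attribute",    # fixed effects
    {"subject": "0 + C(subject)"},           # random intercept per subject
    df)
print(model.fit_vb().summary())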
The perception of object size in microgravity. Bansal, A. T., Jorges, B., Bury, N., McManus, M., Allison, R. S., Jenkin, M. R. M., & Harris, L. R. In 2022 Scientific Abstracts: The First Canadian Space Health Research Symposium, pages 1. 2022.
@incollection{Bansal:2022ux,\n\tabstract = {One of the most common, and most complex\nfunctions of the human brain is to perceive our\nown motion. Estimating how far we have travelled\nis a multisensory process, although the relative\ncontributions from our different sensory systems in\nestimating travel distance is still unknown. Testing\nastronauts in microgravity not only allows us to\nparse out the contributions from the different\nsenses more easily, but it can also inform mission\nplanners and trainers about how our perception of\ntravel distance might change in microgravity. Using\nVR, we tested astronauts' (n=12, 6 female)\nperceived travel distance 5 times: once before their\nflight, twice in space (upon arrival and 3 months\nafter), and twice again when they returned back to\nEarth (upon reentry and 2 months after).\nPreliminary results show no differences between\nthe astronauts' estimations of travel distance after\narriving to the ISS, after 3 months in space, or when\nthey returned to Earth. These findings not only\nprovide insights into the sensory contributions\ninvolved in making travel distance estimates, but\nalso indicate that there is no adverse effect of long-\nduration exposure to microgravity on perceived\ntravel distance.},\n\tannote = {The symposium will start early morning on November 17, 2022, and end late afternoon on November 18, 2022 in Calgary},\n\tauthor = {Bansal, A. T. and Jorges, B and Bury, N. and McManus, M. and Allison, R. S. and Jenkin, M. R. M. and Harris, L. R.},\n\tbooktitle = {2022 Scientific Abstracts: The First Canadian Space Health Research Symposium},\n\tdate-added = {2022-11-30 13:51:32 -0500},\n\tdate-modified = {2022-11-30 13:51:32 -0500},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tpages = {1},\n\ttitle = {The perception of object size in microgravity},\n\tyear = {2022}}\n\n
One of the most common, and most complex, functions of the human brain is to perceive our own motion. Estimating how far we have travelled is a multisensory process, although the relative contributions from our different sensory systems in estimating travel distance are still unknown. Testing astronauts in microgravity not only allows us to parse out the contributions from the different senses more easily, but it can also inform mission planners and trainers about how our perception of travel distance might change in microgravity. Using VR, we tested astronauts' (n=12, 6 female) perceived travel distance 5 times: once before their flight, twice in space (upon arrival and 3 months after), and twice again when they returned to Earth (upon reentry and 2 months after). Preliminary results show no differences between the astronauts' estimations of travel distance after arriving at the ISS, after 3 months in space, or when they returned to Earth. These findings not only provide insights into the sensory contributions involved in making travel distance estimates, but also indicate that there is no adverse effect of long-duration exposure to microgravity on perceived travel distance.
The perception of object size in microgravity. Jorges, B., Bury, N., McManus, M., Bansal, A., Allison, R. S., Jenkin, M. R. M., & Harris, L. R. In 2022 Scientific Abstracts: The First Canadian Space Health Research Symposium, pages 8. 2022.
@incollection{Jorges:2022ux,\n\tabstract = {Exposure to microgravity can influence the visual\nperception of object size, however the mechanism\nremains an object of debate. Gravity might serve as\na reference frame in which visual information is\ninterpreted. The absence of gravity should make\nsize judgements then more variable due to the\ninability to anchor these judgements. We tested\nthis hypothesis by assessing accuracy and\nvariability of astronauts' size judgements before,\nduring, and after a six-month or longer\nmicrogravity exposure in orbit. 12 astronauts were\ntested before take-off, within 7 days of arrival on\nthe ISS, around 90 days after arrival, within 7 days\nof return to Earth and at least 60 days after return.\nWe found that variability was, indeed, higher upon\narrival on the ISS (p = 0.03), but not later during\nspace flight. Further, astronauts but not control\nparticipants -- surprisingly -- perceived the object to\nbe significantly smaller (p = 0.04) at their last test\nsession than at their first session, suggesting lasting\nchanges in their perception. Overall, our data\nprovides additional support that gravity may\nindeed serve as a reference frame in which visual\ninput is interpreted for size judgements.},\n\tannote = {The symposium will start early morning on November 17, 2022, and end late afternoon on November 18, 2022 in Calgary},\n\tauthor = {Jorges, B and Bury, N. and McManus, M. and Bansal, A. and Allison, R. S. and Jenkin, M. R. M. and Harris, L. R.},\n\tbooktitle = {2022 Scientific Abstracts: The First Canadian Space Health Research Symposium},\n\tdate-added = {2022-11-30 13:51:32 -0500},\n\tdate-modified = {2022-11-30 13:51:32 -0500},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tpages = {8},\n\ttitle = {The perception of object size in microgravity},\n\tyear = {2022}}\n\n
\n Exposure to microgravity can influence the visual perception of object size, however the mechanism remains an object of debate. Gravity might serve as a reference frame in which visual information is interpreted. The absence of gravity should make size judgements then more variable due to the inability to anchor these judgements. We tested this hypothesis by assessing accuracy and variability of astronauts' size judgements before, during, and after a six-month or longer microgravity exposure in orbit. 12 astronauts were tested before take-off, within 7 days of arrival on the ISS, around 90 days after arrival, within 7 days of return to Earth and at least 60 days after return. We found that variability was, indeed, higher upon arrival on the ISS (p = 0.03), but not later during space flight. Further, astronauts but not control participants – surprisingly – perceived the object to be significantly smaller (p = 0.04) at their last test session than at their first session, suggesting lasting changes in their perception. Overall, our data provides additional support that gravity may indeed serve as a reference frame in which visual input is interpreted for size judgements.\n
From Earth to Space: the Effect of Gravity and Sex on Self-Motion Perception. Bury, N., Harris, L. R., Jenkin, M., Allison, R. S., Felsner, S., & Herpers, R. In From Picture to Reality, from Observer to Agent. Vision Research Conference, Second Student Centre, York University, June 6-9, 2022, pages 53. 2022.
@incollection{bury2022earth,\n\tauthor = {Bury, Nils-Alexander and Harris, Laurence R and Jenkin, Michael and Allison, Robert S and Felsner, Sandra and Herpers, Rainer},\n\tbooktitle = {From Picture to Reality, from Observer to Agent. Vision Research Conference, Second Student Centre, York University, June 6-9, 2022},\n\tdate-added = {2022-07-06 11:47:53 -0400},\n\tdate-modified = {2023-10-14 15:24:05 -0400},\n\tdoi = {10.25071/10315/39491},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tpages = {53},\n\ttitle = {From Earth to Space: the Effect of Gravity and Sex on Self-Motion Perception},\n\tyear = {2022},\n\turl-1 = {https://doi.org/10.25071/10315/39491}}\n\n
Nonlinear analysis of the effects of vision and postural threat on upright stance. Weinberg, S., Palmisano, S., Allison, R. S., & Cleworth, T. In International Society of Posture and Gait Research (ISPGR) World Congress 2022, pages 381-382. 2022.
@incollection{Weinberg:pb,\n\tabstract = {\nBackground and Aim:\nThe ability to control and maintain an upright standing posture is crucial for humans interacting with their environment. Many factors, such as a fear of falling as observed when exposed to a postural threat [1], can cause changes in postural stability. The ability to quantify changes in postural stability is critical to understand psychological (and physiological) effects on balance. Therefore, the goal of the study is to use linear and nonlinear analyses to identify the effects of vision and postural threat on upright stance.\n\nMethods:\nThis study involves re-examining the dataset previously reported [2]. This secondary analysis was conducted as the initial analysis did not examine the sway temporal dynamics. Twenty young healthy adults stood on a force plate mounted to a hydraulic lift at two height conditions, 0.8m (LOW) and 3.2m (HIGH). Both height conditions were performed with both eyes open (EO) and closed (EC). Participants stood quietly for 60 seconds on a force plate, and centre of pressure (COP) was calculated from ground reaction forces and moments. For the linear analyses, anterior-posterior COP root mean square (RMS) and mean power frequency (MPF) were calculated. For the nonlinear analysis, recurrence plots were generated from the COP data. These plots provided a visualization of the timepoints in which the trajectory returns to a location it has visited before. A recurrence quantification analysis (RQA) was then used to quantify the number and duration of recurrences. RQA measures included recurrence rate, determinism, entropy, and average diagonal line length.\n\nResults:\nFor the linear analyses, COP RMS showed no effect of vision or of a vision-height interaction; however, a main effect of height was observed, with sway amplitude decreasing in the HIGH compared to LOW condition. For COP MPF, main effects were found for both height and vision, with frequency increasing in the HIGH compared to LOW condition, as well as increasing in EC compared to EO. For the nonlinear analysis, there was a main effect of both vision and height, with all RQA measures decreasing in the HIGH compared to LOW condition, and decreasing in EC compared to EO.\n\nConclusions:\nBoth linear and nonlinear analyses revealed differences across height and visual conditions. When standing at height, a decrease in amplitude and increase in frequency was observed, thought to resemble a stiffening strategy [1]. The decreases in RQA measures across height and visual conditions may provide additional evidence for a change in postural strategy. These changes observed across conditions might be suggestive of the participant trying to deliberately minimize their sway magnitude, but end up resulting in higher frequency and less predictable sway patterns. Given the nonlinear analysis identifies changes in visual (and height) conditions, this study shows a need to go beyond traditional linear measures when assessing balance. 
Nonlinear measures can enhance our understanding of postural stability and should be used in future analyses with the potential to identify changes that linear measures may not detect.\n\nReferences: [1] Carpenter et al., Exp Brain Res, 2001; [2] Cleworth & Carpenter, Neurosci Lett, 2016.\nAcknowledgements: Funded by VISTA and NSERC\n},\n\tannote = {ISPGR World Congress 2022\nJULY 3 -- 7, MONTREAL, CANADA\n\nP2-X-153\n\nhttps://ispgr.org/wp-content/uploads/2022/06/ISPGR_Abstracts_June21.pdf},\n\tauthor = {Sara Weinberg and Stephen Palmisano and Robert S. Allison and Taylor Cleworth},\n\tbooktitle = {International Society of Posture and Gait Research (ISPGR) World Congress 2022},\n\tdate-added = {2022-07-04 07:35:24 -0400},\n\tdate-modified = {2022-07-04 07:35:24 -0400},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tpages = {381-382},\n\ttitle = {Nonlinear analysis of the effects of vision and postural threat on upright stance},\n\tyear = {2022}}\n\n
Background and Aim: The ability to control and maintain an upright standing posture is crucial for humans interacting with their environment. Many factors, such as a fear of falling as observed when exposed to a postural threat [1], can cause changes in postural stability. The ability to quantify changes in postural stability is critical to understand psychological (and physiological) effects on balance. Therefore, the goal of the study is to use linear and nonlinear analyses to identify the effects of vision and postural threat on upright stance.

Methods: This study involves re-examining the dataset previously reported [2]. This secondary analysis was conducted as the initial analysis did not examine the sway temporal dynamics. Twenty young healthy adults stood on a force plate mounted to a hydraulic lift at two height conditions, 0.8 m (LOW) and 3.2 m (HIGH). Both height conditions were performed with eyes open (EO) and closed (EC). Participants stood quietly for 60 seconds on a force plate, and centre of pressure (COP) was calculated from ground reaction forces and moments. For the linear analyses, anterior-posterior COP root mean square (RMS) and mean power frequency (MPF) were calculated. For the nonlinear analysis, recurrence plots were generated from the COP data. These plots provided a visualization of the timepoints at which the trajectory returns to a location it has visited before. A recurrence quantification analysis (RQA) was then used to quantify the number and duration of recurrences. RQA measures included recurrence rate, determinism, entropy, and average diagonal line length.

Results: For the linear analyses, COP RMS showed no effect of vision or of a vision-height interaction; however, a main effect of height was observed, with sway amplitude decreasing in the HIGH compared to LOW condition. For COP MPF, main effects were found for both height and vision, with frequency increasing in the HIGH compared to LOW condition, as well as increasing in EC compared to EO. For the nonlinear analysis, there was a main effect of both vision and height, with all RQA measures decreasing in the HIGH compared to LOW condition, and decreasing in EC compared to EO.

Conclusions: Both linear and nonlinear analyses revealed differences across height and visual conditions. When standing at height, a decrease in amplitude and an increase in frequency were observed, thought to resemble a stiffening strategy [1]. The decreases in RQA measures across height and visual conditions may provide additional evidence for a change in postural strategy, suggesting participants deliberately tried to minimize their sway magnitude, producing higher-frequency and less predictable sway patterns. Given that the nonlinear analysis identifies changes across visual (and height) conditions, this study shows a need to go beyond traditional linear measures when assessing balance. Nonlinear measures can enhance our understanding of postural stability and should be used in future analyses with the potential to identify changes that linear measures may not detect.

References: [1] Carpenter et al., Exp Brain Res, 2001; [2] Cleworth & Carpenter, Neurosci Lett, 2016. Acknowledgements: Funded by VISTA and NSERC.
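A compact sketch of the recurrence quantification described in the Methods (the radius and minimum line length are illustrative choices; the abstract reports neither them nor any embedding):

import numpy as np

def recurrence_measures(x, radius=None, min_line=2):
    # Recurrence rate and determinism for a 1-D centre-of-pressure series.
    d = np.abs(x[:, None] - x[None, :])       # pairwise distance matrix
    if radius is None:
        radius = 0.1 * np.std(x)              # rule-of-thumb threshold
    rp = d <= radius                          # boolean recurrence plot
    rec_rate = rp.mean()                      # recurrence rate
    det_points = rec_points = 0               # determinism bookkeeping
    for k in range(1, len(x)):                # upper diagonals (LOI excluded)
        run = 0
        for v in list(np.diagonal(rp, offset=k)) + [False]:   # sentinel flush
            if v:
                run += 1
            else:
                if run >= min_line:
                    det_points += run         # points on diagonal lines
                rec_points += run
                run = 0
    det = det_points / rec_points if rec_points else 0.0
    return rec_rate, det

cop = np.cumsum(np.random.randn(600)) * 0.01  # surrogate 60 s COP record
rr, det = recurrence_measures(cop)
print(f"recurrence rate {rr:.3f}, determinism {det:.3f}")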
Modeling the impacts of inter-display and inter-lens separation on perceived slant in Virtual Reality Head-mounted displays. Tong, J., Wilcox, L., & Allison, R. In MODVIS Workshop, May 2022.
@incollection{tong_modeling_2022,\n\tabstract = {Projective geometry predicts that a mismatch between user interpupillary-distance (IPD) and\nthe inter-axial separation of stereo cameras used to render imagery in VR will result in\ndistortions of perceived scale. A potentially important, but often overlooked, consequence of a\nmismatch between user IPD and VR lens separation is the impact on binocular convergence.\nHere we describe a geometric model that incorporates shifts in binocular convergence due to\nthe prismatic effect of decentered lenses, as well as the offset of dual displays relative to the\neyes, and predicts biases in perceived slant. The model predicts that when the inter-lens and\ninter-display separation is less than an observer's IPD, perceived slant will be biased towards\nfrontoparallel. Conversely when the inter-lens and inter-display separation is greater than an\nobserver's IPD, perceived slant will be increased. These predictions were tested and\nconfirmed in a VR headset with adjustable inter-lens and display separation (both coupled). In\nthe experiment, observers completed a fold adjustment task in which they adjusted the angle\nbetween two intersecting, textured surfaces until they appeared to be perpendicular to one\nanother. The task was performed at three randomly interleaved viewing distances,\nmonocularly and binocularly. In separate blocks, the inter-lens and display separation was\neither matched to the observer's IPD (baseline condition) or set to the minimum or maximum\nallowed by the headset (IPD-mismatch conditions). When the inter-lens and display\nseparation was less than the observers' IPD they underestimated surface slant relative to\nbaseline, and the reverse pattern was seen when the inter-lens and display separation was\ngreater than their IPD. Overall, the geometric model tended to overestimate the effect of IPD-\nmismatch on perceived slant, especially at the farther viewing distances. We extended the\nmodel to incorporate the relative weighting of monocular and binocular cues, resulting in an\noverall improvement in the model fits. Our model provides researchers and VR-systems-\ndesigners a means of predicting depth perception when the optics of head-mounted displays\nmay not be aligned with users' eyes.},\n\tauthor = {Tong, Jonathan and Wilcox, Laurie and Allison, Robert},\n\tbooktitle = {{MODVIS} {Workshop}},\n\tdate-added = {2022-05-12 13:58:12 -0400},\n\tdate-modified = {2022-07-04 07:34:41 -0400},\n\tfile = {Purdue e-Pubs - MODVIS Workshop\\: Modeling the impacts of inter-display and inter-lens separation on perceived slant in Virtual Reality Head-mounted displays:/Users/robertallison/Zotero/storage/K8A2H8Q4/2.html:text/html},\n\tkeywords = {Augmented & Virtual Reality},\n\tmonth = 05,\n\ttitle = {Modeling the impacts of inter-display and inter-lens separation on perceived slant in {Virtual} {Reality} {Head}-mounted displays},\n\turl = {https://docs.lib.purdue.edu/modvis/2022/session02/2},\n\tyear = {2022},\n\turl-1 = {https://docs.lib.purdue.edu/modvis/2022/session02/2}}\n\n
\n
\n\n\n
\n Projective geometry predicts that a mismatch between user interpupillary-distance (IPD) and the inter-axial separation of stereo cameras used to render imagery in VR will result in distortions of perceived scale. A potentially important, but often overlooked, consequence of a mismatch between user IPD and VR lens separation is the impact on binocular convergence. Here we describe a geometric model that incorporates shifts in binocular convergence due to the prismatic effect of decentered lenses, as well as the offset of dual displays relative to the eyes, and predicts biases in perceived slant. The model predicts that when the inter-lens and inter-display separation is less than an observer's IPD, perceived slant will be biased towards frontoparallel. Conversely, when the inter-lens and inter-display separation is greater than an observer's IPD, perceived slant will be increased. These predictions were tested and confirmed in a VR headset with adjustable inter-lens and display separation (both coupled). In the experiment, observers completed a fold adjustment task in which they adjusted the angle between two intersecting, textured surfaces until they appeared to be perpendicular to one another. The task was performed at three randomly interleaved viewing distances, monocularly and binocularly. In separate blocks, the inter-lens and display separation was either matched to the observer's IPD (baseline condition) or set to the minimum or maximum allowed by the headset (IPD-mismatch conditions). When the inter-lens and display separation was less than the observers' IPD they underestimated surface slant relative to baseline, and the reverse pattern was seen when the inter-lens and display separation was greater than their IPD. Overall, the geometric model tended to overestimate the effect of IPD-mismatch on perceived slant, especially at the farther viewing distances. We extended the model to incorporate the relative weighting of monocular and binocular cues, resulting in an overall improvement in the model fits. Our model provides researchers and VR-systems-designers a means of predicting depth perception when the optics of head-mounted displays may not be aligned with users' eyes.\n
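The vergence shift at the heart of this account can be sketched numerically. The toy model below is not the authors' full model: it uses Prentice's rule for the prismatic deviation of a decentered lens and a relief-scaling approximation for disparity-defined slant. The lens power and the sign convention (narrower lens separation taken to increase vergence, matching the direction of bias reported above) are assumptions for illustration; working out the true sign requires tracing both the prism base direction and the display offset, as the paper's model does.

import numpy as np

def prentice_deviation_rad(decentration_m, lens_power_D):
    # Prentice's rule: prism (in prism dioptres) = decentration (cm) x power (D).
    # One prism dioptre deviates a ray by 1 cm at 1 m.
    prism = decentration_m * 100.0 * lens_power_D
    return np.arctan(prism / 100.0)

def vergence_distance_m(ipd_m, lens_sep_m, fixation_m, lens_power_D=25.0):
    # Each eye is decentered from its lens axis by half the IPD/lens mismatch.
    delta = prentice_deviation_rad((ipd_m - lens_sep_m) / 2.0, lens_power_D)
    # Shift the geometric half-vergence angle by the prismatic deviation and
    # read off the distance that the altered vergence now specifies.
    half_angle = np.arctan((ipd_m / 2.0) / fixation_m) + delta
    return (ipd_m / 2.0) / np.tan(half_angle)

def predicted_slant_deg(true_slant_deg, ipd_m, lens_sep_m, fixation_m):
    # Relief scaling: stereo depth scales with distance squared while width
    # scales linearly, so tan(perceived slant) ~ (d'/d) * tan(true slant).
    d_prime = vergence_distance_m(ipd_m, lens_sep_m, fixation_m)
    return np.degrees(np.arctan((d_prime / fixation_m) *
                                np.tan(np.radians(true_slant_deg))))

# e.g. predicted_slant_deg(45, ipd_m=0.064, lens_sep_m=0.058, fixation_m=1.0)
# yields a slant flatter than 45 deg, the direction of bias reported above.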
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n inproceedings\n \n \n (1)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n 75-2: The Effect of Chromatic Aberration Correction on Visually Lossless Compression.\n \n \n \n\n\n \n Mohona, S. S., Au, D., Wilcox, L. M, & Allison, R. S\n\n\n \n\n\n\n In SID Symposium Digest of Technical Papers, volume 53, pages 1013–1016, 2022. \n \n\n\n\n
\n
@inproceedings{mohona202275,\n\tauthor = {Mohona, Sanjida Sharmin and Au, Domenic and Wilcox, Laurie M and Allison, Robert S},\n\tbooktitle = {SID Symposium Digest of Technical Papers},\n\tdate-added = {2022-07-06 11:47:53 -0400},\n\tdate-modified = {2022-07-06 11:47:53 -0400},\n\tkeywords = {Image Quality},\n\tnumber = {1},\n\tpages = {1013--1016},\n\ttitle = {75-2: The Effect of Chromatic Aberration Correction on Visually Lossless Compression},\n\tvolume = {53},\n\tyear = {2022}}\n\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n\n\n\n
\n
\n\n
\n
\n  \n 2021\n \n \n (5)\n \n \n
\n
\n \n \n
\n
\n  \n article\n \n \n (5)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Sensors for Fire and Smoke Monitoring.\n \n \n \n \n\n\n \n Allison, R. S., Johnston, J. M., & Wooster, M.\n\n\n \n\n\n\n Sensors, 21(16): 5402.1-5402.3. 2021.\n \n\n\n\n
\n\n\n\n \n \n \"Sensors-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@article{Allison:2021vq,\n\tauthor = {Allison, R. S. and Johnston, J. M. and Wooster, M.},\n\tdate-added = {2021-08-04 22:03:17 -0400},\n\tdate-modified = {2021-09-07 10:36:18 -0400},\n\tdoi = {10.3390/s21165402},\n\tjournal = {Sensors},\n\tkeywords = {Misc.},\n\tnumber = {16},\n\tpages = {5402.1-5402.3},\n\ttitle = {Sensors for Fire and Smoke Monitoring},\n\tvolume = {21},\n\tyear = {2021},\n\turl-1 = {https://doi.org/10.3390/s21165402}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Subjective Assessment of Display Stream Compression for Stereoscopic Imagery.\n \n \n \n \n\n\n \n Mohona, S., Wilcox, L. M., & Allison, R. S.\n\n\n \n\n\n\n Journal of the Society for Information Display, 29(8): 591-607. 2021.\n \n\n\n\n
\n\n\n\n \n \n \"Subjective-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@article{Mohona:aa,\n\tabstract = {High-resolution display bandwidth requirements often now exceed the capacity of display link channels necessitating compression. The goal of visually lossless compression codecs such as VESA DSC 1.2 is that viewers perceive no difference between the compressed and uncompressed images, maintaining long-standing expectations of a lossless display link. Such low impairment performance is difficult to validate as artifacts are at or below sensory threshold. We have developed a 3D version of the ISO/IEC 29170-2 flicker paradigm and used it to compare the effects of image compression in flat images presented in the plane of the screen (2D) to compression in flat images with a disparity offset from the screen (3D). We hypothesized that differences in the location and size of the compression errors between the disparate images in the 3D case would affect their visibility. The results showed that artifacts were often less visible in 3D compared to 2D viewing. These findings have practical applications with respect to codec performance targets and algorithm development for 3D movie, animation, and virtual reality content. In particular, higher compression should be attainable in stereoscopic compared to equivalent 2D images because of increased tolerance to artifacts that are binocularly unmatched or have disparity relative to the screen.},\n\tauthor = {Mohona, S.S. and Wilcox, L. M. and Allison, R. S.},\n\tdate-added = {2021-02-18 14:16:38 -0500},\n\tdate-modified = {2021-08-02 16:48:34 -0400},\n\tdoi = {10.1002/jsid.1002},\n\tjournal = {Journal of the Society for Information Display},\n\tkeywords = {Stereopsis},\n\tnumber = {8},\n\tpages = {591-607},\n\ttitle = {Subjective Assessment of Display Stream Compression for Stereoscopic Imagery},\n\tvolume = {29},\n\tyear = {2021},\n\turl-1 = {https://doi.org/10.1002/jsid.1002}}\n\n
\n
\n\n\n
\n High-resolution display bandwidth requirements often now exceed the capacity of display link channels, necessitating compression. The goal of visually lossless compression codecs such as VESA DSC 1.2 is that viewers perceive no difference between the compressed and uncompressed images, maintaining long-standing expectations of a lossless display link. Such low-impairment performance is difficult to validate as artifacts are at or below sensory threshold. We have developed a 3D version of the ISO/IEC 29170-2 flicker paradigm and used it to compare the effects of image compression in flat images presented in the plane of the screen (2D) to compression in flat images with a disparity offset from the screen (3D). We hypothesized that differences in the location and size of the compression errors between the disparate images in the 3D case would affect their visibility. The results showed that artifacts were often less visible in 3D compared to 2D viewing. These findings have practical applications with respect to codec performance targets and algorithm development for 3D movie, animation, and virtual reality content. In particular, higher compression should be attainable in stereoscopic compared to equivalent 2D images because of increased tolerance to artifacts that are binocularly unmatched or have disparity relative to the screen.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Stereoscopic depth constancy from a different direction.\n \n \n \n \n\n\n \n Allison, R. S., & Wilcox, L. M.\n\n\n \n\n\n\n Vision Research, 178: 70-78. 2021.\n \n\n\n\n
\n\n\n\n \n \n \"Stereoscopic-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@article{Allison:jh,\n\tauthor = {Allison, R. S. and Wilcox, L. M.},\n\tdate-added = {2020-10-14 15:39:01 -0400},\n\tdate-modified = {2020-11-05 13:21:05 -0500},\n\tdoi = {10.1016/j.visres.2020.10.003},\n\tjournal = {Vision Research},\n\tkeywords = {Stereopsis},\n\tpages = {70-78},\n\ttitle = {Stereoscopic depth constancy from a different direction},\n\tvolume = {178},\n\tyear = {2021},\n\turl-1 = {https://doi.org/10.1016/j.visres.2020.10.003}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Effects of motion picture frame rate on material and texture appearance.\n \n \n \n \n\n\n \n Allison, R., Fujii, Y., & Wilcox, L. M.\n\n\n \n\n\n\n IEEE Transactions on Broadcasting, 67(2): 360-371. 2021.\n \n\n\n\n
\n\n\n\n \n \n \"Effects-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@article{Allison:2020rc,\n\tauthor = {Allison, R.S. and Fujii, Y. and Wilcox, L. M.},\n\tdate-added = {2020-09-14 16:17:34 -0400},\n\tdate-modified = {2021-06-04 17:30:49 -0400},\n\tdoi = {10.1109/TBC.2020.3028276},\n\tjournal = {IEEE Transactions on Broadcasting},\n\tkeywords = {Image Quality},\n\tnumber = {2},\n\tpages = {360-371},\n\ttitle = {Effects of motion picture frame rate on material and texture appearance},\n\tvolume = {67},\n\tyear = {2021},\n\turl-1 = {https://doi.org/10.1109/TBC.2020.3028276}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n The Role of Binocular Vision in Avoiding Virtual Obstacles While Walking.\n \n \n \n \n\n\n \n Zhao, J., & Allison, R. S.\n\n\n \n\n\n\n IEEE Transactions on Visualization and Computer Graphics, 27(7): 3277-3288. 2021.\n \n\n\n\n
\n\n\n\n \n \n \"ThePaper\n  \n \n \n \"The-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 3 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@article{Zhao:fx,\n\tabstract = {Stereopsis has been shown to aid activities related to hand-eye coordination but it is less clear that stereopsis provides advantages in locomotion activities, such as walking and running, as steady viewing is needed to let stereopsis achieve maximum precision. While previous research has shown that stereopsis also helps people to make more accurate lower limb movements, these studies were conducted in setups with limited walking distances that did not represent typical walking scenarios in our everyday life --- we usually walk continuously over longer distances. Thus, it is still uncertain whether stereopsis helps people to make more accurate movements under constant motion during continuous walking. In the present study, we conducted two walking experiments in virtual environments using a linear treadmill and a novel projected display known as the Wide Immersive Stereo Environment (WISE) to study the role of stereopsis in continuous walking. The first experiment investigated the walking performance of people stepping over obstacles while the second experiment focused on a scenario on stepping over gaps. Both experiments were conducted under both stereoscopic viewing and non-stereoscopic viewing conditions. By analyzing the gait parameters, we found that stereopsis helped people to make more accurate movements to step over obstacles and gaps in continuous walking.},\n\tauthor = {Zhao, J. and Allison, R. S.},\n\tdate-added = {2020-01-24 21:51:34 -0500},\n\tdate-modified = {2023-10-27 11:07:34 -0400},\n\tdoi = {10.1109/TVCG.2020.2969181},\n\tjournal = {IEEE Transactions on Visualization and Computer Graphics},\n\tkeywords = {Stereopsis},\n\tnumber = {7},\n\tpages = {3277-3288},\n\ttitle = {The Role of Binocular Vision in Avoiding Virtual Obstacles While Walking},\n\turl = {https://percept.eecs.yorku.ca/papers/zhao tvgc 2020 preprint.pdf},\n\tvolume = {27},\n\tyear = {2021},\n\turl-1 = {https://doi.org/10.1109/TVCG.2020.2969181}}\n\n
\n
\n\n\n
\n Stereopsis has been shown to aid activities related to hand-eye coordination but it is less clear that stereopsis provides advantages in locomotion activities, such as walking and running, as steady viewing is needed to let stereopsis achieve maximum precision. While previous research has shown that stereopsis also helps people to make more accurate lower limb movements, these studies were conducted in setups with limited walking distances that did not represent typical walking scenarios in our everyday life — we usually walk continuously over longer distances. Thus, it is still uncertain whether stereopsis helps people to make more accurate movements under constant motion during continuous walking. In the present study, we conducted two walking experiments in virtual environments using a linear treadmill and a novel projected display known as the Wide Immersive Stereo Environment (WISE) to study the role of stereopsis in continuous walking. The first experiment investigated the walking performance of people stepping over obstacles, while the second experiment focused on a scenario involving stepping over gaps. Both experiments were conducted under both stereoscopic viewing and non-stereoscopic viewing conditions. By analyzing the gait parameters, we found that stereopsis helped people to make more accurate movements to step over obstacles and gaps in continuous walking.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n book\n \n \n (1)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n Sensors for Fire and Smoke Monitoring.\n \n \n \n\n\n \n Allison, R. S., Johnston, J. M., & Wooster, M.,\n editors.\n \n\n\n \n\n\n\n MDPI, Basel, Switzerland, 2021.\n \n\n\n\n
\n
@book{allisonbook:2021la,\n\taddress = {Basel, Switzerland},\n\tdate-added = {2021-09-17 10:52:42 -0400},\n\tdate-modified = {2022-07-04 07:34:53 -0400},\n\teditor = {Allison, R. S. and Johnston, J. M. and Wooster, M.},\n\tkeywords = {Misc.},\n\tpublisher = {MDPI},\n\ttitle = {Sensors for Fire and Smoke Monitoring},\n\tyear = {2021}}\n\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n incollection\n \n \n (7)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n The Perception of Self-Motion in Microgravity.\n \n \n \n \n\n\n \n Harris, L. R., Jorges, B, Bury, N., McManus, M, Allison, R. S., & Jenkin, M\n\n\n \n\n\n\n In IAA Humans in Space Conference. Moscow, Russia, 04 2021.\n \n\n\n\n
\n\n\n\n \n \n \"ThePaper\n  \n \n \n \"The-1\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Harris:2021bh,\n\tabstract = {Moving around in a zero-gravity environment is very different from moving on Earth. The vestibular system in 0g registers only the accelerations associated with movement and no longer has to distinguish them from the acceleration of gravity. How does this affect an astronaut's perception of space and movement?  Here we explore how the perception of self-motion and distance changes during and following long-duration exposure to 0g. Our hypothesis was that absence of gravity cues should lead participants to rely more strongly on visual information in 0g compared to on Earth. We tested a cohort of ISS astronauts five times: before flight, twice during flight (within 6 days of arrival in space and after 3 months in 0g) and twice after flight (within 6 days of re-entry and 2 months after returning). Data collection is on-going, but we have currently tested 8 out of 10 participants. Using Virtual Reality, astronauts performed two tasks. Task 1, the perception of self-motion task, measures how much visual motion is required to create the sensation of moving through a particular distance. Astronauts viewed a target at one of several distances in front of them in a virtual corridor. The target then disappeared, and they experienced visually simulated self-motion along the corridor and pressed a button to indicate when they had reached the position of the remembered target. Task 2 was the perception of distance task. We presented a virtual cube in the same corridor and asked the astronauts to judge whether the cube's sides were longer or shorter than a reference length they held in their hands. We inferred the distance at which they perceived the target from the size that they chose to match the reference length. Preliminary analysis of the results with Linear Mixed-Effects Modelling suggests that participants did not experience any differences in perceived self-motion on first arriving in space (p = 0.783). After being in space for three months, however, they needed significantly more visual motion (7.5\\%) to create the impression they had passed through the target distance (p < 0.001), indicating that visual motion (optic flow) elicited a weaker sense of self-motion than before adapting to the space environment. Astronauts also made size matches that were consistent with underestimating perceived distance in space (on arrival: 26.6\\% closer, p < 0.001; after 3 months: 26.3\\% closer, p < 0.001) compared to the pre-test on Earth. Our results indicate that prolonged exposure to 0g tends to decrease the effective use of visual information for the perception of travelled distance. This effect cannot be explained in terms of biased distance perception. Knowing that astronauts are likely to misperceive their self-motion and the scale their environment is critical information for the design of safe operations in space and for readjustment to other gravity levels found on the Moon and Mars.\n \nWe acknowledge the generous support of the Canadian Space Agency (15ILSRA1-York).},\n\taddress = {Moscow, Russia},\n\tannote = {HIS\n23rd IAA Humans in Space05-08 April 2021},\n\tauthor = {Harris, L. R. and Jorges, B and Bury, N. and McManus, M and Allison, R. S. 
and Jenkin, M},\n\tbooktitle = {IAA Humans in Space Conference},\n\tdate-added = {2023-03-21 17:32:59 -0400},\n\tdate-modified = {2023-03-21 17:32:59 -0400},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tmonth = {04},\n\ttitle = {The Perception of Self-Motion in Microgravity},\n\turl = {https://iaaspace.org/event/23rd-iaa-humans-in-space-symposium-2021/},\n\tyear = {2021},\n\turl-1 = {https://iaaspace.org/event/23rd-iaa-humans-in-space-symposium-2021/}}\n\n
\n
\n\n\n
\n Moving around in a zero-gravity environment is very different from moving on Earth. The vestibular system in 0g registers only the accelerations associated with movement and no longer has to distinguish them from the acceleration of gravity. How does this affect an astronaut's perception of space and movement? Here we explore how the perception of self-motion and distance changes during and following long-duration exposure to 0g. Our hypothesis was that the absence of gravity cues should lead participants to rely more strongly on visual information in 0g compared to on Earth. We tested a cohort of ISS astronauts five times: before flight, twice during flight (within 6 days of arrival in space and after 3 months in 0g) and twice after flight (within 6 days of re-entry and 2 months after returning). Data collection is ongoing, but we have currently tested 8 out of 10 participants. Using Virtual Reality, astronauts performed two tasks. Task 1, the perception of self-motion task, measures how much visual motion is required to create the sensation of moving through a particular distance. Astronauts viewed a target at one of several distances in front of them in a virtual corridor. The target then disappeared, and they experienced visually simulated self-motion along the corridor and pressed a button to indicate when they had reached the position of the remembered target. Task 2 was the perception of distance task. We presented a virtual cube in the same corridor and asked the astronauts to judge whether the cube's sides were longer or shorter than a reference length they held in their hands. We inferred the distance at which they perceived the target from the size that they chose to match the reference length. Preliminary analysis of the results with Linear Mixed-Effects Modelling suggests that participants did not experience any differences in perceived self-motion on first arriving in space (p = 0.783). After being in space for three months, however, they needed significantly more visual motion (7.5%) to create the impression they had passed through the target distance (p < 0.001), indicating that visual motion (optic flow) elicited a weaker sense of self-motion than before adapting to the space environment. Astronauts also made size matches that were consistent with underestimating perceived distance in space (on arrival: 26.6% closer, p < 0.001; after 3 months: 26.3% closer, p < 0.001) compared to the pre-test on Earth. Our results indicate that prolonged exposure to 0g tends to decrease the effective use of visual information for the perception of travelled distance. This effect cannot be explained in terms of biased distance perception. Knowing that astronauts are likely to misperceive their self-motion and the scale of their environment is critical information for the design of safe operations in space and for readjustment to other gravity levels found on the Moon and Mars. We acknowledge the generous support of the Canadian Space Agency (15ILSRA1-York).\n
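As a rough illustration of the analysis named above (Linear Mixed-Effects Modelling of repeated measures across sessions), a sketch using statsmodels follows. The data file and the column names (subject, session, gain) are hypothetical, not the study's actual data layout.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one visual-gain estimate per astronaut per
# session (pre-flight, early in-flight, late in-flight, post-flight, ...).
df = pd.read_csv("self_motion_gains.csv")  # columns: subject, session, gain

# A random intercept per astronaut captures stable individual differences;
# the fixed effect of session tests whether more optic flow is needed after
# adaptation to 0g (the 7.5% effect reported above).
model = smf.mixedlm("gain ~ C(session)", data=df, groups=df["subject"])
result = model.fit()
print(result.summary())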
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Effects of Chromatic Aberration Compensation on Visibility of Compression Artifacts.\n \n \n \n \n\n\n \n Mohona, S. S., Au, D., Hou, Y., Kio, O. G., Goel, J., Jacobson, N., Allison, R. S., & Wilcox, L. M.\n\n\n \n\n\n\n In CVR/VISTA Virtual Vision Futures Conference, pages 48. Toronto, Canada, 06 2021.\n \n\n\n\n
\n\n\n\n \n \n \"EffectsPaper\n  \n \n \n \"Effects-1\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Mohona:tb,\n\tabstract = {In virtual and augmented reality displays, lenses focus the near-eye display at a far optical distance \nand  produce  a  large  field  of  view  to  immerse  the  user.  These  lenses  typically  exhibit  considerable \ndistortion  and  cause  chromatic  aberration.  These  are  not  apparent  to  the  user  because  they  are \ntypically  corrected  by  pre-processing  the  image  with  the  opposite  distortion  before  sending  it  to  the \ndisplay.  Such  pre-processing  involves  pre-warping  source  images  with  inverse  pin-cushion  (barrel) \ndistortion  to  correct  for  the pin-cushion transform from  the  display optics  with different  correction  for \neach  colour  channel.  Most  image  compression  algorithms  use  a  colour  space  conversion  before \ncompression which normally improves compression performance by reducing the degree of correlation \nbetween  components.  However,  as  lens  pre-distortion  processing  is  colour  specific  the  spatial \ncorrelation between colour channels is disrupted by this processing; objective analyses suggest that \nthe colour space conversion may not be beneficial under these conditions. Here we used the ISO/IEC \n29170-2 flicker protocol that has been adapted for 3D imagery, to evaluate the sensitivity of two state-\nof-the-art display stream compression algorithms to characteristic distortions resulting from \nstereoscopic head-mounted display pre-processing which either included normal colour \ntransformations or bypassed them. A set of 10 computer-generated stereoscopic high dynamic range \nimages  were  tested.  Images  spanned  a  wide  range  of  content  and  were  designed  to  challenge  the \ncodecs. The pre-processing workflow involved pre-warping the images, compressing with each codec, \nand  finally  de-warping  with  pin-cushion  distortion.  De-warping  was  applied  to  simulate  the  distortion \nfrom magnifying lenses as all images were viewed on a mirror stereoscope without such lenses. The \nmain  image  manipulations  were  the  codec  used,  the  compression  levels  and  whether  the  colour \ntransform  was  bypassed  (bypass-on)  or  not  (bypass-off). Images were compressed at the codec's \nrespective  nominal  production  level  and  at  each  image's  estimated  limit  of  visually lossless \ncompression.  60  observers  were  tested  in  3  groups  of  10  for  both  codecs.  Overall,  we  found  little \nsensitivity to these distortions and our results confirmed that bypassing colour transforms in the codec \ncan be significantly beneficial for some images.},\n\taddress = {Toronto, Canada},\n\tannote = {June 14 -- 17, 2021},\n\tauthor = {Sanjida Sharmin Mohona and Domenic Au and Yuqian Hou and Onoise Gerald Kio and James Goel and Natan Jacobson and Robert S. Allison and Laurie M. Wilcox},\n\tbooktitle = {CVR/VISTA Virtual Vision Futures Conference},\n\tdate-added = {2021-09-06 09:34:40 -0400},\n\tdate-modified = {2021-09-07 10:37:07 -0400},\n\tkeywords = {Image Quality},\n\tmonth = {06},\n\tpages = {48},\n\ttitle = {Effects of Chromatic Aberration Compensation on Visibility of Compression Artifacts},\n\turl = {https://www.yorku.ca/cvr/wp-content/uploads/sites/90/2021/06/VVF-program-updated.pdf},\n\tyear = {2021},\n\turl-1 = {https://www.yorku.ca/cvr/wp-content/uploads/sites/90/2021/06/VVF-program-updated.pdf}}\n\n
\n
\n\n\n
\n In virtual and augmented reality displays, lenses focus the near-eye display at a far optical distance and produce a large field of view to immerse the user. These lenses typically exhibit considerable distortion and cause chromatic aberration. These are not apparent to the user because they are typically corrected by pre-processing the image with the opposite distortion before sending it to the display. Such pre-processing involves pre-warping source images with inverse pin-cushion (barrel) distortion to correct for the pin-cushion transform from the display optics, with different correction for each colour channel. Most image compression algorithms use a colour space conversion before compression, which normally improves compression performance by reducing the degree of correlation between components. However, as lens pre-distortion processing is colour specific, the spatial correlation between colour channels is disrupted by this processing; objective analyses suggest that the colour space conversion may not be beneficial under these conditions. Here we used the ISO/IEC 29170-2 flicker protocol, adapted for 3D imagery, to evaluate the sensitivity of two state-of-the-art display stream compression algorithms to characteristic distortions resulting from stereoscopic head-mounted display pre-processing, which either included normal colour transformations or bypassed them. A set of 10 computer-generated stereoscopic high dynamic range images were tested. Images spanned a wide range of content and were designed to challenge the codecs. The pre-processing workflow involved pre-warping the images, compressing with each codec, and finally de-warping with pin-cushion distortion. De-warping was applied to simulate the distortion from magnifying lenses, as all images were viewed on a mirror stereoscope without such lenses. The main image manipulations were the codec used, the compression levels and whether the colour transform was bypassed (bypass-on) or not (bypass-off). Images were compressed at the codec's respective nominal production level and at each image's estimated limit of visually lossless compression. Sixty observers were tested in 3 groups of 10 for both codecs. Overall, we found little sensitivity to these distortions and our results confirmed that bypassing colour transforms in the codec can be significantly beneficial for some images.\n
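The per-channel pre-warping step described above can be sketched as follows. The simple radial distortion model and the per-channel coefficients are illustrative stand-ins; real headsets use calibrated lens profiles rather than a single k1 term.

import numpy as np
from scipy.ndimage import map_coordinates

def prewarp_channel(ch, k1):
    # Sample the source at radially expanded coordinates (a barrel pre-warp)
    # so that the pincushion distortion of the headset optics cancels it.
    h, w = ch.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    xn, yn = (xx - cx) / cx, (yy - cy) / cy      # normalised [-1, 1] coords
    r2 = xn * xn + yn * yn
    xs = xn * (1.0 + k1 * r2) * cx + cx
    ys = yn * (1.0 + k1 * r2) * cy + cy
    return map_coordinates(ch, [ys, xs], order=1, mode="nearest")

def prewarp_rgb(img, k=(0.20, 0.22, 0.24)):
    # Lateral chromatic aberration varies with wavelength, so each colour
    # channel gets its own coefficient (these values are illustrative only).
    return np.dstack([prewarp_channel(img[..., c], kc)
                      for c, kc in enumerate(k)])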
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Body posture affects the perception of visually simulated self-motion.\n \n \n \n \n\n\n \n Jorges, B, Bury, N., McManus, M, Allison, R. S., Jenkin, M, & Harris, L. R.\n\n\n \n\n\n\n In Journal of Vision (Vision Sciences Society Abstracts), volume 21, pages 2301. 2021.\n \n\n\n\n
\n\n\n\n \n \n \"Body-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Jorges:mz,\n\tabstract = {Perceiving one's self-motion is a multisensory process involving integrating visual, vestibular and other cues. The perception of self-motion can be elicited by visual cues alone (vection) in a stationary observer. In this case, optic flow information compatible with self-motion may be affected by conflicting vestibular cues signaling that the body is not accelerating. Since vestibular cues are less reliable when lying down (Fernandez \\& Goldberg, 1976), conflicting vestibular cues might bias the self-motion percept less when lying down than when upright. To test this hypothesis, we immersed 20 participants in a virtual reality hallway environment and presented targets at different distances ahead of them. The targets then disappeared, and participants experienced optic flow simulating constant-acceleration, straight-ahead self-motion. They indicated by a button press when they felt they had reached the position of the previously-viewed target. Participants also performed a task that assessed biases in distance perception. We showed them virtual boxes at different simulated distances. On each trial, they judged if the height of the box was bigger or smaller than a reference ruler held in their hands. Perceived distance can be inferred from biases in perceived size. They performed both tasks sitting upright and lying supine. Participants needed less optic flow (perceived they had travelled further) to perceive they had reached the target's position when supine than when sitting (by 4.8\\%, bootstrapped 95\\% CI=[3.5\\%;6.4\\%], determined using Linear Mixed Modelling). Participants also judged objects as larger (compatible with closer) when upright than when supine (by 2.5\\%, 95\\% CI=[0.03\\%;4.6\\%], as above). The bias in traveled distance thus cannot be reduced to a bias in perceived distance. These results suggest that vestibular cues impact self-motion distance perception, as they do heading judgements (MacNeilage, Banks, DeAngelis \\& Angelaki, 2010), even when the task could be solved with visual cues alone.},\n\tauthor = {Jorges, B and Bury, N. and McManus, M and Allison, R. S. and Jenkin, M and Harris, L. R.},\n\tbooktitle = {Journal of Vision (Vision Sciences Society Abstracts)},\n\tdate-added = {2021-09-06 09:10:13 -0400},\n\tdate-modified = {2021-09-11 22:21:25 -0400},\n\tdoi = {10.1167/jov.21.9.2301},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tpages = {2301},\n\ttitle = {Body posture affects the perception of visually simulated self-motion},\n\tvolume = {21},\n\tyear = {2021},\n\turl-1 = {https://doi.org/10.1167/jov.21.9.2301}}\n\n
\n
\n\n\n
\n Perceiving one's self-motion is a multisensory process involving integrating visual, vestibular and other cues. The perception of self-motion can be elicited by visual cues alone (vection) in a stationary observer. In this case, optic flow information compatible with self-motion may be affected by conflicting vestibular cues signaling that the body is not accelerating. Since vestibular cues are less reliable when lying down (Fernandez & Goldberg, 1976), conflicting vestibular cues might bias the self-motion percept less when lying down than when upright. To test this hypothesis, we immersed 20 participants in a virtual reality hallway environment and presented targets at different distances ahead of them. The targets then disappeared, and participants experienced optic flow simulating constant-acceleration, straight-ahead self-motion. They indicated by a button press when they felt they had reached the position of the previously-viewed target. Participants also performed a task that assessed biases in distance perception. We showed them virtual boxes at different simulated distances. On each trial, they judged if the height of the box was bigger or smaller than a reference ruler held in their hands. Perceived distance can be inferred from biases in perceived size. They performed both tasks sitting upright and lying supine. Participants needed less optic flow (perceived they had travelled further) to perceive they had reached the target's position when supine than when sitting (by 4.8%, bootstrapped 95% CI=[3.5%;6.4%], determined using Linear Mixed Modelling). Participants also judged objects as larger (compatible with closer) when upright than when supine (by 2.5%, 95% CI=[0.03%;4.6%], as above). The bias in traveled distance thus cannot be reduced to a bias in perceived distance. These results suggest that vestibular cues impact self-motion distance perception, as they do heading judgements (MacNeilage, Banks, DeAngelis & Angelaki, 2010), even when the task could be solved with visual cues alone.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Interpretation of Depth from Scaled Motion Parallax in Virtual Reality.\n \n \n \n \n\n\n \n Teng, X., Wilcox, L. M., & Allison, R. S.\n\n\n \n\n\n\n In Journal of Vision (Vision Sciences Society Abstracts), volume 21, pages 2035. 2021.\n \n\n\n\n
\n\n\n\n \n \n \"Interpretation-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Teng:2021ty,\n\tabstract = {Humans use visual, vestibular, kinesthetic and other cues to effectively navigate through the world. Therefore, conflict between these sources of information has potentially significant implications for human perception of geometric layout. Previous work has found that introducing gain differences between physical and virtual head movement had little effect on distance perception. However, motion parallax is known to be a potent cue to relative depth. In the present study, we explore the impact of conflict between physical and portrayed self-motion on perception of object shape. To do so we varied the gain between virtual and physical head motion (ranging from a factor of 0.5 to 2) and measured the effect on depth perception. Observers viewed a `fold' stimulus, a convex dihedral angle formed by two irregularly-textured, wall-oriented planes connected at a common vertical edge. Stimuli were rendered and presented using head mounted displays (Oculus Rift S or Quest in Rift S emulation mode). On each trial, observers adjusted the angle of the fold till the two joined planes appeared perpendicular. To assess the role of stereopsis we tested binocularly and monocularly. To introduced motion parallax, observers swayed laterally through a distance of 30 cm at 0.5 Hz timed to a metronome beat; this motion was multiplied by the gain to produce the virtual view-point. Our results showed that gain had little effect on depth perception in the binocular test conditions. Using a model incorporating self and object motion, we computed predicted perceived depths based on the adjusted angles and then compared these with each observer's input. The modelled outcomes were very consistent across visual manipulations, suggesting that observers have remarkably accurate perception of object motion under these conditions. Additional analyses predict corresponding variations in distance perception and we will test these hypotheses in future experiments.\n},\n\tauthor = {Teng, X. and Wilcox, L. M. and Allison, R. S.},\n\tbooktitle = {Journal of Vision (Vision Sciences Society Abstracts)},\n\tdate-added = {2021-09-06 09:10:13 -0400},\n\tdate-modified = {2021-09-06 09:10:13 -0400},\n\tdoi = {10.1167/jov.21.9.2035},\n\tkeywords = {Stereopsis},\n\tpages = {2035},\n\ttitle = {Interpretation of Depth from Scaled Motion Parallax in Virtual Reality},\n\tvolume = {21},\n\tyear = {2021},\n\turl-1 = {https://doi.org/10.1167/jov.21.9.2035}}\n\n
\n
\n\n\n
\n Humans use visual, vestibular, kinesthetic and other cues to effectively navigate through the world. Therefore, conflict between these sources of information has potentially significant implications for human perception of geometric layout. Previous work has found that introducing gain differences between physical and virtual head movement had little effect on distance perception. However, motion parallax is known to be a potent cue to relative depth. In the present study, we explore the impact of conflict between physical and portrayed self-motion on perception of object shape. To do so we varied the gain between virtual and physical head motion (ranging from a factor of 0.5 to 2) and measured the effect on depth perception. Observers viewed a 'fold' stimulus, a convex dihedral angle formed by two irregularly-textured, wall-oriented planes connected at a common vertical edge. Stimuli were rendered and presented using head mounted displays (Oculus Rift S or Quest in Rift S emulation mode). On each trial, observers adjusted the angle of the fold until the two joined planes appeared perpendicular. To assess the role of stereopsis we tested binocularly and monocularly. To introduce motion parallax, observers swayed laterally through a distance of 30 cm at 0.5 Hz, timed to a metronome beat; this motion was multiplied by the gain to produce the virtual viewpoint. Our results showed that gain had little effect on depth perception in the binocular test conditions. Using a model incorporating self and object motion, we computed predicted perceived depths based on the adjusted angles and then compared these with each observer's input. The modelled outcomes were very consistent across visual manipulations, suggesting that observers have remarkably accurate perception of object motion under these conditions. Additional analyses predict corresponding variations in distance perception and we will test these hypotheses in future experiments.\n
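A first-order sketch of why a head-motion gain should, in principle, scale parallax-specified depth: for small angles, relative parallax is proportional to the camera translation, so rendering with gain g multiplies the parallax that a physically moving observer must interpret. This is the parallax-only prediction; as the abstract reports, gain had little effect binocularly, so it should not be read as the paper's full model (which also incorporates perceived object motion). Function names and the small-angle model are illustrative.

def parallax_rad(translation_m, depth_m, distance_m):
    # Small-angle relative motion parallax of a point depth_m behind fixation
    # at viewing distance distance_m, for a lateral translation translation_m.
    return translation_m * depth_m / distance_m ** 2

def perceived_depth_m(physical_sway_m, gain, rendered_depth_m, distance_m):
    # The renderer translates the virtual viewpoint by gain * physical sway,
    # inflating parallax; inverting the same geometry with the *physical*
    # translation predicts the depth an observer should see if all parallax
    # were attributed to their own movement.
    p = parallax_rad(gain * physical_sway_m, rendered_depth_m, distance_m)
    return p * distance_m ** 2 / physical_sway_m  # = gain * rendered_depth_m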
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n The perception of average slant is biased in concave surfaces.\n \n \n \n \n\n\n \n Tong, J., Allison, R. S., & Wilcox, L. M.\n\n\n \n\n\n\n In Journal of Vision (Vision Sciences Society Abstracts), volume 21, pages 2011. 2021.\n \n\n\n\n
\n\n\n\n \n \n \"The-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Tong:2021wf,\n\tabstract = {While much is known about our perception of surface slant for planar surfaces, less attention has been paid to our ability to estimate the average slant of curved surfaces. The average slant across a surface with symmetric curvature (a parabolic surface) and globally slanted about its axis of symmetry is equivalent to that of a planar surface slanted by the same degree. Therefore, if symmetrically curved surfaces are perceived accurately, observers' estimates of their average surface slant should be the same as for an equivalently slanted planar surface. Here we evaluated this prediction using a 2-alternative forced choice slant discrimination task. Observers (n=10) viewed a standard 15$\\,^{\\circ}$ (top-away) slanted planar surface and a comparison surface that varied in slant between 7.5$\\,^{\\circ}$ and 22.5$\\,^{\\circ}$; both were presented stereoscopically and textured with a Voronoi pattern. In separate conditions, the comparison surface was either planar, or parabolically curved (peak displacement = 0.15m) about its axis of rotation in a concave or convex direction. Observers consistently underestimated the average slant of the concave comparison surface relative to the planar surface. This bias is predicted by the effect of curvature modulating the degree of foreshortening in the perspective projection of a slanted surface. Perspective projection also predicts overestimation of average slant in convex surfaces, however we found no such bias. We propose that imprecision in the estimation of average slant in curved surfaces, relative to planar surfaces, makes them more susceptible to the commonly reported frontoparallel bias (slant underestimation). This bias may counteract the predicted overestimation of average slant in convex surfaces. Taken together, our modelling and psychophysical results indicate that curvature modulates the pattern of foreshortening of globally slanted surfaces, which biases the estimation of average slant. This, in turn, may lead to systematic errors in our interaction with curved surfaces.\n},\n\tauthor = {Tong, J. and Allison, R. S. and Wilcox, L. M.},\n\tbooktitle = {Journal of Vision (Vision Sciences Society Abstracts)},\n\tdate-added = {2021-09-06 09:10:13 -0400},\n\tdate-modified = {2021-09-07 10:36:38 -0400},\n\tdoi = {10.1167/jov.21.9.2011},\n\tkeywords = {Stereopsis},\n\tpages = {2011},\n\ttitle = {The perception of average slant is biased in concave surfaces},\n\tvolume = {21},\n\tyear = {2021},\n\turl-1 = {https://doi.org/10.1167/jov.21.9.2011}}\n\n
\n
\n\n\n
\n While much is known about our perception of surface slant for planar surfaces, less attention has been paid to our ability to estimate the average slant of curved surfaces. The average slant across a surface with symmetric curvature (a parabolic surface) and globally slanted about its axis of symmetry is equivalent to that of a planar surface slanted by the same degree. Therefore, if symmetrically curved surfaces are perceived accurately, observers' estimates of their average surface slant should be the same as for an equivalently slanted planar surface. Here we evaluated this prediction using a 2-alternative forced choice slant discrimination task. Observers (n=10) viewed a standard 15° (top-away) slanted planar surface and a comparison surface that varied in slant between 7.5° and 22.5°; both were presented stereoscopically and textured with a Voronoi pattern. In separate conditions, the comparison surface was either planar, or parabolically curved (peak displacement = 0.15 m) about its axis of rotation in a concave or convex direction. Observers consistently underestimated the average slant of the concave comparison surface relative to the planar surface. This bias is predicted by the effect of curvature modulating the degree of foreshortening in the perspective projection of a slanted surface. Perspective projection also predicts overestimation of average slant in convex surfaces; however, we found no such bias. We propose that imprecision in the estimation of average slant in curved surfaces, relative to planar surfaces, makes them more susceptible to the commonly reported frontoparallel bias (slant underestimation). This bias may counteract the predicted overestimation of average slant in convex surfaces. Taken together, our modelling and psychophysical results indicate that curvature modulates the pattern of foreshortening of globally slanted surfaces, which biases the estimation of average slant. This, in turn, may lead to systematic errors in our interaction with curved surfaces.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Differences in virtual and physical head orientation predict sickness during head-mounted display based virtual reality.\n \n \n \n \n\n\n \n Palmisano, S., Allison, R. S., & Kim, J.\n\n\n \n\n\n\n In Journal of Vision (Vision Sciences Society Abstracts), volume 21, pages 1966. 2021.\n \n\n\n\n
\n\n\n\n \n \n \"Differences-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Palmisano:2021sp,\n\tabstract = {When we rotate our heads during head-mounted display (HMD) based virtual reality (VR), our virtual head tends to trail its true orientation (due to display lag). However, the exact differences in our virtual and physical head pose (DVP) vary throughout the movement. We recently proposed that large amplitude, time varying patterns of DVP were the primary trigger for cybersickness in active HMD VR. This study tests the DVP hypothesis by measuring the sickness, and estimating the DVP, produced by head rotations under different levels of imposed display lag (from 0 to 200 ms). On each trial, users made continuous, oscillatory head movements in either yaw, pitch or roll while seated inside a large virtual room. After, we used the level of imposed display lag for the condition, and the user's own tracked head-motion data, to estimate their DVP time series data for each trial. Irrespective of the axis or the speed of the head movement, we found that DVP reliably predicted our participants experiences of cybersickness. Significant positive linear relationships were found between the severity of their sickness and the mean, peak and standard deviation of this DVP data. Thus, our DVP hypothesis appears to offer significant advantages over existing (general) theories of motion sickness in terms of understanding user experiences in HMD VR. Instead of merely speculating about the presence, or degree, of sensory conflict in a particular simulation, DVP can be used to estimate the conflict produced by the active HMD VR. Importantly, this DVP is an objective measure of the stimulation (not an internal model of the user's sensory processing). Compared to its many competitors, DVP also appears to provide a simpler operational definition of the provocative stimulation for cybersickness (since it is focussed only on movements of the head; not the body or limbs).},\n\tauthor = {Palmisano, S.A. and Allison, R. S. and Kim, J.},\n\tbooktitle = {Journal of Vision (Vision Sciences Society Abstracts)},\n\tdate-added = {2021-09-06 09:10:13 -0400},\n\tdate-modified = {2021-09-06 09:11:08 -0400},\n\tdoi = {10.1167/jov.21.9.1966},\n\tkeywords = {Augmented & Virtual Reality},\n\tpages = {1966},\n\ttitle = {Differences in virtual and physical head orientation predict sickness during head-mounted display based virtual reality},\n\tvolume = {21},\n\tyear = {2021},\n\turl-1 = {https://doi.org/10.1167/jov.21.9.1966}}\n\n
\n
\n\n\n
\n When we rotate our heads during head-mounted display (HMD) based virtual reality (VR), our virtual head tends to trail its true orientation (due to display lag). However, the exact differences in our virtual and physical head pose (DVP) vary throughout the movement. We recently proposed that large amplitude, time varying patterns of DVP were the primary trigger for cybersickness in active HMD VR. This study tests the DVP hypothesis by measuring the sickness, and estimating the DVP, produced by head rotations under different levels of imposed display lag (from 0 to 200 ms). On each trial, users made continuous, oscillatory head movements in either yaw, pitch or roll while seated inside a large virtual room. Afterwards, we used the level of imposed display lag for the condition, and the user's own tracked head-motion data, to estimate their DVP time series data for each trial. Irrespective of the axis or the speed of the head movement, we found that DVP reliably predicted our participants' experiences of cybersickness. Significant positive linear relationships were found between the severity of their sickness and the mean, peak and standard deviation of this DVP data. Thus, our DVP hypothesis appears to offer significant advantages over existing (general) theories of motion sickness in terms of understanding user experiences in HMD VR. Instead of merely speculating about the presence, or degree, of sensory conflict in a particular simulation, DVP can be used to estimate the conflict produced by the active HMD VR. Importantly, this DVP is an objective measure of the stimulation (not an internal model of the user's sensory processing). Compared to its many competitors, DVP also appears to provide a simpler operational definition of the provocative stimulation for cybersickness (since it is focussed only on movements of the head, not the body or limbs).\n
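The DVP estimation step lends itself to a short sketch: given tracked head orientations and a known imposed lag, the virtual pose is approximately the physical pose delayed by the lag, so the DVP at each frame is the rotation between the current sample and the sample one lag earlier. The quaternion layout and the dense, roughly uniform sampling below are illustrative assumptions.

import numpy as np
from scipy.spatial.transform import Rotation as R

def dvp_series(quats, t, lag_s):
    # quats: (N, 4) tracked head orientations (x, y, z, w); t: timestamps (s).
    # The virtual head pose trails the physical pose by the display lag, so
    # DVP(t) is the rotation from the pose at (t - lag) to the pose at t.
    rot = R.from_quat(quats)
    idx = np.clip(np.searchsorted(t, t - lag_s), 0, len(t) - 1)
    return (rot[idx].inv() * rot).magnitude()  # angle in radians, per frame

def dvp_summary(angles_rad):
    # The mean, peak and standard deviation of the DVP time series are the
    # predictors of sickness severity reported above.
    return {"mean": angles_rad.mean(),
            "peak": angles_rad.max(),
            "sd": angles_rad.std()}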
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Depth perception from successive occlusion.\n \n \n \n \n\n\n \n Lee, A. R. I., Allison, R. S., & Wilcox, L. M.\n\n\n \n\n\n\n In Journal of Vision (Vision Sciences Society Abstracts), volume 21, pages 1963. 2021.\n \n\n\n\n
\n\n\n\n \n \n \"Depth-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Lee:2021aa,\n\tabstract = {Occlusion of one object by another is one of the strongest and best-known pictorial cues to depth. However, it has been suggested that, in addition to a cumulative sense of depth, successive occlusions of previous objects by newly presented objects can give rise to illusory motion in depth (Engel, Remus \\& Sainath, 2006). Engel and colleagues (2006) found that a stacking disk stimulus, where each disk occludes a previous disk in a pile, generates a strong sensation of the stack moving towards the observer. While the perceived motion associated with this illusion has been studied, the resultant depth percept has not. To investigate if the successive introduction of occluding objects affected the perceived depth of a stacked disk stimulus, we compared two conditions. In one, participants were presented with two static piles of disks, while in the other, participants viewed one static and one stacking pile of disks. In both conditions, we presented 20 disks in one pile and a range of disks in the other using a method of constant stimuli. Participants indicated which pile appeared taller. The proportion of `taller' responses were fit with cumulative normal psychometric functions from which we calculated points of subjective equality for the number of disks in each pile. We found static piles with the same number of disks appeared approximately equal in height. In contrast, the successive presentation of disks in the stacking condition appeared to enhance the perceived height of the stack - fewer disks were needed to match the static pile. Surprisingly, we also found just-noticeable differences varied between conditions: the task was easier when participants compared stacking vs. static piles of disks. Our results suggest that successive occlusions generate a greater sense of height than occlusion alone, and that dynamic occlusion may be an underappreciated source of depth information.},\n\tauthor = {Abigail R. I. Lee and Robert S. Allison and Laurie M. Wilcox},\n\tbooktitle = {Journal of Vision (Vision Sciences Society Abstracts)},\n\tdate-added = {2021-09-06 08:58:21 -0400},\n\tdate-modified = {2021-09-06 08:59:56 -0400},\n\tdoi = {10.1167/jov.21.9.1963},\n\tkeywords = {Stereopsis},\n\tpages = {1963},\n\ttitle = {Depth perception from successive occlusion},\n\tvolume = {21},\n\tyear = {2021},\n\turl-1 = {https://doi.org/10.1167/jov.21.9.1963}}\n\n
\n
\n\n\n
\n Occlusion of one object by another is one of the strongest and best-known pictorial cues to depth. However, it has been suggested that, in addition to a cumulative sense of depth, successive occlusions of previous objects by newly presented objects can give rise to illusory motion in depth (Engel, Remus & Sainath, 2006). Engel and colleagues (2006) found that a stacking disk stimulus, where each disk occludes a previous disk in a pile, generates a strong sensation of the stack moving towards the observer. While the perceived motion associated with this illusion has been studied, the resultant depth percept has not. To investigate if the successive introduction of occluding objects affected the perceived depth of a stacked disk stimulus, we compared two conditions. In one, participants were presented with two static piles of disks, while in the other, participants viewed one static and one stacking pile of disks. In both conditions, we presented 20 disks in one pile and a range of disks in the other using a method of constant stimuli. Participants indicated which pile appeared taller. The proportions of 'taller' responses were fit with cumulative normal psychometric functions from which we calculated points of subjective equality for the number of disks in each pile. We found static piles with the same number of disks appeared approximately equal in height. In contrast, the successive presentation of disks in the stacking condition appeared to enhance the perceived height of the stack: fewer disks were needed to match the static pile. Surprisingly, we also found that just-noticeable differences varied between conditions: the task was easier when participants compared stacking vs. static piles of disks. Our results suggest that successive occlusions generate a greater sense of height than occlusion alone, and that dynamic occlusion may be an underappreciated source of depth information.\n
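The PSE estimation described above reduces to fitting a cumulative normal to the per-level choice proportions. A minimal sketch follows; the comparison levels and proportions in the example are hypothetical, not the study's data.

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def fit_pse(n_disks, p_taller, guess=(20.0, 2.0)):
    # Fit a cumulative normal psychometric function to the proportion of
    # 'taller' responses; mu is the PSE (the number of comparison disks that
    # matches the 20-disk pile) and sigma indexes the just-noticeable
    # difference.
    f = lambda x, mu, sigma: norm.cdf(x, mu, sigma)
    (mu, sigma), _ = curve_fit(f, n_disks, p_taller, p0=list(guess))
    return mu, sigma

# Hypothetical example: comparison piles of 14..26 disks and the proportion
# of 'taller' responses at each level.
mu, sigma = fit_pse(np.arange(14, 27, 2),
                    np.array([0.05, 0.15, 0.35, 0.55, 0.80, 0.92, 0.97]))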
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n inproceedings\n \n \n (4)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n Why do we get sick during HMD-based Virtual Reality?.\n \n \n \n\n\n \n Palmisano, S., Allison, R. S., & Kim, J.\n\n\n \n\n\n\n In IEEE VR 2021 Workshop on Immersive Sickness Prevention (WISP), 2021. \n \n\n\n\n
\n
@inproceedings{Palmisano:2021bs,\n\tauthor = {Palmisano, S.A. and Allison, R. S. and Kim, J.},\n\tbooktitle = {IEEE VR 2021 Workshop on Immersive Sickness Prevention (WISP)},\n\tdate-added = {2023-03-21 17:31:46 -0400},\n\tdate-modified = {2023-03-21 17:31:58 -0400},\n\tkeywords = {Augmented & Virtual Reality},\n\ttitle = {Why do we get sick during HMD-based Virtual Reality?},\n\tyear = {2021}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Subjective Quality Assessment of VESA Display Stream Compression Codecs.\n \n \n \n \n\n\n \n Allison, R. S., & Wilcox, L. M.\n\n\n \n\n\n\n In 21st International Meeting on Information Display, IMID 2021 Digest, pages 29, Seoul, Korea, 08 2021. \n \n\n\n\n
\n\n\n\n \n \n \"SubjectivePaper\n  \n \n \n \"Subjective-1\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@inproceedings{Allison:2021ss,\n\tabstract = {VESA DSC and VDC-M (https://vesa.org/vesa-display-compression-codecs/) are in widespread usage in millions of display systems. This rollout was preceded by extensive and targeted subjective quality assessment to validate predictions of codec quality and visually lossless behaviour. In this talk, we will overview the assessment activities to date and their extension to applications in immersive displays. Our focus will be on subjective testing at York University using the ISO 29170-2 Appendix A protocol (1).\nIn the ISO 29170-2 `flicker paradigm', the test and reference are presented side-by-side on the display (Figure 1). The test consists of the compressed image temporally interleaved (alternating) with the uncompressed version at a fixed frequency (typically 5 Hz). In the reference sequence, the uncompressed image alternates with itself. Participants view the test and reference sequences side by side and are asked to identify the compressed image (i.e., which image sequence contained flicker). We have also developed and implemented modified versions of the protocol to evaluate moving and stereoscopic displays. This testing has proceeded in discrete stages including:\n* Validation of visually lossless performance in a wide range of representative image samples\n* Confirmation of visually lossless performance in chroma subsampled images and moving content\n* Assessment of compression performance with high-dynamic range content\n* Assessment of compression performance with stereoscopic 3D content\n* Assessment of the effects of chromatic aberration correction on codec performance\nTesting has focused on challenging test cases to optimize the effort and benefit of time-consuming subjective assessment studies. Generally, both DSC and VDC-M have met expectations for visually lossless performance over a wide variety of content and use cases. Flicker testing is a highly conservative test procedure and codec performance in real world scenarios is expected to exceed that found under the harsher conditions of flicker testing. \n},\n\taddress = {Seoul, Korea},\n\tannote = { IMID 2021, which will be held at COEX in Seoul, Korea from August 25 to 27, 2021, },\n\tauthor = {Robert S. Allison and Laurie M. Wilcox},\n\tbooktitle = {21st International Meeting on Information Display, IMID 2021 Digest},\n\tdate-added = {2021-09-06 09:27:35 -0400},\n\tdate-modified = {2021-09-16 08:50:08 -0400},\n\tkeywords = {Stereopsis},\n\tmonth = {08},\n\tpages = {29},\n\ttitle = {Subjective Quality Assessment of VESA Display Stream Compression Codecs},\n\turl = {https://upload.congkong.net/imid2021/imid2021-e-proceedings.pdf},\n\tyear = {2021},\n\turl-1 = {https://upload.congkong.net/imid2021/imid2021-e-proceedings.pdf}}\n\n
Sensitivity of VESA Display Stream Compression Codecs to Chromatic Aberration. Au, D., Mohona, S. S., Hou, Y., Kio, O. G., Goel, J., Jacobson, N., Allison, R. S., & Wilcox, L. M. In 21st International Meeting on Information Display, IMID 2021 Digest, pages 32, Seoul, Korea, 08 2021.
@inproceedings{Au:2021mb,\n\taddress = {Seoul, Korea},\n\tannote = { IMID 2021, which will be held at COEX in Seoul, Korea from August 25 to 27, 2021, },\n\tauthor = {Domenic Au and Sanjida Sharmin Mohona and Yuqian Hou and Onoise Gerald Kio and James Goel and Natan Jacobson and Robert S. Allison and Laurie M. Wilcox},\n\tbooktitle = {21st International Meeting on Information Display, IMID 2021 Digest},\n\tdate-added = {2021-09-06 09:27:35 -0400},\n\tdate-modified = {2021-09-06 09:27:35 -0400},\n\tkeywords = {Stereopsis},\n\tmonth = {08},\n\tpages = {32},\n\ttitle = {Sensitivity of VESA Display Stream Compression Codecs to Chromatic Aberration},\n\tyear = {2021}}\n\n
ArtScience and the ICECUBE LED Display. Hosale, M. D., Allison, R. S., Madsen, J., & Gordon, M. In Proceedings of the 29th ACM International Conference on Multimedia, pages 3720–3727, Chengdu, China, 2021.
@inproceedings{Hosale:2021ss,\n\taddress = {Chengdu, China},\n\tannote = {Multimedia '21: ACM Multimedia 2021, October 20--24, 2021,},\n\tauthor = {Hosale, M. D. and Allison, R. S. and Madsen, J. and Gordon, M.},\n\tbooktitle = {Proceedings of the 29th ACM International Conference on Multimedia},\n\tdate-added = {2021-07-14 22:07:57 -0400},\n\tdate-modified = {2022-08-17 16:26:20 -0400},\n\tdoi = {10.1145/3474085.3475524},\n\tkeywords = {Augmented & Virtual Reality},\n\tpages = {3720--3727},\n\ttitle = {ArtScience and the ICECUBE LED Display},\n\tyear = {2021},\n\turl-1 = {https://doi.org/10.1145/3474085.3475524}}\n\n
techreport (1)
Validating the Use of Compression for Automotive Displays. Goel, J., Stolitzka, D., Smith, I., Jacobson, N., Legault, A., Wiley, K., Wietfeld, R., Wiesner, C., Yee, K., Allison, R., Wiley, C., Wilcox, L., Kerley, S., Au, D., & Cole, M. Technical Report, MIPI Alliance, 2021.
@techreport{Goel:2021aa,\n\tabstract = {Use of image compression is essential to address the proliferation of high-performance displays in next\ngeneration vehicles. This paper details the trends impacting automotive display design and describes a\nnew MIPI Display Working Group (DWG) study that verifies how the use of Video Electronics Standards\nAssociation Display Compression-M (VDC-M) within the MIPI Display Serial Interface 2 (DSI-2SM)\nprotocol can provide visually lossless compression for automotive displays.},\n\tauthor = {James Goel and Dale Stolitzka and Ian Smith and Natan Jacobson and Alain Legault and Kendra Wiley and Rick Wietfeld and Chris Wiesner and Kevin Yee and Robert Allison and Craig Wiley and Laurie Wilcox and Sharmion Kerley and Domenic Au and Melanie Cole},\n\tdate-added = {2021-04-03 08:10:40 -0400},\n\tdate-modified = {2021-04-03 08:14:36 -0400},\n\tinstitution = {MIPI Alliance},\n\tkeywords = {Image Quality},\n\ttitle = {Validating the Use of Compression for Automotive Displays},\n\tyear = {2021}}\n\n
2020 (19)
article (5)
Perceiving self-motion in a field of jittering lollipops from ages 4 to 95. Bury, N., Jenkin, M., Allison, R. S., & Harris, L. R. PLOS ONE, 15(10): e0241087. 2020.
@article{Bury:aa,\n\tauthor = {Bury, N. and Jenkin, M. and Allison, R. S. and Harris, L. R.},\n\tdate-added = {2020-10-09 08:40:15 -0400},\n\tdate-modified = {2020-10-23 15:59:39 -0400},\n\tdoi = {10.1371/journal.pone.0241087},\n\tjournal = {PLOS ONE},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tnumber = {10},\n\tpages = {e0241087},\n\ttitle = {Perceiving self-motion in a field of jittering lollipops from ages 4 to 95},\n\tvolume = {15},\n\tyear = {2020},\n\turl-1 = {https://doi.org/10.1371/journal.pone.0241087}}\n\n
Cybersickness in Head-Mounted Displays is Caused by Differences in the User's Virtual and Physical Head Pose. Palmisano, S., Allison, R. S., & Kim, J. Frontiers in Virtual Reality, 1: Article 587698. 2020.
@article{Palmisano:ab,\n\tabstract = {Sensory conflict, eye-movement, and postural instability theories each have difficulty\naccounting for the motion sickness experienced during head-mounted display based\nvirtual reality (HMD VR). In this paper we review the limitations of existing theories in\nexplaining cybersickness and propose a practical alternative approach. We start by\nproviding a clear operational definition of provocative motion stimulation during active\nHMD VR. In this situation, whenever the user makes a head movement, his/her virtual\nhead will tend to trail its true position and orientation due to the display lag (or motion to\nphoton latency). Importantly, these differences in virtual and physical head pose (DVP)\nwill vary over time. Based on our own research findings, we propose that cybersickness\nin HMD VR is triggered by large magnitude, time-varying patterns of DVP. We then\nshow how this hypothesis can be tested by: (1) systematically manipulating display lag\nmagnitudes and head movement speeds across HMD VR conditions; and (2) comparing\nthe user's estimates of DVP and cybersickness produced in each of these conditions.\nWe believe that this approach will allow researchers to precisely predict which situations\nwill (and will not) be provocative for cybersickness in HMD VR.},\n\tannote = {Citation:\nPalmisano S, Allison RS and Kim J\n(2020) Cybersickness in\nHead-Mounted Displays Is Caused by\nDifferences in the User's Virtual and\nPhysical Head Pose.\nFront. Virtual Real. 1:587698.\ndoi: 10.3389/frvir.2020.587698},\n\tauthor = {Palmisano, S. and Allison, R. S. and Kim, J.},\n\tdate-added = {2020-10-02 08:41:49 -0400},\n\tdate-modified = {2020-10-23 19:07:21 -0400},\n\tdoi = {10.3389/frvir.2020.587698},\n\tjournal = {Frontiers in Virtual Reality},\n\tkeywords = {Augmented & Virtual Reality},\n\tpages = {Article 587698},\n\ttitle = {Cybersickness in Head-Mounted Displays is Caused by Differences in the User's Virtual and Physical Head Pose},\n\tvolume = {1},\n\tyear = {2020},\n\turl-1 = {https://doi.org/10.3389/frvir.2020.587698}}\n\n
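As a concrete reading of the DVP definition in the abstract above, the sketch below estimates time-varying differences in virtual and physical head pose from a logged head-yaw trace under a constant display lag. The sampling rate, lag, and oscillation values are illustrative assumptions, not values from the paper.

import numpy as np

fs = 90.0                        # assumed head-tracker sampling rate (Hz)
lag_s = 0.100                    # assumed constant display (motion-to-photon) lag (s)
t = np.arange(0.0, 5.0, 1.0 / fs)
physical_yaw = 20.0 * np.sin(2.0 * np.pi * 0.5 * t)   # deg; 0.5 Hz head oscillation

# The virtual head pose trails the physical pose by the display lag.
virtual_yaw = np.interp(t - lag_s, t, physical_yaw)

dvp = virtual_yaw - physical_yaw            # time-varying DVP (deg)
mean_dvp_magnitude = np.mean(np.abs(dvp))   # one simple summary of its spatial size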
Comparing Head Gesture, Hand Gesture and Gamepad Interfaces for Answering Yes/No Questions in Virtual Environments. Zhao, J., & Allison, R. S. Virtual Reality, 24: 515-524. 2020.
@article{Zhao:2019fj,\n\tauthor = {Zhao, J. and Allison, R. S.},\n\tdate-added = {2019-11-25 17:52:08 -0500},\n\tdate-modified = {2020-09-29 11:22:35 -0400},\n\tdoi = {10.1007/s10055-019-00416-7},\n\tjournal = {Virtual Reality},\n\tkeywords = {Augmented & Virtual Reality},\n\tpages = {515-524},\n\ttitle = {Comparing Head Gesture, Hand Gesture and Gamepad Interfaces for Answering Yes/No Questions in Virtual Environments},\n\turl = {http://link.springer.com/article/10.1007/s10055-019-00416-7},\n\tvolume = {24},\n\tyear = {2020},\n\turl-1 = {http://link.springer.com/article/10.1007/s10055-019-00416-7},\n\turl-2 = {https://doi.org/10.1007/s10055-019-00416-7}}\n\n
The Stereoscopic Advantage for Vection Persists Despite Reversed Disparity. Palmisano, S., Nakamura, S., Allison, R. S., & Riecke, B. Attention, Perception and Psychophysics, 82: 2098–2118. 2020.
@article{Palmisano:aa,\n\tauthor = {Palmisano, S.A. and Nakamura, S. and Allison, R. S. and Riecke, B.},\n\tdate-added = {2019-09-05 09:21:06 -0400},\n\tdate-modified = {2020-06-17 08:24:37 -0400},\n\tdoi = {10.3758/s13414-019-01886-2},\n\tjournal = {Attention, Perception and Psychophysics},\n\tkeywords = {Stereopsis, Optic flow & Self Motion (also Locomotion & Aviation)},\n\tpages = {2098--2118},\n\ttitle = {The Stereoscopic Advantage for Vection Persists Despite Reversed Disparity},\n\turl = {https://rdcu.be/b4YOT},\n\tvolume = {82},\n\tyear = {2020},\n\turl-1 = {https://doi.org/10.3758/s13414-019-01886-2}}\n\n
Contributions of stereopsis and aviation experience to simulated rotary wing altitude estimation. Hartle, B., Sudhama, A., Deas, L. M., Allison, R. S., Irving, E. L., Glaholt, M., & Wilcox, L. M. Human Factors: The Journal of the Human Factors and Ergonomics Society, 62(5): 812-824. 2020.
@article{Hartle:pb,\n\tabstract = {Objective: We examined the contribution of binocular vision and experience to performance on a simulated helicopter flight task. \n\nBackground: Although there is a long history of research on the role of binocular vision and stereopsis in aviation, there is no consensus on its operational relevance. This work addresses this using a naturalistic task in a virtual environment.\n\nMethod: Four high-resolution stereoscopic terrain types were viewed monocularly and binocularly. In separate experiments, we evaluated performance of undergraduate students and military aircrew on a simulated low hover altitude judgment task. Observers were asked to judge the distance between a virtual helicopter skid and the ground plane.\n\nResults: Our results show that for both groups, altitude judgments are more accurate in the binocular viewing condition than in the monocular condition. However, in the monocular condition, aircrew were more accurate than undergraduate observers in estimating height of the skid above the ground.\n\nConclusion: At simulated altitudes of 5 ft (1.5 m) or less, binocular vision provides a significant advantage for estimation of the depth separation between the landing skid and the ground, regardless of relevant operational experience. However, when binocular cues are unavailable aircrew outperform undergraduate observers, a result that likely reflects the impact of training on the ability to interpret monocular depth cues.},\n\tauthor = {Hartle, B. and Sudhama, Aishwarya and Deas, Lesley M. and Allison, Robert S. and Irving, Elizabeth L. and Glaholt, Mackenzie and Wilcox, Laurie M.},\n\tdate-added = {2019-06-08 18:33:06 -0400},\n\tdate-modified = {2020-09-27 17:15:09 -0400},\n\tdoi = {10.1177/0018720819853479},\n\tjournal = {Human Factors : The Journal of the Human Factors and Ergonomics Society},\n\tkeywords = {Stereopsis},\n\tnumber = {5},\n\tpages = {812-824},\n\ttitle = {Contributions of stereopsis and aviation experience to simulated rotary wing altitude estimation},\n\turl-1 = {https://doi.org/10.1177/0018720819853479},\n\tvolume = {62},\n\tyear = {2020},\n\turl-1 = {https://doi.org/10.1177/0018720819853479}}\n\n
incollection (9)
ModVis: Modeling biases of perceived slant in curved surfaces. Tong, J., Allison, R. S., & Wilcox, L. M. In ModVis 2020. 2020.
@incollection{Tong:2020aa,\n\tauthor = {Tong, J. and Allison, R. S. and Wilcox, L. M.},\n\tbooktitle = {ModVis 2020},\n\tdate-added = {2023-03-21 17:30:42 -0400},\n\tdate-modified = {2023-03-21 17:31:11 -0400},\n\tkeywords = {Stereopsis},\n\ttitle = {ModVis: Modeling biases of perceived slant in curved surfaces},\n\tyear = {2020}}\n\n
Sex/gender differences in the perception of distance and self-motion. Jörges, B., Bury, N., McManus, M., Allison, R., Jenkin, M., & Harris, L. In 7th International Symposium on Visually-Induced Motion Sensations, VIMS 2020, Hong Kong, 12 2020.
@incollection{Jorges:2021aa,\n\taddress = {Hong Kong},\n\tauthor = {Bj{\\"o}rn J{\\"o}rges and Nils Bury and Meaghan McManus and Robert Allison and Michael Jenkin and Laurence Harris},\n\tbooktitle = {7th International Symposium on Visually-Induced Motion Sensations, VIMS 2020},\n\tdate-added = {2021-09-06 10:22:41 -0400},\n\tdate-modified = {2021-09-06 10:22:41 -0400},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tmonth = {12},\n\ttitle = {Sex/gender differences in the perception of distance and self-motion},\n\turl = {https://ieda.ust.hk/dfaculty/so/VIMS2020/},\n\tyear = {2020},\n\turl-1 = {https://ieda.ust.hk/dfaculty/so/VIMS2020/}}\n\n
The perception of visually simulated self-motion is altered by body posture. Jörges, B., Bury, N., McManus, M., Allison, R. S., Jenkin, M., & Harris, L. R. In 3rd Interdisciplinary Navigation (iNAV2020) Symposium Proceedings, pages 64. 2020.
@incollection{Jorges:aa,\n\tabstract = {The perception of self-motion is a multisensory process involving visual and vestibular cues, among others. Visual cues may become more important in visual-vestibular tasks when vestibular cues are attenuated, for example in determining the perceptual upright while lying supine[1]. We tested whether this effect might generalize to self-motion perception, where a higher effectiveness of visual cues should lead to an overestimation of traveled distance. We immersed participants in a virtual hallway and showed them targets at different distances ahead of them. The targets disappeared and participants experienced optic flow simulating straight-ahead self-motion. They indicated by button press when they felt they had reached the position of the target previously viewed. Participants also performed a control task to assess biases in depth perception. We showed them virtual boxes at different distances and they judged on each trial if the height of the box was bigger or smaller than a ruler in their hands. Perceived distance can be deduced from biases in perceived size. They performed both tasks sitting upright and lying supine. For the main task, we found that participants needed less optic flow to perceive they had reached the target's position when supine than when sitting (by 4.4\\%, 95\\% CI=[2.9\\%;6.3\\%], using Mixed Modelling). For the control task, participants underestimated the distance slightly less when supine (by 2.5\\%, 95\\% CI = [0.05\\%;5.00\\%], as above). When supine, participants needed to travel less far compared to sitting, even though they overestimated distance while supine versus sitting. The bias in traveled distance can thus not be reduced to a bias in perceived distance. Our experiment provides evidence that visual information is more important for the perception of self-motion when gravity is not aligned with the long body axis. We acknowledge the generous support of the Canadian Space Agency (15ILSRA1-York). [1] Dyde et al. (2006) Exp Brain Res 173:612--22},\n\tannote = {Oct 5-7, 2020 virtual meeting},\n\tauthor = {Bj{\\"o}rn J{\\"o}rges and Nils Bury and Meaghan McManus and Robert S. Allison and Michael Jenkin and Laurence R. Harris},\n\tbooktitle = {3rd Interdisciplinary Navigation (iNAV2020) Symposium Proceedings},\n\tdate-added = {2020-10-27 13:50:05 -0400},\n\tdate-modified = {2020-10-27 13:50:05 -0400},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tpages = {64},\n\ttitle = {The perception of visually simulated self-motion is altered by body posture},\n\turl = {https://inavsymposium.com/wp-content/uploads/2020/10/Data_Blitz_Booklet_2020.pdf},\n\tyear = {2020},\n\turl-1 = {https://inavsymposium.com/wp-content/uploads/2020/10/Data_Blitz_Booklet_2020.pdf}}\n\n
Locomotor decision-making altered by different walking interfaces in virtual reality. Kuo, C., & Allison, R. S. In Vision Sciences Society Annual Conference (Accepted but not presented because of COVID pandemic cancellation). 2020.
@incollection{Kuo:aa,\n\tabstract = {Walking interfaces for Virtual Reality often produce proprioceptive, vestibular and somatosensory signals which conflict with the visual presentation of terrain conditions in virtual environments. We compared locomotion decisions made using a dual joystick gamepad with a walking-in-place metaphor. Each trial presented two choices where the visual path condition differed in one of the following aspects: (a) incline, (b) friction, (c) texture, and (d) width. Users chose one of these paths by using the locomotion interface to walk to a goal. Their decisions were recorded and analyzed as a generalized linear mixed model. The results suggest that the walking-in-place interface produces choices of visual conditions that more often reflect expectations of walking in the real world: decisions that minimize energy expended or risk of injury. Because of this, we can infer that different walking interfaces can produce different results in virtual reality experiments. Therefore, behavioral scientists should be wary that sensory discrepancies between visual presentation and other modalities can negatively affect the ecological validity of studies using virtual reality. Consideration should be taken designing these studies to ensure that sensory inputs are as natural and consistent between modalities as possible.\n\n},\n\tauthor = {Kuo, C. and Allison, R. S.},\n\tbooktitle = {Vision Sciences Society Annual Conference (Accepted but not presented because of COVID pandemic cancellation)},\n\tdate-added = {2020-10-27 13:38:54 -0400},\n\tdate-modified = {2020-10-27 13:45:03 -0400},\n\tkeywords = {Augmented & Virtual Reality},\n\ttitle = {Locomotor decision-making altered by different walking interfaces in virtual reality},\n\tyear = {2020}}\n\n
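The abstract above says the binary path choices were analyzed as a generalized linear mixed model, but does not name the software. As one possible setup, this hedged Python sketch fits a logistic mixed model with a per-participant random intercept using statsmodels; all column names and the toy data are illustrative, not the study's.

import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Illustrative data: 1 = chose the visually easier/safer path on that trial.
data = pd.DataFrame({
    "chose_easier": [1, 0, 1, 1, 0, 1, 1, 1] * 10,
    "interface":    ["gamepad", "wip"] * 40,          # wip = walking in place
    "attribute":    ["incline", "friction", "texture", "width"] * 20,
    "subject":      ["s1"] * 20 + ["s2"] * 20 + ["s3"] * 20 + ["s4"] * 20,
})

# Logistic mixed model: fixed effects of interface and path attribute,
# random intercept per participant, fit by variational Bayes.
model = BinomialBayesMixedGLM.from_formula(
    "chose_easier ~ C(interface) + C(attribute)",
    {"subject": "0 + C(subject)"},
    data,
)
result = model.fit_vb()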
The impact of motion gain on egocentric distance judgments from motion parallax. Cutone, M., Wilcox, L. M., & Allison, R. S. In Journal of Vision (Vision Sciences Society Abstracts), volume 20, pages 1426. 2020.
@incollection{Cutone:aa,\n\tabstract = {For self-generated motion parallax, a sense of head velocity is needed to estimate distance from object motion. This information can be obtained from proprioceptive and visual sources. If visual and kinesthetic information are incongruent, the visual motion of objects will not match the sensed physical velocity of the head, resulting in a distortion of perceived distances. We assessed this prediction by varying the gain between physical observer head motion and the simulated motion. Given that the relative and absolute motion parallax would be greater than expected from head motion when gain was greater than 1.0, we anticipated that this manipulation would result in objects appearing closer to the observer. Using an HMD, we presented targets 1 to 3 meters away from the observer within a cue rich environment with textured walls and floors. Participants stood and swayed laterally at a rate of 0.5 Hz paced using a metronome. Lateral gain was applied by amplifying their real position by factors of 1.0 to 3.0, then using that to set the instantaneous viewpoint within the virtual environment. After presentation, the target disappeared and the participant performed a blind walk and reached for it. Their hand position was recorded and we computed positional errors relative to the target. We found no effect of motion parallax gain manipulation on binocular reaching accuracy. In a second study we evaluated the role of stereopsis in counteracting the anticipated distortion in perceived space by testing observers monocularly. In this case, distances were perceived as nearer as gain increased, but the effects were relatively small. Taken together our results suggest that observers are flexible in their interpretation of observer produced motion parallax during active head movement. This provides considerable tolerance of spatial perception to mismatches between physical and virtual motion in rich virtual environments.},\n\tauthor = {Cutone, M. and Wilcox, L. M. and Allison, R. S.},\n\tbooktitle = {Journal of Vision (Vision Sciences Society Abstracts)},\n\tdate-added = {2020-10-27 13:09:43 -0400},\n\tdate-modified = {2020-10-27 13:09:43 -0400},\n\tdoi = {10.1167/jov.20.11.1426},\n\tkeywords = {Augmented & Virtual Reality},\n\tnumber = {11},\n\tpages = {1426},\n\ttitle = {The impact of motion gain on egocentric distance judgments from motion parallax},\n\tvolume = {20},\n\tyear = {2020},\n\turl-1 = {https://doi.org/10.1167/jov.20.11.1426}}\n\n
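The lateral gain manipulation described in this abstract amounts to amplifying the tracked head position about a reference point before setting the virtual viewpoint. A minimal sketch, with illustrative names (not the study's code):

def virtual_viewpoint(head_pos, gain, center_x=0.0):
    # Amplify lateral (x) sway about a reference point; y and z pass through.
    x, y, z = head_pos
    return (center_x + gain * (x - center_x), y, z)

# A 3 cm physical sway becomes 9 cm of virtual sway at gain 3.0.
print(virtual_viewpoint((0.03, 1.7, 0.0), gain=3.0))   # -> (0.09, 1.7, 0.0)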
Modeling biases of perceived slant in curved surfaces. Tong, J., Allison, R. S., & Wilcox, L. M. In Journal of Vision (Vision Sciences Society Abstracts), volume 20, pages 561. 2020.
@incollection{Tong:aa,\n\tabstract = {Veridical perception of surface slant is important to everyday tasks such as traversing terrain and interacting with or placing objects on surfaces. However, natural surfaces contain higher-order depth variation, or curvature, which may impact how slant is perceived. We propose a computational model which predicts that curvature, real or distortion-induced, biases the perception of surface slant. The model is based on the perspective projection of surfaces to form ``retinal images'' containing monocular and binocular texture cues (gradients) for slant estimation. Curvature was either intrinsic to the modelled surface or induced by non-uniform magnification i.e. radial distortion (typical in wide-angle lenses and head-mounted display optics). The resulting binocular and monocular texture gradients derived from these conditions make specific predictions regarding perceived surface slant. In a series of psychophysical experiments we tested these predictions using slant discrimination and magnitude estimation tasks. Our results confirm that local slant estimation is biased in a manner consistent with apparent surface curvature. Further we show that for concave surfaces, irrespective of whether curvature is intrinsic or distortion-induced, there is a net underestimation of global surface slant. Somewhat surprisingly, we also find that the observed biases in global slant are driven largely by the texture gradients and not by the concurrent changes in binocular disparity. This is due to vertical asymmetry in texture gradients of curved surfaces with overall slant. Our results show that while there is a potentially complex interaction between surface curvature and slant perception, much of the perceptual data can be predicted by a relatively simple model based on perspective projection. The work highlights the importance of evaluating the impact of higher-order variations on perceived surface attitude, particularly in virtual environments in which curvature may be intrinsic or caused by optical distortion.},\n\tauthor = {Tong, J. and Allison, R. S. and Wilcox, L. M.},\n\tbooktitle = {Journal of Vision (Vision Sciences Society Abstracts)},\n\tdate-added = {2020-10-27 13:09:43 -0400},\n\tdate-modified = {2020-10-27 13:09:43 -0400},\n\tdoi = {10.1167/jov.20.11.561},\n\tkeywords = {Stereopsis},\n\tnumber = {11},\n\tpages = {561},\n\ttitle = {Modeling biases of perceived slant in curved surfaces},\n\tvolume = {20},\n\tyear = {2020},\n\turl-1 = {https://doi.org/10.1167/jov.20.11.561}}\n\n
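The `non-uniform magnification i.e. radial distortion' that this model treats as a source of apparent curvature is the standard first-order radial distortion mapping. A minimal sketch, with an illustrative coefficient and point set:

import numpy as np

def radial_distort(xy, k1):
    # Standard first-order radial distortion: x' = x * (1 + k1 * r**2).
    r2 = np.sum(xy ** 2, axis=1, keepdims=True)
    return xy * (1.0 + k1 * r2)

pts = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5], [0.5, 0.5]])
print(radial_distort(pts, k1=-0.2))   # negative k1: barrel (edges compressed)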
Pseudoscopic vection: Reversing stereo continues to improve self-motion perception despite increased conflict. Palmisano, S., Nakamura, S., Allison, R., & Riecke, B. In Journal of Vision (Vision Sciences Society Abstracts), volume 20, pages 339. 2020.
@incollection{Palmisano:2020df,\n\tabstract = {Research has shown that visual illusions of self-motion (vection) can be improved by adding consistent stereoscopic information to inducing displays. However here we examined the effect of placing this stereoscopic information into direct conflict with monocular motion signals (by swapping left and right eye views to reverse disparity). We compared the vection in depth induced by stereo-consistent, stereo-reversed and flat-stereo displays. We also manipulated the amount of monocular self-motion information in these inducing displays (by providing explicit changing-size cues in half of the trials). As expected, we found that stereo-consistent conditions improved the vection induced by both changing-size and same-size patterns of optic flow (relative to their equivalent flat-stereo conditions). However, stereo-reversed conditions were also found to improve the vection induced by same-size patterns of optic flow. Additional evidence from our experiments suggested that all of these stereoscopic advantages for vection were due to the effects on perceived motion-in-depth (not perceived scene depth). These findings demonstrate that stereoscopic information does not need to be consistent with monocular motion signals in order to improve vection in depth. Rather they suggest that stereoscopic information only needs to be dynamic (as opposed to static) in order to enhance the experiences of vection induced by optic flow.},\n\tannote = {VSS 2020},\n\tauthor = {Stephen Palmisano and Shinji Nakamura and Robert Allison and Bernhard Riecke},\n\tbooktitle = {Journal of Vision (Vision Sciences Society Abstracts)},\n\tdate-added = {2020-10-27 13:09:43 -0400},\n\tdate-modified = {2020-10-27 13:09:59 -0400},\n\tdoi = {10.1167/jov.20.11.339},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tnumber = {11},\n\tpages = {339},\n\ttitle = {Pseudoscopic vection: Reversing stereo continues to improve self-motion perception despite increased conflict.},\n\tvolume = {20},\n\tyear = {2020},\n\turl-1 = {https://doi.org/10.1167/jov.20.11.339}}\n\n
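The stereo-reversed (pseudoscopic) condition described above is produced simply by swapping the two half-images, which reverses the sign of every binocular disparity while leaving each eye's monocular motion signal intact. A minimal sketch, with illustrative names:

def render_stereo(left_img, right_img, reversed_stereo=False):
    # Swapping the half-images reverses disparity sign; the monocular
    # motion signal seen by each eye is unchanged.
    if reversed_stereo:
        left_img, right_img = right_img, left_img
    return left_img, right_img   # images sent to the left and right eye displays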
Effects of simulated head motion and saccade direction on sensitivity to transsaccadic image motion. Keyvanara, M., & Allison, R. S. In Vestibular Oriented Research Meeting, Journal of Vestibular Research, volume 30, pages 142. 2020.
@incollection{Keyvanara:2020di,\n\tabstract = {Saccadic suppression of image displacement (SSD) is a perceptual feature of our visual system that occurs when we move our gaze from one fixation to another. SSD has mostly been studied with the head fixed. Normally when we move about we move our head as well as our eyes, although in virtual reality the virtual head movements may not correspond to the physical head movements, producing a conflict between vision and the vestibular sense. Here we investigated the SSD effect during simulated head movements. Participants' eyes were tracked as they viewed a set of 3D scenes with a constant (rightward) camera pan. They produced a horizontal (rightward) saccade upon displacement of an object in the scene, during which a sudden shift of the scene occurred in one of 10 different directions. Using a Bayesian adaptive procedure, we estimated thresholds for detection of these sudden camera movements. Within-subjects analysis showed that when users made horizontal saccades, horizontal image translations were significantly less detectable than vertical image translations and also less noticeable than in-depth translations. Likewise, horizontal transsaccadic rotations were significantly less detectable than vertical image rotations. These results imply that in a 3D virtual environment, when users pan their head while making a horizontal saccade, they would be less susceptible to noticing horizontal changes to their viewpoint that occur during a saccade compared to vertical or in-depth changes. We are currently extending these studies to measure SSD during actual head motions in immersive VR, allowing us to assess the contributions of the visual, vestibular and proprioceptive senses. The interaction between head motion, eye movement and suppression of graphical updates during saccades can provide insight into designing better VR experiences.},\n\tauthor = {Keyvanara, M. and Allison, R. S.},\n\tbooktitle = {Vestibular Oriented Research Meeting, Journal of Vestibular Research},\n\tdate-added = {2020-07-07 13:46:56 -0400},\n\tdate-modified = {2020-07-07 13:48:02 -0400},\n\tdoi = {10.3233/VES-200699},\n\tkeywords = {Eye Movements & Tracking},\n\tpages = {142},\n\ttitle = {Effects of simulated head motion and saccade direction on sensitivity to transsaccadic image motion},\n\tvolume = {30},\n\tyear = {2020},\n\turl-1 = {https://doi.org/10.3233/VES-200699}}\n\n
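The abstract refers to a Bayesian adaptive procedure for threshold estimation without specifying the exact algorithm. The sketch below is a generic grid-based (QUEST-style) posterior update under an assumed Weibull psychometric function; all constants and names are illustrative.

import numpy as np

thresholds = np.linspace(0.1, 10.0, 200)        # candidate detection thresholds (deg)
posterior = np.full(thresholds.shape, 1.0 / thresholds.size)

def p_detect(intensity, threshold, slope=2.0, lapse=0.02):
    # Assumed Weibull psychometric function for detecting the camera shift.
    return lapse / 2 + (1 - lapse) * (1 - np.exp(-(intensity / threshold) ** slope))

def update(posterior, intensity, detected):
    # Bayes rule over the threshold grid, given one detect/no-detect response.
    like = p_detect(intensity, thresholds)
    posterior = posterior * (like if detected else 1.0 - like)
    return posterior / posterior.sum()

# Each trial tests at the current posterior mean, then updates on the response.
intensity = float(np.sum(thresholds * posterior))
posterior = update(posterior, intensity, detected=True)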
Distance perception when real and virtual head motion do not match. Cutone, M., Wilcox, L. M., & Allison, R. S. In Vestibular Oriented Research Meeting, Journal of Vestibular Research, volume 30, pages 139. 2020.
@incollection{Cutone:wb,\n\tabstract = {For self-generated motion parallax, a sense of head velocity is needed to estimate distance from object motion (1). This information can be obtained from vestibular, proprioceptive, and visual sources. If the magnitude of efferent signals from the vestibular system produced by head motion does not correlate with the velocity gradient of the visible optic flow pattern, a conflict arises which leads to breakdown of motion-distance invariance. This potentially results in distortions of perceived distances to objects as visual and vestibular signals are non-concordant. We assessed this prediction by varying the gain between the observer's physical head motion and simulated motion. Given that the relative and absolute motion parallax would be greater than expected from head motion when gain was greater than 1.0, we anticipated that this manipulation would result in objects appearing closer to the observer. Using an HMD, we presented targets 1 to 3 meters away from the observer within a cue rich environment with textured walls and floors. Participants stood and swayed laterally at a rate of 0.5 Hz. Lateral gain was applied by amplifying their real position by factors of 1.0 to 3.0, then using that to set the instantaneous viewpoint within the virtual environment. After presentation, the target disappeared, and the participant performed a blind walk and reached for it. Their hand position was recorded, and we computed positional errors relative to the target. We found no effect of our motion parallax gain manipulation on binocular reaching accuracy. To evaluate the role of stereopsis in counteracting the anticipated distortion in perceived space, we tested observers on the same task monocularly. In this case, distances were perceived as nearer as gain increased, but the effects were relatively small. Taken together our results suggest that observers are flexible in their interpretation of observer produced motion parallax during active head movement. This provides considerable tolerance of spatial perception to mismatches between physical and virtual motion in rich virtual environments.},\n\tauthor = {Cutone, M. and Wilcox, L. M. and Allison, R. S.},\n\tbooktitle = {Vestibular Oriented Research Meeting, Journal of Vestibular Research},\n\tdate-added = {2020-05-21 13:02:13 -0400},\n\tdate-modified = {2020-07-07 13:48:40 -0400},\n\tdoi = {10.3233/VES-200699},\n\tkeywords = {Augmented & Virtual Reality},\n\tpages = {139},\n\ttitle = {Distance perception when real and virtual head motion do not match},\n\tvolume = {30},\n\tyear = {2020},\n\turl-1 = {https://doi.org/10.3233/VES-200699}}\n\n
inproceedings (5)
Validity Testing the NeuLog Galvanic Skin Response Device. Flagler, T., Tong, J., Allison, R. S., & Wilcox, L. M. In 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), October 11-14, 2020, Toronto, Canada, pages 3964-3968, 2020.
@inproceedings{Flagler:aa,\n\tannote = {2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC)October 11-14, 2020. Toronto, Canada},\n\tauthor = {Flagler, T. and Tong, J. and Allison, R. S. and Wilcox, L. M.},\n\tbooktitle = {2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), October 11-14, 2020. Toronto, Canada},\n\tdate-added = {2020-09-27 15:00:22 -0400},\n\tdate-modified = {2020-12-02 13:12:47 -0500},\n\tkeywords = {Augmented & Virtual Reality},\n\tpages = {3964-3968},\n\ttitle = {Validity Testing the NeuLog Galvanic Skin Response Device},\n\turl = {papers/flaglerSMC.pdf},\n\tyear = {2020},\n\turl-1 = {papers/flaglerSMC.pdf}}\n\n
Motion matters: Comparing presence induced by two locomotion interfaces using decision-making tasks in virtual reality. Kuo, C., & Allison, R. S. In 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), October 11-14, 2020, Toronto, Canada, pages 3283-3290, 2020.
@inproceedings{Kuo:ab,
  abstract = {Virtual environments can replicate the visual appearance of terrain conditions, but the movements involved in using the interfaces confer their own bodily sensations, which can be incongruent with the visual presentation. If naturalness of interaction is a major factor contributing to the feeling of presence, it follows that a more natural locomotion interface should facilitate better presence, indicated by more natural locomotor behaviors. Here we propose a framework for studying the interaction of different locomotion interfaces with visual information on navigation decisions in virtual environments. We validated this framework by performing a user study that compared decisions made using a dual joystick gamepad with a walking-in-place metaphor. The paths presented on a given trial differed visually in one of the following aspects: (a) incline, (b) friction, (c) texture, and (d) width. In this experiment, choices made with the walking-in-place interface more closely matched visual conditions which would minimize energy expenditure or physical risk in the natural world. We provide some observations that would further improve this method in future implementations. This approach provides a way of both studying factors in perceptual decision making and demonstrates the effect of interface on presence as reflected by natural behavior.},
  annote = {2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), October 11-14, 2020. Toronto, Canada},
  author = {Kuo, C. and Allison, R. S.},
  booktitle = {2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), October 11-14, 2020. Toronto, Canada},
  date-added = {2020-09-27 15:00:22 -0400},
  date-modified = {2020-12-02 13:12:58 -0500},
  keywords = {Augmented & Virtual Reality},
  pages = {3283-3290},
  title = {Motion matters: Comparing presence induced by two locomotion interfaces using decision-making tasks in virtual reality},
  url = {papers/kuoSMC.pdf},
  year = {2020},
  url-1 = {papers/kuoSMC.pdf}}

Abstract: Virtual environments can replicate the visual appearance of terrain conditions, but the movements involved in using the interfaces confer their own bodily sensations, which can be incongruent with the visual presentation. If naturalness of interaction is a major factor contributing to the feeling of presence, it follows that a more natural locomotion interface should facilitate better presence, indicated by more natural locomotor behaviors. Here we propose a framework for studying the interaction of different locomotion interfaces with visual information on navigation decisions in virtual environments. We validated this framework by performing a user study that compared decisions made using a dual joystick gamepad with a walking-in-place metaphor. The paths presented on a given trial differed visually in one of the following aspects: (a) incline, (b) friction, (c) texture, and (d) width. In this experiment, choices made with the walking-in-place interface more closely matched visual conditions which would minimize energy expenditure or physical risk in the natural world. We provide some observations that would further improve this method in future implementations. This approach provides a way of both studying factors in perceptual decision making and demonstrates the effect of interface on presence as reflected by natural behavior.

Optical distortions in VR bias the perceived slant of moving surfaces.
Tong, J., Allison, R. S., & Wilcox, L. M.
In IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pages 73-79, 2020.

@inproceedings{Tong:ft,
  author = {Tong, J. and Allison, R. S. and Wilcox, L. M.},
  booktitle = {IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
  date-added = {2020-08-06 11:23:23 -0400},
  date-modified = {2023-10-27 11:11:12 -0400},
  doi = {10.1109/ISMAR50242.2020.00027},
  keywords = {Augmented & Virtual Reality},
  pages = {73-79},
  title = {Optical distortions in {VR} bias the perceived slant of moving surfaces},
  url = {https://percept.eecs.yorku.ca/papers/ISMAR_2020_VGTC_format.pdf},
  year = {2020},
  url-1 = {https://doi.org/10.1109/ISMAR50242.2020.00027}}

Effect of a Constant Camera Rotation on the Visibility of Transsaccadic Camera Shifts.
Keyvanara, M., & Allison, R. S.
In ACM Symposium on Eye Tracking Research and Applications, Article No. 14, pages 1-8, 2020.

@inproceedings{Keyvanara:pz,
  annote = {Stuttgart Germany June 2020 (not held COVID)},
  author = {Keyvanara, Maryam and Allison, R. S.},
  booktitle = {ACM Symposium on Eye Tracking Research and Applications},
  date-added = {2020-04-30 11:01:34 -0400},
  date-modified = {2020-09-29 09:53:32 -0400},
  doi = {10.1145/3379155.3391318},
  keywords = {Eye Movements & Tracking},
  pages = {Article No. 14, 1--8},
  title = {Effect of a Constant Camera Rotation on the Visibility of Transsaccadic Camera Shifts},
  year = {2020},
  url-1 = {https://doi.org/10.1145/3379155.3391318}}

Subjective Assessment of Stereoscopic Image Quality: The Impact of Visually Lossless Compression.
Mohona, S., Au, D., Kio, O. G., Robinson, R., Hou, Y., Wilcox, L. M., & Allison, R. S.
In QoMEX International Conference on Quality of Multimedia Experience, pages 1-6, 2020.

@inproceedings{Mohona:2020tz,
  abstract = {In stereoscopic displays different images are presented separately to the left and right eyes. This requirement may increase the bandwidth demand as well as increase the occurrence of visible compression-related artefacts. Here we report the results of a large-scale subjective assessment of high dynamic range (HDR) stereoscopic image compression. The ISO/IEC 29170-2 flicker paradigm was adapted for stereoscopic images and used to evaluate two VESA (Video Electronics Standards Association) image compression codecs: DSC 1.2a and VDCM 1.2.2. We compared the performance on stereoscopic images versus 2D images for both codecs.},
  annote = {Athlone Ireland May 26-28 2020},
  author = {Mohona, S.S. and Au, D. and Kio, O. G. and Robinson, R. and Hou, Y. and Wilcox, L. M. and Allison, R. S.},
  booktitle = {QoMEX International Conference on Quality of Multimedia Experience},
  date-added = {2020-04-30 10:59:28 -0400},
  date-modified = {2020-09-29 09:55:01 -0400},
  doi = {10.1109/QoMEX48832.2020.9123129},
  keywords = {Stereopsis},
  pages = {1-6},
  title = {Subjective Assessment of Stereoscopic Image Quality: The Impact of Visually Lossless Compression},
  year = {2020},
  url-1 = {https://doi.org/10.1109/QoMEX48832.2020.9123129}}

Abstract: In stereoscopic displays different images are presented separately to the left and right eyes. This requirement may increase the bandwidth demand as well as increase the occurrence of visible compression-related artefacts. Here we report the results of a large-scale subjective assessment of high dynamic range (HDR) stereoscopic image compression. The ISO/IEC 29170-2 flicker paradigm was adapted for stereoscopic images and used to evaluate two VESA (Video Electronics Standards Association) image compression codecs: DSC 1.2a and VDCM 1.2.2. We compared the performance on stereoscopic images versus 2D images for both codecs.

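The ISO/IEC 29170-2 flicker paradigm mentioned in this abstract is at heart a two-alternative forced-choice test: one side of the display alternates between reference and compressed images while the other shows only the reference, and observers must pick the alternating side. Below is a minimal sketch of that trial logic with a simulated observer; the function name, trial count, and detection probability are illustrative assumptions, not details taken from the paper.

import random

def run_flicker_trials(n_trials=60, p_detect=0.1):
    # Toy model of one ISO/IEC 29170-2 style session: the compressed/reference
    # pair flickers on a randomly chosen side and the observer must pick that
    # side (2AFC). p_detect stands in for the probability that the compression
    # artefact is actually visible on a given trial.
    correct = 0
    for _ in range(n_trials):
        flicker_side = random.choice(["left", "right"])
        if random.random() < p_detect:
            response = flicker_side                      # artefact seen
        else:
            response = random.choice(["left", "right"])  # invisible: guess
        correct += (response == flicker_side)
    return correct / n_trials

print(run_flicker_trials())  # accuracy near 0.5 supports a 'visually lossless' rating
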
misc (1)

ICECUBE LED Display [ILDm^3].
Hosale, M., Madsen, J., & Allison, R.
Interactive Installation in Disruptive Design and Digital Fabrication, Gayles Gallery, York University, February 2020.

@misc{Hosale:2020qe,
  annote = {Hosale, Mark-David, Jim Madsen, and Robert Allison. ICECUBE LED Display [ILDm^3], Interactive Installation in Disruptive Design and Digital Fabrication, Gayles Gallery, York University. February 3rd -- 13th 2020.},
  author = {Mark-David Hosale and Jim Madsen and Robert Allison},
  date-added = {2020-10-27 13:54:50 -0400},
  date-modified = {2020-10-27 13:55:07 -0400},
  howpublished = {Interactive Installation in Disruptive Design and Digital Fabrication, Gayles Gallery, York University},
  keywords = {Misc.},
  month = {02},
  title = {ICECUBE LED Display [ILDm^3]},
  year = {2020}}

2019 (4)

article (6)

Radial distortions in VR displays impact the perception of surface slant.
Tong, J., Allison, R. S., & Wilcox, L. M.
Journal of Imaging Science and Technology, 63(6): 60409.1-60409.11, 2019.

@article{Tong:ab,
  abstract = {Modern virtual reality (VR) headsets use lenses that distort the visual field, typically with distortion increasing with eccentricity. While content is pre-warped to counter this radial distortion, residual image distortions remain. Here we examine the extent to which such residual distortion impacts the perception of surface slant. In Experiment 1, we presented slanted surfaces in a head-mounted display and observers estimated the local surface slant at different locations. In Experiments 2 (slant estimation) and 3 (slant discrimination), we presented stimuli on a mirror stereoscope, which allowed us to more precisely control viewing and distortion parameters. Taken together, our results show that radial distortion has significant impact on perceived surface attitude, even following correction. Of the distortion levels we tested, 5\% distortion results in significantly underestimated and less precise slant estimates relative to distortion-free surfaces. In contrast, Experiment 3 reveals that a level of 1\% distortion is insufficient to produce significant changes in slant perception. Our results highlight the importance of adequately modeling and correcting lens distortion to improve VR user experience.},
  author = {Tong, J. and Allison, R. S. and Wilcox, L. M.},
  date-added = {2019-11-08 16:00:34 -0500},
  date-modified = {2020-04-30 15:31:26 -0400},
  doi = {10.2352/J.ImagingSci.Technol.2019.63.6.060409},
  journal = {Journal of Imaging Science and Technology},
  keywords = {Stereopsis},
  number = {6},
  pages = {60409.1 - 60409.11},
  title = {Radial distortions in {VR} displays impact the perception of surface slant},
  volume = {63},
  year = {2019},
  url-1 = {https://doi.org/10.2352/J.ImagingSci.Technol.2019.63.6.060409}}

Abstract: Modern virtual reality (VR) headsets use lenses that distort the visual field, typically with distortion increasing with eccentricity. While content is pre-warped to counter this radial distortion, residual image distortions remain. Here we examine the extent to which such residual distortion impacts the perception of surface slant. In Experiment 1, we presented slanted surfaces in a head-mounted display and observers estimated the local surface slant at different locations. In Experiments 2 (slant estimation) and 3 (slant discrimination), we presented stimuli on a mirror stereoscope, which allowed us to more precisely control viewing and distortion parameters. Taken together, our results show that radial distortion has significant impact on perceived surface attitude, even following correction. Of the distortion levels we tested, 5% distortion results in significantly underestimated and less precise slant estimates relative to distortion-free surfaces. In contrast, Experiment 3 reveals that a level of 1% distortion is insufficient to produce significant changes in slant perception. Our results highlight the importance of adequately modeling and correcting lens distortion to improve VR user experience.

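Radial distortion of the kind studied here is conventionally modelled as a polynomial rescaling of image coordinates about the optical centre. A minimal sketch under that standard model follows; the coefficient values are chosen only to echo the roughly 5% and 1% levels tested and are not the paper's parameters.

def radial_distort(x, y, k1, k2=0.0):
    # Map undistorted normalized coordinates to distorted ones. r is the
    # distance from the image centre; in this convention k1 > 0 yields
    # pincushion distortion and k1 < 0 yields barrel distortion.
    r2 = x**2 + y**2
    scale = 1.0 + k1 * r2 + k2 * r2**2
    return x * scale, y * scale

# Displacement at the edge of a unit image (r = 1) for the two signs:
for k1 in (0.05, -0.05):
    xd, _ = radial_distort(1.0, 0.0, k1)
    print(f"k1 = {k1:+.2f}: {100 * (xd - 1.0):+.1f}% displacement at r = 1")
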
The impact of retinal motion on stereoacuity for physical targets.
Cutone, M., Allison, R. S., & Wilcox, L. M.
Vision Research, 161: 43-51, 2019.

@article{Cutone:fv,
  author = {Cutone, M. and Allison, R. S. and Wilcox, L. M.},
  date-added = {2019-06-08 18:33:06 -0400},
  date-modified = {2019-07-16 07:39:31 -0400},
  doi = {10.1016/j.visres.2019.06.003},
  journal = {Vision Research},
  keywords = {Stereopsis},
  pages = {43-51},
  title = {The impact of retinal motion on stereoacuity for physical targets},
  volume = {161},
  year = {2019},
  url-1 = {https://doi.org/10.1016/j.visres.2019.06.003}}

Effects of Frame Rate on Vection and Postural Sway.
Fujii, Y., Kio, O. G., Au, D., Wilcox, L. M., & Allison, R. S.
Displays, 58: 33-43, 2019.

@article{Fujii:2019kn,
  abstract = {The quality of stereoscopic 3D cinematic content is a major determinant for user experience in immersive cinema in both traditional theatres and cinematic virtual reality. One of the most important parameters is the frame rate of the content which has historically been 24 frames per second for movies, but higher frame rates are being considered for cinema and are standard for virtual reality. A typical behavioural response to immersive stereoscopic 3D content is vection, the visually-induced perception of self-motion elicited by moving scenes. In this work we investigated how participants' vection varied with simulated virtual camera speed, frame rate, and motion blur produced by the virtual camera's exposure, while viewing depictions of movement through a realistic virtual environment. We also investigated how their postural sway varied with these parameters and how sway covaried with levels of perceived self-motion. Results show that while average perceived vection significantly increased with 3D content frame rate and motion speed, motion blur had no significant effect on perceived vection. We also found that levels of postural sway induced by vection correlated positively with subjective ratings.},
  author = {Fujii, Y. and Kio, O. G. and Au, D. and Wilcox, L. M. and Allison, R. S.},
  date-added = {2019-04-11 16:02:11 -0400},
  date-modified = {2019-06-08 18:29:35 -0400},
  doi = {10.1016/j.displa.2019.03.002},
  journal = {Displays},
  keywords = {Optic flow & Self Motion (also Locomotion & Aviation)},
  pages = {33-43},
  title = {Effects of Frame Rate on Vection and Postural Sway},
  url = {http://percept.eecs.yorku.ca/papers/Effects of Frame Rate on Vection.pdf},
  volume = {58},
  year = {2019},
  url-1 = {https://doi.org/10.1016/j.displa.2019.03.002}}

Abstract: The quality of stereoscopic 3D cinematic content is a major determinant for user experience in immersive cinema in both traditional theatres and cinematic virtual reality. One of the most important parameters is the frame rate of the content which has historically been 24 frames per second for movies, but higher frame rates are being considered for cinema and are standard for virtual reality. A typical behavioural response to immersive stereoscopic 3D content is vection, the visually-induced perception of self-motion elicited by moving scenes. In this work we investigated how participants' vection varied with simulated virtual camera speed, frame rate, and motion blur produced by the virtual camera's exposure, while viewing depictions of movement through a realistic virtual environment. We also investigated how their postural sway varied with these parameters and how sway covaried with levels of perceived self-motion. Results show that while average perceived vection significantly increased with 3D content frame rate and motion speed, motion blur had no significant effect on perceived vection. We also found that levels of postural sway induced by vection correlated positively with subjective ratings.

Higher-order cognitive processes moderate body tilt effects in vection.
Guterman, P., & Allison, R. S.
Displays, 58: 44-55, 2019.

@article{Guterman:aa,
  abstract = {Changing head orientation with respect to gravity changes the dynamic sensitivity of the otoliths to linear accelerations (gravitational and inertial). We explored whether varying head orientation and optic flow direction relative to gravity affects the perception of visually induced self-motion (vection) in two experiments. We confirmed that vertical optic flow produces stronger vection than horizontal optic flow in upright observers. We hypothesized that if this was due to aligning the simulated self-motion with gravity, then interaural (as opposed to spinal) axis motion while lying on the side would provide a similar vection advantage. Alternatively, motion along the spinal axis could enhance vection regardless of head orientation relative to gravity. Finally, we hypothesized that observer expectation and experience with upright locomotion would favour horizontal vection, especially when in upright posture.

In the first experiment, observers stood and lay supine, prone, left and right side down, while viewing a translating random dot pattern that simulated observer motion along the spinal or interaural axis. Vection magnitude estimates, onset, and duration were recorded. Aligning the optic flow direction with gravity enhanced vection in side-lying observers as reflected by either a bias for interaural rather than spinal flow or by an elimination/reduction of the spinal advantage compared to upright. However, when overlapping these signals was not possible---as in the supine and prone posture---spinal axis motion enhanced vection. Furthermore, perceived scene structure varied with head orientation (e.g., dots were seen as floating bubbles in some conditions).

To examine the influence of scene structure, in the second experiment we compared vection during simulated motion with respect to two environments: a rigid pipe structure that looked like a complex arrangement of plumbing pipes, and a field of dots. Interestingly, vertical optic flow with the pipes stimulus produced a similar experience to that of riding an elevator and tended to enhance vection.

Overall, we found that vection depended on the direction of both the head orientation and visual motion relative to gravity, but was also influenced by the perceived scene context. These findings suggest that, in addition to head tilt relative to gravity, higher-order cognitive processes play a key part in the perception of self-motion.},
  author = {Guterman, P. and Allison, R. S.},
  date-added = {2019-03-28 16:58:50 -0400},
  date-modified = {2019-06-08 18:30:09 -0400},
  doi = {10.1016/j.displa.2019.03.004},
  journal = {Displays},
  keywords = {Optic flow & Self Motion (also Locomotion & Aviation)},
  pages = {44-55},
  title = {Higher-order cognitive processes moderate body tilt effects in vection},
  url = {http://percept.eecs.yorku.ca/papers/Guterman posture paper preprint.pdf},
  volume = {58},
  year = {2019},
  url-1 = {http://percept.eecs.yorku.ca/papers/Guterman%20posture%20paper%20preprint.pdf},
  url-2 = {https://doi.org/10.1016/j.displa.2019.03.004}}

Abstract: Changing head orientation with respect to gravity changes the dynamic sensitivity of the otoliths to linear accelerations (gravitational and inertial). We explored whether varying head orientation and optic flow direction relative to gravity affects the perception of visually induced self-motion (vection) in two experiments. We confirmed that vertical optic flow produces stronger vection than horizontal optic flow in upright observers. We hypothesized that if this was due to aligning the simulated self-motion with gravity, then interaural (as opposed to spinal) axis motion while lying on the side would provide a similar vection advantage. Alternatively, motion along the spinal axis could enhance vection regardless of head orientation relative to gravity. Finally, we hypothesized that observer expectation and experience with upright locomotion would favour horizontal vection, especially when in upright posture.

In the first experiment, observers stood and lay supine, prone, left and right side down, while viewing a translating random dot pattern that simulated observer motion along the spinal or interaural axis. Vection magnitude estimates, onset, and duration were recorded. Aligning the optic flow direction with gravity enhanced vection in side-lying observers as reflected by either a bias for interaural rather than spinal flow or by an elimination/reduction of the spinal advantage compared to upright. However, when overlapping these signals was not possible, as in the supine and prone posture, spinal axis motion enhanced vection. Furthermore, perceived scene structure varied with head orientation (e.g., dots were seen as floating bubbles in some conditions).

To examine the influence of scene structure, in the second experiment we compared vection during simulated motion with respect to two environments: a rigid pipe structure that looked like a complex arrangement of plumbing pipes, and a field of dots. Interestingly, vertical optic flow with the pipes stimulus produced a similar experience to that of riding an elevator and tended to enhance vection.

Overall, we found that vection depended on the direction of both the head orientation and visual motion relative to gravity, but was also influenced by the perceived scene context. These findings suggest that, in addition to head tilt relative to gravity, higher-order cognitive processes play a key part in the perception of self-motion.

The A-effect and global motion.
Guterman, P. S., & Allison, R. S.
Vision, 3(2): Article 13, 2019.

@article{Guterman:db,
  abstract = {When the head is tilted, an objectively vertical line viewed in isolation is typically perceived as tilted. We explored whether this shift also occurs when viewing global motion displays perceived as either object-motion or self-motion. Observers stood and lay left side down while viewing (1) a static line, (2) a random-dot display of 2-D (planar) motion, or (3) a random-dot display of 3-D (volumetric) global motion. On each trial, the line orientation or motion direction were tilted from the gravitational vertical, and observers indicated whether the tilt was clockwise or counter-clockwise from the perceived vertical. Psychometric functions were fit to the data and shifts in the point of subjective verticality (PSV) were measured. When the whole body was tilted, the perceived tilt of both a static line and the direction of optic flow were biased in the direction of the body tilt, demonstrating the so-called A-effect. However, we found significantly larger shifts for the static line than volumetric global motion as well as larger shifts for volumetric displays than planar displays. The A-effect was larger when the motion was experienced as self-motion compared to when it was experienced as object-motion. Discrimination thresholds were also more precise in the self-motion compared to object-motion conditions. Different magnitude A-effects for the line and motion conditions---and for object and self-motion---may be due to differences in combining of idiotropic (body) and vestibular signals, particularly so in the case of vection which occurs despite visual-vestibular conflict.},
  author = {Guterman, P. S. and Allison, R. S.},
  date-added = {2019-03-28 08:55:34 -0400},
  date-modified = {2019-04-13 15:01:54 -0400},
  doi = {10.3390/vision3020013},
  journal = {Vision},
  keywords = {Optic flow & Self Motion (also Locomotion & Aviation)},
  number = {2},
  pages = {Article 13},
  title = {The A-effect and global motion},
  url = {http://percept.eecs.yorku.ca/papers/pearl Aeffect.pdf},
  volume = {3},
  year = {2019},
  url-1 = {http://percept.eecs.yorku.ca/papers/pearl%20Aeffect.pdf},
  url-2 = {https://doi.org/10.3390/vision3020013}}

Abstract: When the head is tilted, an objectively vertical line viewed in isolation is typically perceived as tilted. We explored whether this shift also occurs when viewing global motion displays perceived as either object-motion or self-motion. Observers stood and lay left side down while viewing (1) a static line, (2) a random-dot display of 2-D (planar) motion, or (3) a random-dot display of 3-D (volumetric) global motion. On each trial, the line orientation or motion direction were tilted from the gravitational vertical, and observers indicated whether the tilt was clockwise or counter-clockwise from the perceived vertical. Psychometric functions were fit to the data and shifts in the point of subjective verticality (PSV) were measured. When the whole body was tilted, the perceived tilt of both a static line and the direction of optic flow were biased in the direction of the body tilt, demonstrating the so-called A-effect. However, we found significantly larger shifts for the static line than volumetric global motion as well as larger shifts for volumetric displays than planar displays. The A-effect was larger when the motion was experienced as self-motion compared to when it was experienced as object-motion. Discrimination thresholds were also more precise in the self-motion compared to object-motion conditions. Different magnitude A-effects for the line and motion conditions, and for object and self-motion, may be due to differences in combining of idiotropic (body) and vestibular signals, particularly so in the case of vection which occurs despite visual-vestibular conflict.

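The analysis sketched in this abstract, fitting a cumulative normal whose location gives the point of subjective verticality (PSV) and whose spread gives the discrimination threshold, can be reproduced in a few lines of Python. The tilt levels and response proportions below are synthetic, for illustration only.

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(tilt, psv, sigma):
    # P('clockwise' response) as a cumulative normal of stimulus tilt:
    # psv is the point of subjective verticality (bias), sigma the
    # discrimination threshold (spread).
    return norm.cdf(tilt, loc=psv, scale=sigma)

# Synthetic example data: tilt in degrees, proportion of 'clockwise' responses.
tilts = np.array([-12.0, -8.0, -4.0, 0.0, 4.0, 8.0, 12.0])
p_cw = np.array([0.02, 0.10, 0.30, 0.55, 0.80, 0.95, 0.99])

(psv, sigma), _ = curve_fit(psychometric, tilts, p_cw, p0=[0.0, 4.0])
print(f"PSV shift: {psv:.2f} deg, threshold: {sigma:.2f} deg")
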
Monovision: Consequences for depth perception from large disparities.
Smith, C., Allison, R. S., Wilkinson, F., & Wilcox, L. M.
Experimental Eye Research, 183: 62-67, 2019.

@article{Smith:2019aa,
  abstract = {Recent studies have confirmed that monovision treatment degrades stereopsis but it is not clear if these effects are limited to fine disparity processing, or how they are affected by viewing distance or age. Given the link between stereopsis and postural stability, it is important that we have full understanding of the impact of monovision on binocular function. In this study we assessed the short-term effects of optically induced monovision on a depth-discrimination task for young and older (presbyopic) adults. In separate sessions, the upper limits of stereopsis were assessed with participants' best optical correction and with monovision (-1D and +1D lenses in front of the dominant and non-dominant eyes respectively), at both near (62 cm) and far (300 cm) viewing distances. Monovision viewing resulted in significant reductions in the upper limit of stereopsis or more generally in discrimination performance at large disparities, in both age groups at a viewing distance of 300 cm. Dynamic photorefraction performed on a sample of four young observers revealed that they tended to accommodate to minimize blur in one eye at the expense of blur in the other. Older participants would have experienced roughly equivalent blur in the two eyes. Despite this difference, both groups displayed similar detrimental effects of monovision. In addition, we find that discrimination accuracy was worse with monovision at the 3m viewing distance which involves fixation distances that are typical during walking. These data suggest that stability during locomotion may be compromised, a factor that is of concern for our older participants.},
  author = {Smith, C. and Allison, R. S. and Wilkinson, F. and Wilcox, L. M.},
  date-added = {2018-11-27 12:58:13 -0500},
  date-modified = {2019-06-18 07:23:24 -0400},
  doi = {10.1016/j.exer.2018.09.005},
  journal = {Experimental Eye Research},
  keywords = {Stereopsis},
  pages = {62-67},
  title = {Monovision: Consequences for depth perception from large disparities},
  url = {http://percept.eecs.yorku.ca/papers/Monovision.pdf},
  volume = {183},
  year = {2019},
  url-1 = {http://percept.eecs.yorku.ca/papers/Monovision.pdf},
  url-2 = {https://doi.org/10.1016/j.exer.2018.09.005}}

Abstract: Recent studies have confirmed that monovision treatment degrades stereopsis but it is not clear if these effects are limited to fine disparity processing, or how they are affected by viewing distance or age. Given the link between stereopsis and postural stability, it is important that we have a full understanding of the impact of monovision on binocular function. In this study we assessed the short-term effects of optically induced monovision on a depth-discrimination task for young and older (presbyopic) adults. In separate sessions, the upper limits of stereopsis were assessed with participants' best optical correction and with monovision (-1D and +1D lenses in front of the dominant and non-dominant eyes respectively), at both near (62 cm) and far (300 cm) viewing distances. Monovision viewing resulted in significant reductions in the upper limit of stereopsis or more generally in discrimination performance at large disparities, in both age groups at a viewing distance of 300 cm. Dynamic photorefraction performed on a sample of four young observers revealed that they tended to accommodate to minimize blur in one eye at the expense of blur in the other. Older participants would have experienced roughly equivalent blur in the two eyes. Despite this difference, both groups displayed similar detrimental effects of monovision. In addition, we find that discrimination accuracy was worse with monovision at the 3 m viewing distance which involves fixation distances that are typical during walking. These data suggest that stability during locomotion may be compromised, a factor that is of concern for our older participants.

incollection (11)

The Effect of Long Duration Hypogravity on the Perception of Self-Motion – VECTION.
Bury, N., Jenkin, M., Allison, R., McManus, M., & Harris, L. R.
In 4th German Human Physiology Workshop, page 8, DLR, 2019.

@incollection{Bury:2019rw,
  author = {Nils-Alexander Bury and Michael Jenkin and Robert Allison and Meaghan McManus and Laurence R. Harris},
  booktitle = {4th German Human Physiology Workshop},
  date-added = {2020-10-27 13:43:31 -0400},
  date-modified = {2020-10-27 13:43:31 -0400},
  keywords = {Optic flow & Self Motion (also Locomotion & Aviation)},
  pages = {8},
  publisher = {DLR},
  title = {The Effect of Long Duration Hypogravity on the Perception of Self-Motion -- VECTION},
  url = {https://www.dlr.de/me/en/Portaldata/25/Resources/dokumente/veranstaltungen/veranstaltungen_2019/hpw_2019/Tagungsband_HPW2019_final.pdf},
  year = {2019},
  url-1 = {https://www.dlr.de/me/en/Portaldata/25/Resources/dokumente/veranstaltungen/veranstaltungen_2019/hpw_2019/Tagungsband_HPW2019_final.pdf}}

The Role of Binocular Vision in Stepping over Obstacles and Gaps in Virtual Environment.
Allison, R., & Zhao, J.
In Journal of Vision (VSS Abstract), volume 19, page 222b, 2019.

@incollection{allison2019role,
  abstract = {Little is known about the role of stereopsis in locomotion activities, such as continuous walking and running. While previous studies have shown that stereopsis improves the accuracy of lower limb movements while walking in constrained spaces, it is still unclear whether stereopsis aids continuous locomotion during extended motion over longer distance. We conducted two walking experiments in virtual environments to investigate the role of binocular vision in avoiding virtual obstacles and traversing virtual gaps during continuous walking. The virtual environments were presented on a novel projected display known as the Wide Immersive Stereo Environment (WISE) and the participant locomoted through them on a linear treadmill. This experiment setup provided us with a unique advantage of simulating long-distance walking through an extended environment. In Experiment 1, along each 100-m path were thirty virtual obstacles, ten each at heights of 0.1 m, 0.2 m or 0.3 m, in random order. In Experiment 2, along each 100-m path were thirty virtual gaps, either 0.2 m, 0.3 m or 0.4 m across. During experimental sessions, participants were asked to walk at a constant speed of 2 km/h under both stereoscopic viewing and non-stereoscopic viewing conditions and step over virtual obstacles or gaps when necessary. By analyzing the gait parameters, such as stride height and stride length, we found that stereoscopic vision helped people to make more accurate steps over virtual obstacles and gaps during continuous walking.},
  annote = {St. Pete's Beach May 2019},
  author = {Allison, Robert and Zhao, Jingbo},
  booktitle = {Journal of Vision (VSS Abstract)},
  date-added = {2019-09-30 12:36:03 -0400},
  date-modified = {2019-10-11 17:35:06 -0400},
  journal = {Journal of Vision},
  keywords = {Stereopsis},
  number = {10},
  pages = {222b--222b},
  title = {The Role of Binocular Vision in Stepping over Obstacles and Gaps in Virtual Environment},
  volume = {19},
  year = {2019}}

Abstract: Little is known about the role of stereopsis in locomotion activities, such as continuous walking and running. While previous studies have shown that stereopsis improves the accuracy of lower limb movements while walking in constrained spaces, it is still unclear whether stereopsis aids continuous locomotion during extended motion over longer distance. We conducted two walking experiments in virtual environments to investigate the role of binocular vision in avoiding virtual obstacles and traversing virtual gaps during continuous walking. The virtual environments were presented on a novel projected display known as the Wide Immersive Stereo Environment (WISE) and the participant locomoted through them on a linear treadmill. This experiment setup provided us with a unique advantage of simulating long-distance walking through an extended environment. In Experiment 1, along each 100-m path were thirty virtual obstacles, ten each at heights of 0.1 m, 0.2 m or 0.3 m, in random order. In Experiment 2, along each 100-m path were thirty virtual gaps, either 0.2 m, 0.3 m or 0.4 m across. During experimental sessions, participants were asked to walk at a constant speed of 2 km/h under both stereoscopic viewing and non-stereoscopic viewing conditions and step over virtual obstacles or gaps when necessary. By analyzing the gait parameters, such as stride height and stride length, we found that stereoscopic vision helped people to make more accurate steps over virtual obstacles and gaps during continuous walking.

Slant perception in the presence of curvature distortion.
Tong, J., Allison, R. S., & Wilcox, L. M.
In VSS 2019, Journal of Vision (VSS Abstracts), volume 19, page 222a, 2019.

@incollection{Tong:2019ng,
  abstract = {In the absence of reliable and abundant depth cues, estimates of surface slant are often biased towards fronto-parallel. Here we investigate the effects of curvature distortions on perceived slant. In general, curvature distortions are predicted to decrease the precision of slant discrimination, and this uncertainty may, in turn, strengthen the fronto-parallel bias. Alternatively, curvature distortion might bias slant perception independently, or in the opposite direction, of the fronto-parallel bias. We rendered images of slanted, textured surfaces with and without radially symmetric distortions (pincushion and barrel) at low (\approx 1\%) and high (\approx 5\%) levels. Observers judged whether a test image (distorted or undistorted, with a variety of slants) was more slanted than a distortion-free surface with a 15$^{\circ}$ slant. We fit the psychometric data with a cumulative normal function and estimated bias and discrimination thresholds for each observer. Our results showed that 1\% distortion had no measurable impact on slant discrimination. At 5\%, both types of distortion significantly increased slant discrimination thresholds. However, only the pincushion distortion produced a systematic underestimation of perceived slant. Slant underestimation in the presence of pincushion distortion is consistent with the hypothesized effect of disparity smoothing operations. Under this hypothesis, slant should also be underestimated in the barrel distortion condition but it is not. To test the possibility that this type of curvature distortion introduces additional perceptual biases, in ongoing experiments we are measuring perceived slant magnitude in the presence and absence of curvature distortion. These suprathreshold estimates will provide a baseline for the fronto-parallel bias in isolation; additional biases in the distortion conditions could then be modelled as distortion-based effects.},
  annote = {St. Petes Beach May 2019},
  author = {Tong, J. and Allison, R. S. and Wilcox, L. M.},
  booktitle = {VSS 2019},
  date-added = {2019-08-14 08:59:34 -0400},
  date-modified = {2019-10-11 17:36:10 -0400},
  keywords = {Stereopsis},
  number = {10},
  pages = {222a--222a},
  publisher = {Journal of Vision (VSS Abstracts)},
  title = {Slant perception in the presence of curvature distortion},
  volume = {19},
  year = {2019}}

Abstract: In the absence of reliable and abundant depth cues, estimates of surface slant are often biased towards fronto-parallel. Here we investigate the effects of curvature distortions on perceived slant. In general, curvature distortions are predicted to decrease the precision of slant discrimination, and this uncertainty may, in turn, strengthen the fronto-parallel bias. Alternatively, curvature distortion might bias slant perception independently, or in the opposite direction, of the fronto-parallel bias. We rendered images of slanted, textured surfaces with and without radially symmetric distortions (pincushion and barrel) at low (≈1%) and high (≈5%) levels. Observers judged whether a test image (distorted or undistorted, with a variety of slants) was more slanted than a distortion-free surface with a 15° slant. We fit the psychometric data with a cumulative normal function and estimated bias and discrimination thresholds for each observer. Our results showed that 1% distortion had no measurable impact on slant discrimination. At 5%, both types of distortion significantly increased slant discrimination thresholds. However, only the pincushion distortion produced a systematic underestimation of perceived slant. Slant underestimation in the presence of pincushion distortion is consistent with the hypothesized effect of disparity smoothing operations. Under this hypothesis, slant should also be underestimated in the barrel distortion condition but it is not. To test the possibility that this type of curvature distortion introduces additional perceptual biases, in ongoing experiments we are measuring perceived slant magnitude in the presence and absence of curvature distortion. These suprathreshold estimates will provide a baseline for the fronto-parallel bias in isolation; additional biases in the distortion conditions could then be modelled as distortion-based effects.

Viewers' Sensitivity to Camera Motion during Saccades in a Virtual Environment.
Keyvanara, M., & Allison, R.
In Proceedings of 20th European Conference on Eye Movements, Journal of Eye Movement Research, volume 12, page 214, 2019.

@incollection{Keyvanara:2019rf,
  abstract = {Gaze-contingent displays use real-time eye movement data to adjust the display content according to user's gaze. Display updates must happen fast enough to prevent the user from noticing them. Saccadic suppression helps hide these updates. The aim of this study was to investigate which image transformations are less perceptible and hence more applicable during saccadic suppression periods. We designed our experimental environments in Unity3D and used an Eyelink1000 to sample the participants' gaze in real time. Participants viewed 3D scenes in which the camera panned from left to right at a constant rotational velocity. During this motion they made a horizontal (rightward) or vertical (downward) saccade during which a sudden movement of the camera transformed the image of the scene. Camera movements were one of 6 translation and 4 rotational directions. Following the trial participants indicated the direction of the change in a 2AFC task. Discrimination thresholds for each type of transformation were estimated using an adaptive procedure to fit a Weibull psychometric function. During both horizontal and vertical saccades, thresholds were higher for horizontal translational and rotational camera movements than for other transformations. Further experiments are being conducted to determine if this generalizes but the current results imply that the direction of camera motion affects the detectability of camera transitions during saccades. Understanding the relationship between on-going movements and the detectability of a sudden transsaccadic change can help provide a better user experience for users of VR that hide graphical updates when they generate a saccade.},
  annote = {18-22 August 2019 Alicante Spain},
  author = {Keyvanara, Maryam and Allison, Robert},
  booktitle = {Proceedings of 20th European Conference on Eye Movements. Journal of Eye Movement Research},
  date-added = {2019-08-14 08:42:45 -0400},
  date-modified = {2019-12-25 22:28:04 -0500},
  journal = {Journal of Eye Movement Research},
  keywords = {Eye Movements & Tracking},
  number = {7},
  pages = {214},
  title = {Viewers' Sensitivity to Camera Motion during Saccades in a Virtual Environment},
  url-1 = {https://doi.org/10.1177/0301006618824879},
  volume = {12},
  year = {2019}}

Abstract: Gaze-contingent displays use real-time eye movement data to adjust the display content according to user's gaze. Display updates must happen fast enough to prevent the user from noticing them. Saccadic suppression helps hide these updates. The aim of this study was to investigate which image transformations are less perceptible and hence more applicable during saccadic suppression periods. We designed our experimental environments in Unity3D and used an Eyelink1000 to sample the participants' gaze in real time. Participants viewed 3D scenes in which the camera panned from left to right at a constant rotational velocity. During this motion they made a horizontal (rightward) or vertical (downward) saccade during which a sudden movement of the camera transformed the image of the scene. Camera movements were one of six translational and four rotational directions. Following the trial participants indicated the direction of the change in a 2AFC task. Discrimination thresholds for each type of transformation were estimated using an adaptive procedure to fit a Weibull psychometric function. During both horizontal and vertical saccades, thresholds were higher for horizontal translational and rotational camera movements than for other transformations. Further experiments are being conducted to determine if this generalizes but the current results imply that the direction of camera motion affects the detectability of camera transitions during saccades. Understanding the relationship between on-going movements and the detectability of a sudden transsaccadic change can help provide a better user experience for users of VR that hide graphical updates when they generate a saccade.

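The adaptive procedure is not specified further in this abstract; one common realization pairs a transformed staircase with a Weibull performance function, as in the hedged sketch below. The simulated-observer parameters are assumptions for illustration, not values from the study. A two-down/one-up rule converges near 71% correct, which makes it a convenient stand-in here.

import random

def weibull(x, threshold, slope=3.5, guess=0.5, lapse=0.02):
    # 2AFC Weibull psychometric function: P(correct) at intensity x.
    return guess + (1 - guess - lapse) * (1 - 2 ** (-(x / threshold) ** slope))

def staircase(true_threshold=2.0, start=6.0, step=0.5, n_trials=80):
    # Two-down/one-up transformed staircase run against a simulated observer;
    # the threshold estimate is the mean of the last few reversal intensities.
    x, run_of_correct, last_dir, reversals = start, 0, 0, []
    for _ in range(n_trials):
        if random.random() < weibull(x, true_threshold):   # correct response
            run_of_correct += 1
            if run_of_correct == 2:                        # two in a row: harder
                run_of_correct = 0
                x = max(x - step, 0.01)
                if last_dir == +1:
                    reversals.append(x)
                last_dir = -1
        else:                                              # miss: easier
            run_of_correct = 0
            x += step
            if last_dir == -1:
                reversals.append(x)
            last_dir = +1
    return sum(reversals[-6:]) / len(reversals[-6:]) if reversals else x

print(f"estimated threshold ~ {staircase():.2f}")
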
Subjective Assessment of Image Compression Artefacts in 2D viewing versus Stereoscopic Viewing.
Mohona, S., Wilcox, L. M., & Allison, R. S.
In Predictive Vision, CVR Conference, June 10-13, 2019, Toronto, Canada, 2019.

@incollection{Mohona:2019aa,
  abstract = {The Centre for Vision Research and Vision: Science to Applications INTERNATIONAL CONFERENCE ON PREDICTIVE VISION, JUNE 10-13, 2019},
  annote = {INTERNATIONAL CONFERENCE ON PREDICTIVE VISION, JUNE 10-13, 2019, NEW STUDENT CENTRE, YORK UNIVERSITY},
  author = {Mohona, S. and Wilcox, L. M. and Allison, R. S.},
  booktitle = {Predictive Vision},
  date-added = {2019-06-15 16:30:42 -0400},
  date-modified = {2019-06-15 16:31:12 -0400},
  keywords = {Stereopsis},
  publisher = {CVR Conference, June 10-13, 2019, Toronto, Canada},
  title = {Subjective Assessment of Image Compression Artefacts in 2D viewing versus Stereoscopic Viewing},
  year = {2019}}

Abstract: The Centre for Vision Research and Vision: Science to Applications INTERNATIONAL CONFERENCE ON PREDICTIVE VISION, JUNE 10-13, 2019.

Slant perception in the presence of radial distortions.
Tong, J., Allison, R. S., & Wilcox, L. M.
In Predictive Vision, CVR Conference, June 10-13, 2019, Toronto, Canada, 2019.

@incollection{Tong:2019aa,
  annote = {INTERNATIONAL CONFERENCE ON PREDICTIVE VISION, JUNE 10-13, 2019, NEW STUDENT CENTRE, YORK UNIVERSITY},
  author = {Tong, J. and Allison, R. S. and Wilcox, L. M.},
  booktitle = {Predictive Vision},
  date-added = {2019-06-15 16:30:42 -0400},
  date-modified = {2019-06-15 16:31:22 -0400},
  keywords = {Stereopsis},
  publisher = {CVR Conference, June 10-13, 2019, Toronto, Canada},
  title = {Slant perception in the presence of radial distortions},
  year = {2019}}

Effects of stereoscopic 3D movie parameters on vection and postural sway.
Kio, O. G., Fujii, Y., Au, D., Wilcox, L. M., & Allison, R. S.
In Predictive Vision, CVR Conference, June 10-13, 2019, Toronto, Canada, 2019.

@incollection{Kio:2019aa,
  annote = {INTERNATIONAL CONFERENCE ON PREDICTIVE VISION, JUNE 10-13, 2019, NEW STUDENT CENTRE, YORK UNIVERSITY},
  author = {Kio, O. G. and Fujii, Y. and Au, D. and Wilcox, L. M. and Allison, R. S.},
  booktitle = {Predictive Vision},
  date-added = {2019-06-15 16:30:42 -0400},
  date-modified = {2019-06-15 16:31:51 -0400},
  keywords = {Optic flow & Self Motion (also Locomotion & Aviation)},
  publisher = {CVR Conference, June 10-13, 2019, Toronto, Canada},
  title = {Effects of stereoscopic 3D movie parameters on vection and postural sway},
  year = {2019}}

Depth perception from monocular self-occlusions.
Au, D., Allison, R. S., & Wilcox, L. M.
In Predictive Vision, CVR Conference, June 10-13, 2019, Toronto, Canada, 2019.

@incollection{Au:2019aa,
  annote = {INTERNATIONAL CONFERENCE ON PREDICTIVE VISION, JUNE 10-13, 2019, NEW STUDENT CENTRE, YORK UNIVERSITY},
  author = {Au, D. and Allison, R. S. and Wilcox, L. M.},
  booktitle = {Predictive Vision},
  date-added = {2019-06-15 16:30:42 -0400},
  date-modified = {2019-06-15 16:32:11 -0400},
  keywords = {Stereopsis},
  publisher = {CVR Conference, June 10-13, 2019, Toronto, Canada},
  title = {Depth perception from monocular self-occlusions},
  year = {2019}}

Saccadic Suppression of Natural Image Transformations.
Keyvanara, M., & Allison, R.
In Proceedings of ECVP 2018, Perception, volume 48, page 71, 2019.

@incollection{keyvanara2019saccadic,
  abstract = {During a saccade, the image of the environment moves rapidly across the retina. However, due to saccadic suppression, our perception of motion is attenuated and our visual sensitivity is suppressed. We explored whether the extent of saccadic suppression depends on the type of image transformation. Participants observed three-dimensional scenes and made a vertical or horizontal saccade to follow a target object. During this saccade, the entire scene was translated or rotated along one of the canonical directions. After each trial, participants indicated the direction of the scene change in a forced-choice task. Change detectability depended on the magnitude and type of transformation. During vertical or horizontal saccades, users were least sensitive to vertical or horizontal translations, respectively, and most sensitive to rotations along the roll axis. We conclude that saccadic suppression affects natural image transformations that occur during whole body and head movement through our environment.},
  annote = {Trieste Italy August 2018},
  author = {Keyvanara, Maryam and Allison, Robert},
  booktitle = {Proceedings of ECVP 2018, Perception},
  date-modified = {2019-06-15 11:50:01 -0400},
  doi = {10.1177/0301006618824879},
  keywords = {Eye Movements & Tracking},
  number = {S1},
  pages = {71--71},
  title = {Saccadic Suppression of Natural Image Transformations},
  volume = {48},
  year = {2019},
  url-1 = {https://doi.org/10.1177/0301006618824879}}

Abstract: During a saccade, the image of the environment moves rapidly across the retina. However, due to saccadic suppression, our perception of motion is attenuated and our visual sensitivity is suppressed. We explored whether the extent of saccadic suppression depends on the type of image transformation. Participants observed three-dimensional scenes and made a vertical or horizontal saccade to follow a target object. During this saccade, the entire scene was translated or rotated along one of the canonical directions. After each trial, participants indicated the direction of the scene change in a forced-choice task. Change detectability depended on the magnitude and type of transformation. During vertical or horizontal saccades, users were least sensitive to vertical or horizontal translations, respectively, and most sensitive to rotations along the roll axis. We conclude that saccadic suppression affects natural image transformations that occur during whole body and head movement through our environment.

Stereoscopic capture and viewing parameters: Geometry and perception (Invited).
Allison, R. S., & Wilcox, L. M.
In Electronic Imaging: Stereoscopic Displays and Applications 2019, pages SD&A-627, 2019.

@incollection{Allison:2019hw,
  annote = {Monday-Wednesday 14-16 January 2019, Hyatt Regency San Francisco Airport Hotel, Burlingame, California, USA},
  author = {Allison, R. S. and Wilcox, L. M.},
  booktitle = {Electronic Imaging: Stereoscopic Displays and Applications 2019},
  date-added = {2019-05-27 09:51:58 -0400},
  date-modified = {2019-06-15 16:36:55 -0400},
  keywords = {Stereopsis},
  pages = {SD\&A-627},
  title = {Stereoscopic capture and viewing parameters: Geometry and perception (Invited)},
  year = {2019}}

Aviation experience and the role of stereopsis in rotary-wing altitude estimation.
Hartle, B., Sudhama, A., Deas, L. M., Allison, R. S., Irving, E. L., Glaholt, M., & Wilcox, L. M.
In 90th Annual Meeting of the Aerospace Medical Association (AsMA), Las Vegas, Nevada, USA, 2019.

@incollection{Hartle:2019aa,
  abstract = {Introduction: The relevance of stereopsis as a medical selection variable for aviators is a longstanding question in aviation medicine. In a prior study we observed superior altitude estimation when subjects viewed simulated terrain images in stereoscopic 3D compared to monocular viewing, thus supporting the relevance of stereopsis to aviation (Deas et al., AsMA 2016). However, in that study we used undergraduates as the subject population. Professional aviators undergo rigorous selection and training that may enhance their use of specific depth cues during altitude estimation. The present study investigated this possibility by directly comparing the performance of military aviators and undergraduates in the estimation of simulated altitude under binocular and monocular viewing conditions.

Methods: Thirty-one trained military rotary-wing aircrew and thirty undergraduate observers participated in the experiment. Stimuli consisted of four high-resolution terrain images depicting a virtual helicopter skid above a ground plane, simulating a low hover scenario. The rendered altitude of the skid varied from zero to five feet. Observers were asked to judge the relative distance between the skid and the ground plane under binocular (using active shutter glasses) and monocular (wearing an eye patch) viewing conditions.

Results: The aviators significantly outperformed the undergraduates in the monocular viewing condition, though for both groups monocular altitude estimates were less accurate than binocular estimates. During binocular viewing, both groups tended to make accurate altitude estimates; there was no evidence that the aviators were superior to undergraduates when binocular cues were available.

Discussion: The finding of superior performance for aviators compared to undergraduates during monocular viewing is consistent with the hypothesis that selection and experience can enhance the use of monocular depth cues. However, the aviators performed similarly to undergraduates during binocular viewing and both groups were shown to benefit from binocular viewing compared to monocular viewing, suggesting that stereopsis contributes in the same manner to rotary-wing altitude estimation regardless of aviation experience. Future work might seek to extend these findings to more natural viewing conditions and link individual differences in stereopsis to altitude estimation performance.

Learning Objective: 1) Learn about the relevance of stereopsis to altitude estimation for rotary wing aviators.

MOC Questions: 1) Stereopsis is based on the processing of binocular cues (T/F). T 2) Altitude can be estimated based on monocular cues (T/F). T},
  annote = {Hartle, B., Sudhama, A., Deas, L. M., Allison, R. S., Irving, E. L., Glaholt, M. G., \& Wilcox, L. M. Aviation experience and the role of stereopsis in altitude estimation. Presented to the 90th Annual Meeting of the Aerospace Medical Association (AsMA), Las Vegas, Nevada, USA. May 6, 2019.},
  author = {Hartle, B. and Sudhama, A. and Deas, L. M. and Allison, R. S. and Irving, E. L. and Glaholt, M. and Wilcox, L. M.},
  booktitle = {90th Annual Meeting of the Aerospace Medical Association (AsMA), Las Vegas, Nevada, USA},
  date-added = {2019-05-27 09:42:04 -0400},
  date-modified = {2019-05-27 09:42:04 -0400},
  keywords = {Stereopsis},
  title = {Aviation experience and the role of stereopsis in rotary-wing altitude estimation},
  year = {2019}}

\n Introduction: The relevance of stereopsis as a medical selection variable for aviators is a longstanding question in aviation medicine. In a prior study we observed superior altitude estimation when subjects viewed simulated terrain images in stereoscopic 3D compared to monocular viewing, thus supporting the relevance of stereopsis to aviation (Deas et al., AsMA 2016). However, in that study we used undergraduates as the subject population. Professional aviators undergo rigorous selection and training that may enhance their use of specific depth cues during altitude estimation. The present study investigated this possibility by directly comparing the performance of military aviators and undergraduates in the estimation of simulated altitude under binocular and monocular viewing conditions. Methods: Thirty-one trained military rotary-wing aircrew and thirty undergraduate observers participated in the experiment. Stimuli consisted of four high-resolution terrain images depicting a virtual helicopter skid above a ground plane, simulating a low hover scenario. The rendered altitude of the skid varied from zero to five feet. Observers were asked to judge the relative distance between the skid and the ground plane under binocular (using active shutter glasses) and monocular (wearing an eye patch) viewing conditions. Results: The aviators significantly outperformed the undergraduates in the monocular viewing condition, though for both groups monocular altitude estimates were less accurate than binocular estimates. During binocular viewing, both groups tended to make accurate altitude estimates; there was no evidence that the aviators were superior to undergraduates when binocular cues were available. Discussion: The finding of superior performance for aviators compared to undergraduates during monocular viewing is consistent with the hypothesis that selection and experience can enhance the use of monocular depth cues. However, the aviators performed similarly to undergraduates during binocular viewing and both groups were shown to benefit from binocular viewing compared to monocular viewing, suggesting that stereopsis contributes in the same manner to rotary-wing altitude estimation regardless of aviation experience. Future work might seek to extend these findings to more natural viewing conditions and link individual differences in stereopsis to altitude estimation performance. Learning Objective 1) Learn about the relevance of stereopsis to altitude estimation for rotary wing aviators. MOC Questions 1) Stereopsis is based on the processing of binocular cues (T/F). T 2) Altitude can be estimated based on monocular cues (T/F). T \n
inproceedings (2)
Transsaccadic Awareness of Scene Transformations in a 3D Virtual Environment. Keyvanara, M., & Allison, R. S. In ACM Symposium on Applied Perception (SAP '19), September 19–20, 2019, Barcelona, Spain, Article 19, pages 1-9, 2019.
@inproceedings{Keyvanara:2019aa,\n\tabstract = {In gaze-contingent displays, the viewer's eye movement data are processed in real-time to adjust the graphical content. To provide a high-quality user experience, these graphical updates must occur with minimum delay. Such updates can be used to introduce imperceptible changes in virtual camera pose in applications such as networked gaming, collaborative virtual reality and redirected walking. For such applications, perceptual saccadic suppression can help to hide the graphical artifacts. We investigated whether the visibility of these updates depends on the type of image transformation. Users viewed 3D scenes in which the displacement of a target object triggered them to generate a vertical or horizontal saccade, during which a translation or rotation was applied to the virtual camera used to render the scene. After each trial, users indicated the direction of the scene change in a forced-choice task. Results show that type and size of the image transformation affected change detectability. During horizontal or vertical saccades, rotations along the roll axis were the most detectable, while horizontal and vertical translations were least noticed. We confirm that large 3D adjustments to the scene viewpoint can be introduced unobtrusively and with low latency during saccades, but the allowable extent of the correction varies with the transformation applied.},\n\tannote = {SAP '19, September 19--20, 2019, Barcelona, Spain},\n\tauthor = {Keyvanara, Maryam and Allison, R. S.},\n\tbooktitle = {ACM Symposium on Applied Perception (SAP '19), September 19--20, 2019, Barcelona, Spain},\n\tdate-added = {2019-07-08 22:01:44 -0400},\n\tdate-modified = {2020-01-20 09:41:13 -0500},\n\tdoi = {10.1145/3343036.3343121},\n\tkeywords = {Eye Movements & Tracking},\n\tpages = {Article 19, 1-9},\n\ttitle = {Transsaccadic Awareness of Scene Transformations in a 3D Virtual Environment},\n\tyear = {2019},\n\turl-1 = {https://doi.org/10.1145/3343036.3343121}}\n\n
In gaze-contingent displays, the viewer's eye movement data are processed in real-time to adjust the graphical content. To provide a high-quality user experience, these graphical updates must occur with minimum delay. Such updates can be used to introduce imperceptible changes in virtual camera pose in applications such as networked gaming, collaborative virtual reality and redirected walking. For such applications, perceptual saccadic suppression can help to hide the graphical artifacts. We investigated whether the visibility of these updates depends on the type of image transformation. Users viewed 3D scenes in which the displacement of a target object triggered them to generate a vertical or horizontal saccade, during which a translation or rotation was applied to the virtual camera used to render the scene. After each trial, users indicated the direction of the scene change in a forced-choice task. Results show that type and size of the image transformation affected change detectability. During horizontal or vertical saccades, rotations along the roll axis were the most detectable, while horizontal and vertical translations were least noticed. We confirm that large 3D adjustments to the scene viewpoint can be introduced unobtrusively and with low latency during saccades, but the allowable extent of the correction varies with the transformation applied.
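To make the underlying technique concrete for readers outside the eye-tracking community: the paradigm relies on detecting a saccade in flight and committing a small camera change before it lands. The sketch below is a minimal illustration under assumed, hypothetical interfaces (sample-to-sample gaze in degrees, a camera object with a translate method); it is not the authors' implementation, and the velocity threshold is a generic placeholder.

```python
# Minimal sketch of a saccade-contingent camera update (hypothetical APIs).
# A velocity threshold flags a saccade in flight; while it is in flight, a
# pending viewpoint correction is applied so it lands under saccadic suppression.
import math

SACCADE_VEL_DEG_S = 150.0  # generic onset threshold; real systems tune this

def gaze_speed(g0, g1, dt):
    """Angular gaze speed (deg/s) from two (azimuth, elevation) samples."""
    return math.hypot(g1[0] - g0[0], g1[1] - g0[1]) / dt

def update(camera, prev_gaze, gaze, dt, pending_offset):
    """Consume the pending camera offset only while a saccade is detected."""
    if pending_offset is not None and gaze_speed(prev_gaze, gaze, dt) > SACCADE_VEL_DEG_S:
        camera.translate(pending_offset)  # hypothetical camera interface
        return None                       # offset consumed during the saccade
    return pending_offset                 # otherwise wait for the next saccade
```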
3-4: Stereoscopic Image Quality Assessment. Au, D., Mohona, S., Cutone, M. D., Hou, Y., Goel, J., Jacobson, N., Allison, R. S., & Wilcox, L. M. In SID Symposium Digest of Technical Papers, volume 50, pages 13-16, 2019.
@inproceedings{Au:aa,\n\tauthor = {Au, D. and Mohona, S. and Cutone, M. D. and Hou, Y. and Goel, J. and Jacobson, N. and Allison, R. S. and Wilcox, L. M.},\n\tbooktitle = {SID Symposium Digest of Technical Papers},\n\tdate-added = {2019-06-05 21:44:21 -0400},\n\tdate-modified = {2019-06-15 11:24:05 -0400},\n\tdoi = {10.1002/sdtp.12843},\n\tkeywords = {Image Quality},\n\tnumber = {1},\n\tpages = {13-16},\n\ttitle = {3-4: Stereoscopic Image Quality Assessment},\n\turl-1 = {https://doi.org/10.1002/sdtp.12843},\n\tvolume = {50},\n\tyear = {2019},\n\turl-1 = {https://doi.org/10.1002/sdtp.12843}}\n\n
techreport (1)
Estimates of simulated ground relief as an operational test of stereoacuity for aviators. Sudhama, A., Hartle, B., Allison, R. S., Irving, E. L., & Wilcox, L. M. Technical Report DRDC-RDDC-2019-C119, Defence Research and Development Canada, 2019.
@techreport{Sudhama:2019aa,\n\tabstract = {Stereopsis is not currently a visual requirement for aircrew in the Royal Canadian Air force; however, it has been shown to be relevant to some aviation manoeuvers, particularly aerial refueling and landing. Commercial tests of stereoacuity are widely used to assess stereopsis in clinical practice but may not predict performance in real-world scenarios and tasks. In this series of experiments, we have made the first steps towards development of a stereoscopic depth discrimination task using naturalistic stimuli and a task (terrain relief judgement) that is relevant to flight crew. Stimuli consist of a stereoscopically rendered grassy terrain with a central mound or a dip with varying height/depth. We measured thresholds for discrimination of the direction of the depth offset. For comparison and validation of our Terrain test we also measured observers' performance on a set of commercial (Randot, StereoFly) and purpose-designed stereoacuity tests: the Ledge test, the Bar test, and the United States Air Force School of Aerospace Medicine (USAFSAM) Operational-based Vision Assessment (OBVA) Ring stereo test as additional comparison tests. To assess the impact of uninformative 2D shading cues on depth judgements in our Terrain test, we manipulated the intensity of the shading (low and high). Our results show that the Terrain test can be used as a test for stereovision, and thresholds are measureable for most observers in the low shading condition. However, as shading is intensified, a large proportion of observers (30\\%) exhibit a strong convexity bias, resulting in reversals in perceived depth. Although the test is a promising measure of stereo, the bias tends to erode the usefulness in this regard. Currently, our analyses show weak correlation between thresholds obtained using our Terrain test and the other stereoacuity tests. However, this is possibly due to the narrow range of, primarily low, thresholds in this set of observers and additional testing with individuals with a broader range of stereoscopic ability is required. },\n\tauthor = {Sudhama, Aishwarya and Hartle, Brittney and Allison, Robert S. and Irving, Elizabeth L. and Wilcox, Laurie M.},\n\tdate-added = {2018-12-24 13:50:07 -0500},\n\tdate-modified = {2019-09-27 11:02:02 -0400},\n\tinstitution = {Defence Research and Development Canada},\n\tkeywords = {Stereopsis},\n\tnumber = {DRDC-RDDC-2019-C119},\n\ttitle = {Estimates of simulated ground relief as an operational test of stereoacuity for aviators},\n\tyear = {2019}}\n\n
Stereopsis is not currently a visual requirement for aircrew in the Royal Canadian Air Force; however, it has been shown to be relevant to some aviation manoeuvres, particularly aerial refueling and landing. Commercial tests of stereoacuity are widely used to assess stereopsis in clinical practice but may not predict performance in real-world scenarios and tasks. In this series of experiments, we have made the first steps towards development of a stereoscopic depth discrimination task using naturalistic stimuli and a task (terrain relief judgement) that is relevant to flight crew. Stimuli consist of a stereoscopically rendered grassy terrain with a central mound or dip of varying height/depth. We measured thresholds for discrimination of the direction of the depth offset. For comparison and validation of our Terrain test, we also measured observers' performance on a set of commercial (Randot, StereoFly) and purpose-designed stereoacuity tests: the Ledge test, the Bar test, and the United States Air Force School of Aerospace Medicine (USAFSAM) Operational-based Vision Assessment (OBVA) Ring stereo test. To assess the impact of uninformative 2D shading cues on depth judgements in our Terrain test, we manipulated the intensity of the shading (low and high). Our results show that the Terrain test can be used as a test of stereo vision, and thresholds are measurable for most observers in the low shading condition. However, as shading is intensified, a large proportion of observers (30%) exhibit a strong convexity bias, resulting in reversals in perceived depth. Although the test is a promising measure of stereopsis, this bias tends to erode its usefulness. Currently, our analyses show weak correlation between thresholds obtained using our Terrain test and the other stereoacuity tests. However, this is possibly due to the narrow range of (primarily low) thresholds in this set of observers; additional testing with individuals with a broader range of stereoscopic ability is required.
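The report's abstract does not state which psychophysical procedure produced the discrimination thresholds; for readers who want a concrete picture of threshold estimation in general, here is a generic 1-up/2-down adaptive staircase (converging near 70.7% correct). It is an illustration of the class of method, not the authors' protocol, and run_trial is a hypothetical callback.

```python
# Generic 1-up/2-down staircase for a discrimination threshold (illustration
# of the method class only; the report does not state the procedure used).
def staircase(run_trial, start=10.0, step=1.0, reversals_needed=8):
    level, streak, direction = start, 0, -1
    reversals = []
    while len(reversals) < reversals_needed:
        if run_trial(level):          # True if the observer responded correctly
            streak += 1
            if streak == 2:           # two correct in a row -> harder
                streak = 0
                if direction == +1:   # was getting easier -> reversal
                    reversals.append(level)
                direction = -1
                level = max(level - step, 0.0)
        else:                         # one error -> easier
            streak = 0
            if direction == -1:       # was getting harder -> reversal
                reversals.append(level)
            direction = +1
            level += step
    return sum(reversals) / len(reversals)  # threshold ~ mean reversal level
```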
2018 (4)

article (2)
Perspectives on the definition of visually lossless for mobile and large format displays. Allison, R. S., Brunnström, K., Chandler, D. M., Colett, H., Corriveau, P., Daly, S., Goel, J., Knopf, J., Wilcox, L. M., Yaacob, Y., Yang, S., & Zhang, Y. Journal of Electronic Imaging, 27(5): 053035. 2018.
@article{Allison:2018fr,\n\tabstract = {Advances in imaging and display engineering have given rise to improved and new image and video applications which aim to maximize visual quality under given resource constraints (e.g., power, bandwidth). Because the human visual system is an imperfect sensor, the images/videos can be represented in a mathematically lossy fashion, but with enough fidelity that the losses are visually imperceptible---commonly termed ``visually lossless.'' Although, a great deal of research has focused on gaining a better understanding of the limits of human vision when viewing natural images/video, a universally or even largely accepted definition of ``visually lossless'' remains elusive. Differences in testing methodologies, research objectives, and target applications have led to multiple ad-hoc definitions that are often difficult to compare to or otherwise employ in other settings. In this paper, we present a compendium of technical experiments relating to both vision science and visual quality testing that together explore the research and business perspectives of visually lossless image quality, as well as review recent scientific advances. Together, the studies presented in this paper suggest that a single definition of visually lossless quality might not be appropriate; rather, a better goal would be to establish varying levels of visually lossless quality that can be quantified in terms of the testing paradigm. },\n\tannote = {Manuscript Title: \nAuthors: Robert Allison, Kjell Brunnstrom, Damon Chandler, Hannah Colett, Philip Corriveau, Scott Daly, James Goel, Juliana Knopf, Laurie Wilcox, Yusizwan Yaacob, Shun-nan Yang, and Yi Zhang\nPaper Number: JEI 170771P\n},\n\tauthor = {Allison, R. S. and Brunnstr\\"om, K. and Chandler, D. M. and Colett, H. and Corriveau, P. and Daly, S. and Goel, J. and Knopf, J. and Wilcox, L. M. and Yaacob, Y. and Yang, S. and Zhang, Y.},\n\tdate-added = {2018-09-12 14:13:00 +0000},\n\tdate-modified = {2019-02-03 08:55:51 -0500},\n\tdoi = {10.1117/1.JEI.27.5.053035},\n\tjournal = {Journal of Electronic Imaging},\n\tkeywords = {Image Quality},\n\tnumber = {5},\n\tpages = {053035},\n\ttitle = {Perspectives on the definition of visually lossless for mobile and large format displays},\n\turl-1 = {https://doi.org/10.1117/1.JEI.27.5.053035},\n\tvolume = {27},\n\tyear = {2018},\n\turl-1 = {https://doi.org/10.1117/1.JEI.27.5.053035}}\n\n
Advances in imaging and display engineering have given rise to improved and new image and video applications which aim to maximize visual quality under given resource constraints (e.g., power, bandwidth). Because the human visual system is an imperfect sensor, images and videos can be represented in a mathematically lossy fashion, but with enough fidelity that the losses are visually imperceptible, commonly termed "visually lossless." Although a great deal of research has focused on gaining a better understanding of the limits of human vision when viewing natural images/video, a universally or even largely accepted definition of "visually lossless" remains elusive. Differences in testing methodologies, research objectives, and target applications have led to multiple ad hoc definitions that are often difficult to compare to or otherwise employ in other settings. In this paper, we present a compendium of technical experiments relating to both vision science and visual quality testing that together explore the research and business perspectives of visually lossless image quality, as well as review recent scientific advances. Together, the studies presented in this paper suggest that a single definition of visually lossless quality might not be appropriate; rather, a better goal would be to establish varying levels of visually lossless quality that can be quantified in terms of the testing paradigm.
Smoothness of stimulus motion can affect vection strength. Fujii, Y., Seno, T., & Allison, R. S. Experimental Brain Research, 236(1): 243–252. 2018.
@article{Fujii:aa,\n\tauthor = {Fujii, Y. and Seno, T and Allison, R. S.},\n\tdate-added = {2017-11-04 23:09:56 +0000},\n\tdate-modified = {2018-03-19 12:55:01 +0000},\n\tdoi = {10.1007/s00221-017-5122-1},\n\tjournal = {Experimental Brain Research},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tnumber = {1},\n\tpages = {243--252},\n\ttitle = {Smoothness of stimulus motion can affect vection strength},\n\turl-1 = {http://dx.doi.org/10.1007/s00221-017-5122-1},\n\tvolume = {236},\n\tyear = {2018},\n\turl-1 = {https://doi.org/10.1007/s00221-017-5122-1}}\n\n
incollection (1)
The CSA VECTION project: interpreting visual acceleration in a micro-g environment. Harris, L. R., Jenkin, M. R., Allison, R. S., McManus, M., & Bury, N. In CASI Aero 2018, Quebec, Canada, May 15-17, 2018, pages 69-70. 2018.
@incollection{Harris:2018aa,\n\tabstract = {Physical linear acceleration experienced in micro-g can be misinterpreted as gravity and induce \na sense of tilt (Cl\\'ement et al. 2001. EBR 138: 410). Previous studies have suggested that visual \nacceleration may be more effective at inducing self-motion (vection) in micro-g and \nfurthermore that perceived distance may be underestimated in these environments. The \nVECTION project will in the micro-g of the ISS: (1) quantify how much self-motion is evoked by \nvisual acceleration; (2) investigate whether visual acceleration can be interpreted as gravity; \nand (3) exploit size-distance invariance hypothesis to assess the perception of distance. \nForwards vection will be created using constant-acceleration (0.8m/s/s) translation down a \nvirtual corridor presented to astronauts using a head-mounted display (HMD). We will assess \nthe perceived distance of travel by asking them to indicate when they reached a previously \npresented target. In a second experiment lateral vection will be evoked by a period of sideways \nvisual acceleration. Following the experience the screen will be blanked. If this acceleration is \ninterpreted as gravity it will evoke a sense of tilt comparable to that found in the Cl\\'ement et al. \nstudy. Perceived tilt will be assessed by aligning a line with the previously viewed floor of the \nsimulated corridor. In a third experiment distance perception will be measured by asking \nastronauts to compare the size of an object presented at a known simulated distance with a \nphysical reference held in their hands. In all cases data will be compared to ground control data \ntaken before each astronaut's mission. Control data will also be collected from an age-and-\ngender-matched sample of na\\:ive, earth-bound participants tested at approximately the same \nintervals as the astronauts. \nThe on-orbit will be collected between 2018 and 2021 although ground-based testing has \nalready commenced. VECTION will significantly improve safety wherever movement under \nmicrogravity conditions is required. },\n\tauthor = {Harris, L. R. and Jenkin, M. R. and Allison, R. S. and McManus, M. and Bury, N.},\n\tbooktitle = {CASI Aero 2018, Quebec, Canada May 15-17, 2018},\n\tdate-added = {2018-09-12 14:24:37 +0000},\n\tdate-modified = {2024-03-04 22:15:38 -0500},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tpages = {69-70},\n\ttitle = {The CSA VECTION project: interpreting visual acceleration in a micro-g environment},\n\tyear = {2018}}\n\n
Physical linear acceleration experienced in micro-g can be misinterpreted as gravity and induce a sense of tilt (Clément et al. 2001. EBR 138: 410). Previous studies have suggested that visual acceleration may be more effective at inducing self-motion (vection) in micro-g, and furthermore that perceived distance may be underestimated in these environments. In the micro-g of the ISS, the VECTION project will: (1) quantify how much self-motion is evoked by visual acceleration; (2) investigate whether visual acceleration can be interpreted as gravity; and (3) exploit the size-distance invariance hypothesis to assess the perception of distance. Forward vection will be created using constant-acceleration (0.8 m/s²) translation down a virtual corridor presented to astronauts using a head-mounted display (HMD). We will assess the perceived distance of travel by asking them to indicate when they have reached a previously presented target. In a second experiment, lateral vection will be evoked by a period of sideways visual acceleration. Following the exposure, the screen will be blanked. If this acceleration is interpreted as gravity, it will evoke a sense of tilt comparable to that found in the Clément et al. study. Perceived tilt will be assessed by aligning a line with the previously viewed floor of the simulated corridor. In a third experiment, distance perception will be measured by asking astronauts to compare the size of an object presented at a known simulated distance with a physical reference held in their hands. In all cases, data will be compared to ground control data taken before each astronaut's mission. Control data will also be collected from an age- and gender-matched sample of naïve, earth-bound participants tested at approximately the same intervals as the astronauts. The on-orbit data will be collected between 2018 and 2021, although ground-based testing has already commenced. VECTION will significantly improve safety wherever movement under microgravity conditions is required.
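For reference, the size-distance invariance hypothesis exploited in the third experiment has a standard small-angle form: with the retinal angle θ fixed by the simulation, a size match against a physical reference implies a perceived distance. A compact statement (standard textbook form, not specific to this abstract):

```latex
% Size-distance invariance hypothesis, standard small-angle form:
% an object of linear size S at distance D subtends visual angle \theta.
S = 2D \tan\!\left(\frac{\theta}{2}\right) \approx D\,\theta
\qquad\Rightarrow\qquad
\hat{D} = \frac{\hat{S}}{\theta}
% so a size match \hat{S} at a known simulated angle \theta implies
% the perceived distance \hat{D}.
```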
inproceedings (5)
The Effects of Visual and Control Latency on Piloting a Quadcopter Using a Head-Mounted Display. Zhao, J., Allison, R. S., Vinnikov, M., & Jennings, S. In 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pages 2972–2979, October 2018.
@inproceedings{zhao_effects_2018,\n\tabstract = {Recent research has proposed teleoperation of robotic and aerial vehicles using head motion tracked by a head-mounted display (HMD). First-person views of the vehicles are usually captured by onboard cameras and presented to users through the display panels of HMDs. This provides users with a direct, immersive and intuitive interface for viewing and control. However, a typically overlooked factor in such designs is the latency introduced by the vehicle dynamics. As head motion is coupled with visual updates in such applications, visual and control latency always exists between the issue of control commands by head movements and the visual feedback received at the completion of the attitude adjustment. This causes a discrepancy between the intended motion, the vestibular cue and the visual cue and may potentially result in simulator sickness. No research has been conducted on how various levels of visual and control latency introduced by dynamics in robots or aerial vehicles affect users' performance and the degree of simulator sickness elicited. Thus, it is uncertain how much performance is degraded by latency and whether such designs are comfortable from the perspective of users. To address these issues, we studied a prototyped scenario of a head motion controlled quadcopter using an HMD. We present a virtual reality (VR) paradigm to systematically assess the effects of visual and control latency in simulated drone control scenarios.},\n\tauthor = {Zhao, J. and Allison, R. S. and Vinnikov, M. and Jennings, S.},\n\tbooktitle = {2018 {IEEE} {International} {Conference} on {Systems}, {Man}, and {Cybernetics} ({SMC})},\n\tdate-added = {2019-02-03 08:17:58 -0500},\n\tdate-modified = {2019-04-13 15:32:08 -0400},\n\tdoi = {10.1109/SMC.2018.00505},\n\tkeywords = {Augmented & Virtual Reality},\n\tmonth = 10,\n\tpages = {2972--2979},\n\ttitle = {The {Effects} of {Visual} and {Control} {Latency} on {Piloting} a {Quadcopter} {Using} a {Head}-{Mounted} {Display}},\n\turl = {https://arxiv.org/abs/1807.11123},\n\turl-1 = {https://doi.org/10.1109/SMC.2018.00505},\n\tyear = {2018},\n\turl-1 = {https://arxiv.org/abs/1807.11123},\n\turl-2 = {https://doi.org/10.1109/SMC.2018.00505}}\n\n
Recent research has proposed teleoperation of robotic and aerial vehicles using head motion tracked by a head-mounted display (HMD). First-person views of the vehicles are usually captured by onboard cameras and presented to users through the display panels of HMDs. This provides users with a direct, immersive and intuitive interface for viewing and control. However, a typically overlooked factor in such designs is the latency introduced by the vehicle dynamics. As head motion is coupled with visual updates in such applications, visual and control latency always exists between the issuing of control commands by head movement and the visual feedback received at the completion of the attitude adjustment. This causes a discrepancy between the intended motion, the vestibular cue and the visual cue, and may potentially result in simulator sickness. No research has been conducted on how the various levels of visual and control latency introduced by the dynamics of robots or aerial vehicles affect users' performance and the degree of simulator sickness elicited. Thus, it is uncertain how much performance is degraded by latency and whether such designs are comfortable from the perspective of users. To address these issues, we studied a prototype scenario of a head-motion-controlled quadcopter using an HMD. We present a virtual reality (VR) paradigm to systematically assess the effects of visual and control latency in simulated drone control scenarios.
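A common way to realize the kind of latency manipulation this paradigm requires is a FIFO delay line between the head-pose command stream and the rendered view. The sketch below is a generic illustration of that technique, with assumed names and rates; it is not the simulator described in the paper.

```python
# Generic FIFO delay line for injecting a fixed visual/control latency between
# head-pose commands and the rendered view (illustration; not the paper's code).
from collections import deque

class DelayLine:
    def __init__(self, frames_of_delay, neutral_command):
        # Pre-fill so early outputs are a neutral command rather than missing.
        self.buf = deque([neutral_command] * frames_of_delay)

    def step(self, command):
        """Push the newest command; pop the delayed one to drive the display."""
        self.buf.append(command)
        return self.buf.popleft()

# Example: add 200 ms of latency at an assumed 90 Hz update rate.
delay = DelayLine(frames_of_delay=round(0.200 * 90), neutral_command=(0.0, 0.0, 0.0))
```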
Sensitivity to Natural 3D Image Transformations During Eye Movements. Keyvanara, M., & Allison, R. S. In Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications (ETRA '18), pages 64:1–64:5, New York, NY, USA, 2018. ACM.
@inproceedings{Keyvanara:2018:SNI:3204493.3204583,\n\tacmid = {3204583},\n\taddress = {New York, NY, USA},\n\tauthor = {Keyvanara, Maryam and Allison, Robert S.},\n\tbooktitle = {Proceedings of the 2018 ACM Symposium on Eye Tracking Research \\& Applications},\n\tdate-added = {2018-06-30 22:18:20 +0000},\n\tdate-modified = {2023-10-27 11:10:26 -0400},\n\tdoi = {10.1145/3204493.3204583},\n\tisbn = {978-1-4503-5706-7},\n\tkeywords = {Eye Movements & Tracking, Augmented & Virtual Reality},\n\tlocation = {Warsaw, Poland},\n\tnumpages = {5},\n\tpages = {64:1--64:5},\n\tpublisher = {ACM},\n\tseries = {ETRA 18},\n\ttitle = {Sensitivity to Natural 3D Image Transformations During Eye Movements},\n\turl = {https://percept.eecs.yorku.ca/papers/Maryma ETRA_2018 preprint.pdf},\n\tyear = {2018},\n\turl-1 = {http://doi.acm.org/10.1145/3204493.3204583},\n\turl-2 = {https://doi.org/10.1145/3204493.3204583}}\n\n
85-1: Visually Lossless Compression of High Dynamic Range Images: A Large-Scale Evaluation. Sudhama, A., Cutone, M. D., Hou, Y., Goel, J., Stolitzka, D., Jacobson, N., Allison, R. S., & Wilcox, L. M. In SID Symposium Digest of Technical Papers, volume 49, pages 1151–1154, 2018.
@inproceedings{sudhama201885,\n\tauthor = {Sudhama, Aishwarya and Cutone, Matthew D and Hou, Yuqian and Goel, James and Stolitzka, Dale and Jacobson, Natan and Allison, Robert S and Wilcox, Laurie M},\n\tbooktitle = {SID Symposium Digest of Technical Papers},\n\tdate-modified = {2018-06-06 22:08:01 +0000},\n\tdoi = {10.1002/sdtp.12106},\n\tkeywords = {Image Quality},\n\tnumber = {1},\n\tpages = {1151--1154},\n\ttitle = {85-1: Visually Lossless Compression of High Dynamic Range Images: A Large-Scale Evaluation},\n\turl-1 = {https://doi.org/10.1002/sdtp.12106},\n\tvolume = {49},\n\tyear = {2018},\n\turl-1 = {https://doi.org/10.1002/sdtp.12106}}\n\n
P-31: A Statistical Paradigm for Assessment of Subjective Image Quality Results. Cutone, M. D., Dalecki, M., Goel, J., Wilcox, L. M., & Allison, R. S. In SID Symposium Digest of Technical Papers, volume 49, pages 1312–1314, 2018.
@inproceedings{cutone2018p,\n\tauthor = {Cutone, Matthew D and Dalecki, Marc and Goel, James and Wilcox, Laurie M and Allison, Robert S},\n\tbooktitle = {SID Symposium Digest of Technical Papers},\n\tdate-modified = {2018-06-06 22:08:46 +0000},\n\tdoi = {10.1002/sdtp.12154},\n\tkeywords = {Image Quality},\n\tnumber = {1},\n\tpages = {1312--1314},\n\ttitle = {P-31: A Statistical Paradigm for Assessment of Subjective Image Quality Results},\n\turl-1 = {https://doi.org/10.1002/sdtp.12154},\n\tvolume = {49},\n\tyear = {2018},\n\turl-1 = {https://doi.org/10.1002/sdtp.12154}}\n\n
Learning Gait Parameters for Locomotion in Virtual Reality Systems. Zhao, J., & Allison, R. S. In Wannous, H., Pala, P., Daoudi, M., & Flórez-Revuelta, F., editors, Understanding Human Activities Through 3D Sensors (UHA3DS 2016), volume 10188 of Lecture Notes in Computer Science, pages 59-73, 2018.
@inproceedings{Zhao:2016ab,\n\tabstract = {Mechanical repositioning is a locomotion technique that uses a mechanical device (i.e. locomotion interface), such as treadmills and pedaling devices, to cancel the displacement of a user for walking on the spot. This technique is especially useful for virtual reality (VR) systems that use large-scale projective displays for visualization. In this paper, we present a machine learning approach for developing a mechanical repositioning technique based on a 1-D treadmill for interacting with a unique new large-scale projective display, named as the Wide-Field Immersive Stereoscopic Environment (WISE). We also assessed the usability of the proposed approach through a novel user study that asked participants to pursue a rolling ball at variable speed in a virtual scene. Our results show that participants differ in their ability to carry out the task. We provide an explanation for the variable performance of the participants based on the locomotion technique.},\n\tannote = {2nd International Workshop on Understanding Human Activities through 3D Sensors (UHA3DS'16)\nDec 4 , 2016, Mexico, Mexico},\n\tauthor = {Zhao, J. and Allison, R. S.},\n\tbooktitle = {Understanding Human Activities Through 3D Sensors. UHA3DS 2016.},\n\tdate-added = {2016-12-04 21:58:08 +0000},\n\tdate-modified = {2018-05-25 00:29:53 +0000},\n\tdoi = {10.1007/978-3-319-91863-1_5},\n\teditor = {Hazem Wannous and Pietro Pala and Mohamed Daoudi and Francisco Fl{\\'o}rez-Revuelta},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tpages = {59-73},\n\tseries = {Lecture Notes in Computer Science},\n\ttitle = {Learning Gait Parameters for Locomotion in Virtual Reality Systems},\n\turl-1 = {https://doi.org/10.1007/978-3-319-91863-1_5},\n\tvolume = {10188},\n\tyear = {2018},\n\turl-1 = {https://doi.org/10.1007/978-3-319-91863-1_5}}\n\n
Mechanical repositioning is a locomotion technique that uses a mechanical device (i.e., a locomotion interface), such as a treadmill or pedaling device, to cancel the displacement of a user walking on the spot. This technique is especially useful for virtual reality (VR) systems that use large-scale projective displays for visualization. In this paper, we present a machine learning approach for developing a mechanical repositioning technique based on a 1-D treadmill for interacting with a unique new large-scale projective display, named the Wide-Field Immersive Stereoscopic Environment (WISE). We also assessed the usability of the proposed approach through a novel user study that asked participants to pursue a rolling ball at variable speed in a virtual scene. Our results show that participants differ in their ability to carry out the task. We provide an explanation for the variable performance of the participants based on the locomotion technique.
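The abstract leaves the learning model unspecified; as a toy illustration of the general idea of learning a mapping from gait parameters to walking speed (which can then drive the treadmill to cancel displacement), an ordinary least-squares fit suffices. All features, data and names below are hypothetical:

```python
# Toy illustration: learn walking speed from gait parameters with ordinary
# least squares. Features, data and names are hypothetical; the paper's
# actual learning approach is not specified in the abstract.
import numpy as np

# Hypothetical training data: [step frequency (Hz), step length (m)] -> speed (m/s)
X = np.array([[1.6, 0.60], [1.8, 0.65], [2.0, 0.70], [2.2, 0.75]])
y = np.array([0.96, 1.17, 1.40, 1.65])          # roughly frequency * length

A = np.hstack([X, np.ones((len(X), 1))])        # add an intercept column
w, *_ = np.linalg.lstsq(A, y, rcond=None)       # fit the weights

def predict_speed(freq_hz, length_m):
    """Predicted speed can drive the treadmill to cancel the user's displacement."""
    return w[0] * freq_hz + w[1] * length_m + w[2]
```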
misc (1)
ICECUBE LED Display. Hosale, M. D., Madsen, J., & Allison, R. S. Art Exhibit (Exhibition Catalog) at Colour: what do you mean by that?, March 2018.
@misc{Hosale:2018aa,\n\tabstract = {\nICECUBE LED Display [ILDm\\^3] is a cubic-meter (1/1000th scale) model of the IceCube Neutrino Observatory.  This novel telescope looks for nearly invisible cosmic messengers, neutrinos, using a cubic-kilometer of instrumented ice starting 1450 meters below the surface at the South Pole.  ILDm\\^3 sits low to the ground on a base of wood that supports 86 acrylic rods, each with 60 vertically arranged full-colour LED's (5,160 total). Each LED is a representation of a sensor, a.k.a. Digital Optical Module (DOM), in the South Pole array. A small interactive interface is used to control the work. Spatial sonification (sound mapping) of the data enhances the representation of events on the model, allowing observers to audio-locate events, as well as see them. \n\nThis is a second, smaller version of an eight cubic meter (1:500 scale) display we developed previously. Through these projects we attempt to create a high quality experience of the information being presented with the goal of knowledge dissemination and the ability to convey an intuitive understanding and the experience of data. The models represent an epistemological nexus between art and science. While scientifically precise, the display uses art methodologies as an optimal means for expressing imperceptible astrophysical events as sound, light and colour in the domain of the human sensorium. The result is an experience that is as aesthetically critical as it is facilitatory to an intuitive understanding of sub-atomic astrophysical data, leading to new ways of knowing about our Universe and its processes.\n\nfor more info visit: \nhttp://icecube.wisc.edu/\nndstudiolab.com/projects/icecube/\n\n},\n\tauthor = {Hosale, M. D. and Madsen, J. and Allison, R. S.},\n\tdate-added = {2018-02-14 03:03:53 +0000},\n\tdate-modified = {2019-02-03 08:35:07 -0500},\n\thowpublished = {Art Exhibit (Exhibition Catalog) at Colour: what do you mean by that?},\n\tkeywords = {Misc.},\n\tmonth = {03},\n\ttitle = {ICECUBE LED Display},\n\tyear = {2018}}\n\n
ICECUBE LED Display [ILDm^3] is a cubic-meter (1/1000th scale) model of the IceCube Neutrino Observatory. This novel telescope looks for nearly invisible cosmic messengers, neutrinos, using a cubic-kilometer of instrumented ice starting 1450 meters below the surface at the South Pole. ILDm^3 sits low to the ground on a base of wood that supports 86 acrylic rods, each with 60 vertically arranged full-colour LEDs (5,160 total). Each LED is a representation of a sensor, a.k.a. Digital Optical Module (DOM), in the South Pole array. A small interactive interface is used to control the work. Spatial sonification (sound mapping) of the data enhances the representation of events on the model, allowing observers to audio-locate events, as well as see them. This is a second, smaller version of an eight-cubic-meter (1:500 scale) display we developed previously. Through these projects we attempt to create a high-quality experience of the information being presented, with the goal of knowledge dissemination and the ability to convey an intuitive understanding and the experience of data. The models represent an epistemological nexus between art and science. While scientifically precise, the display uses art methodologies as an optimal means for expressing imperceptible astrophysical events as sound, light and colour in the domain of the human sensorium. The result is an experience that is as aesthetically critical as it is facilitatory to an intuitive understanding of sub-atomic astrophysical data, leading to new ways of knowing about our Universe and its processes. For more info visit: http://icecube.wisc.edu/ and ndstudiolab.com/projects/icecube/
2017 (4)

article (3)
Cortical Correlates of the Simulated Viewpoint Oscillation Advantage for Vection. Kirollos, R., Allison, R. S., & Palmisano, S. A. Multisensory Research, 30(7-8): 739-761. 2017.
@article{Kirollos:sf,\n\tabstract = {Behavioural studies have consistently found stronger vection responses for oscillating, compared\nto smooth/constant, patterns of radial flow (the simulated viewpoint oscillation advantage for vection).\nTraditional accounts predict that simulated viewpoint oscillation should impair vection by\nincreasing visual--vestibular conflicts in stationary observers (as this visual oscillation simulates selfaccelerations\nthat should strongly stimulate the vestibular apparatus). However, support for increased\nvestibular activity during accelerating vection has been mixed in the brain imaging literature. This\nfMRI study examined BOLD activity in visual (cingulate sulcus visual area --- CSv; medial temporal\ncomplex --- MT+; V6; precuneus motion area --- PcM) and vestibular regions (parieto-insular\nvestibular cortex --- PIVC/posterior insular cortex --- PIC; ventral intraparietal region --- VIP) when\nstationary observers were exposed to vection-inducing optic flow (i.e., globally coherent oscillating\nand smooth self-motion displays) as well as two suitable control displays. In line with earlier studies\nin which no vection occurred, CSv and PIVC/PIC both showed significantly increased BOLD activity\nduring oscillating global motion compared to the other motion conditions (although this effect was\nfound for fewer subjects in PIVC/PIC). The increase in BOLD activity in PIVC/PIC during prolonged\nexposure to the oscillating (compared to smooth) patterns of global optical flow appears consistent\nwith vestibular facilitation.\n},\n\tauthor = {Kirollos, R. and Allison, R. S. and Palmisano, S. A.},\n\tdate-added = {2017-06-26 22:02:24 +0000},\n\tdate-modified = {2018-01-02 15:42:33 +0000},\n\tdoi = {10.1163/22134808-00002593},\n\tjournal = {Multisensory Research},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tnumber = {7-8},\n\tpages = {739-761},\n\ttitle = {Cortical Correlates of the Simulated Viewpoint Oscillation Advantage for Vection},\n\turl-1 = {http://dx.doi.org/10.1163/22134808-00002593},\n\tvolume = {30},\n\tyear = {2017},\n\turl-1 = {https://doi.org/10.1163/22134808-00002593}}\n\n
Behavioural studies have consistently found stronger vection responses for oscillating, compared to smooth/constant, patterns of radial flow (the simulated viewpoint oscillation advantage for vection). Traditional accounts predict that simulated viewpoint oscillation should impair vection by increasing visual-vestibular conflicts in stationary observers (as this visual oscillation simulates self-accelerations that should strongly stimulate the vestibular apparatus). However, support for increased vestibular activity during accelerating vection has been mixed in the brain imaging literature. This fMRI study examined BOLD activity in visual (cingulate sulcus visual area, CSv; medial temporal complex, MT+; V6; precuneus motion area, PcM) and vestibular regions (parieto-insular vestibular cortex, PIVC/posterior insular cortex, PIC; ventral intraparietal region, VIP) when stationary observers were exposed to vection-inducing optic flow (i.e., globally coherent oscillating and smooth self-motion displays) as well as two suitable control displays. In line with earlier studies in which no vection occurred, CSv and PIVC/PIC both showed significantly increased BOLD activity during oscillating global motion compared to the other motion conditions (although this effect was found for fewer subjects in PIVC/PIC). The increase in BOLD activity in PIVC/PIC during prolonged exposure to the oscillating (compared to smooth) patterns of global optical flow appears consistent with vestibular facilitation.
Designing for the exceptional user: Nonhuman animal-computer interaction (ACI). Ritvo, S. E., & Allison, R. S. Computers in Human Behavior, 70: 222-233. 2017.
@article{Ritvo:hc,\n\tabstract = {Given the increasing variety, affordability and accessibility of sophisticated computer technologies, particularly webcam technologies, touchscreen interfaces, and wearables, there has been a marked increase in the application of such technology to enrich the lives of nonhuman animals (NHAs) and to study their cognitive abilities. However, the anthropocentric design of current computer systems is a barrier for successful adoption of such technologies by NHA users. NHA factors have not driven the design of the majority of computer technologies that has, or could be applied to this user population. This paper explores (a) how human-computer interaction (HCI) principles may apply (or not apply) to NHA-computer interaction (ACI), (b) how principles and computer system designs exclusive for ACI may be developed, and (c) how NHA-centered computer designs may benefit HCI and its user population.},\n\tauthor = {Ritvo, S. E. and Allison, R. S.},\n\tdate-added = {2016-12-30 04:24:15 +0000},\n\tdate-modified = {2017-04-24 21:22:44 +0000},\n\tdoi = {10.1016/j.chb.2016.12.062},\n\tjournal = {Computers in Human Behavior},\n\tkeywords = {Misc.},\n\tpages = {222-233},\n\ttitle = {Designing for the exceptional user: Nonhuman animal-computer interaction ({ACI})},\n\turl-1 = {http://dx.doi.org/10.1016/j.chb.2016.12.062},\n\tvolume = {70},\n\tyear = {2017},\n\turl-1 = {https://doi.org/10.1016/j.chb.2016.12.062}}\n\n
Given the increasing variety, affordability and accessibility of sophisticated computer technologies, particularly webcam technologies, touchscreen interfaces, and wearables, there has been a marked increase in the application of such technology to enrich the lives of nonhuman animals (NHAs) and to study their cognitive abilities. However, the anthropocentric design of current computer systems is a barrier to the successful adoption of such technologies by NHA users. NHA factors have not driven the design of the majority of computer technologies that have been, or could be, applied to this user population. This paper explores (a) how human-computer interaction (HCI) principles may apply (or not apply) to NHA-computer interaction (ACI), (b) how principles and computer system designs exclusive to ACI may be developed, and (c) how NHA-centered computer designs may benefit HCI and its user population.
Gaze-contingent Auditory Displays for Improved Spatial Attention. Vinnikov, M., Allison, R. S., & Fernandes, S. ACM TOCHI, 24(3): 19.1-19.38. 2017.
@article{Vinnikov:sf,\n\tabstract = {Virtual reality simulations of group social interactions are important for many applications including the virtual treatment of social phobias, crowd and group simulation, collaborative virtual environments and entertainment. In such scenarios, when compared to the real world audio cues are often impoverished. As a result, users cannot rely on subtle spatial audio-visual cues that guide attention and enable effective social interactions in real world situations. We explored whether gaze-contingent audio enhancement techniques driven by inferring audio-visual attention in virtual displays could be used to enable effective communication in cluttered audio virtual environments. In all of our experiments, we hypothesized that visual attention could be used as a tool to modulate the quality and intensity of sounds from multiple sources to efficiently and naturally select spatial sound sources. For this purpose, we built a gaze-contingent display that allowed tracking of a user's gaze in real-time and modifying the volume of the speakers' voices contingent on the current region of overt attention. We compared six different techniques for sound modulation with a base condition providing no attentional modulation of sound. The techniques were compared in terms of source recognition and preference in a set of user studies. Overall, we observed that users liked the ability to control the sounds with their eyes. They felt that a rapid change in attenuation with attention but not the elimination of competing sounds (partial rather than absolute selection) was most natural. In conclusion, audio gaze-contingent displays offer potential for simulating rich, natural social and other interactions in virtual environments. They should be considered for improving both performance and fidelity in applications related to social behaviour scenarios or when the user needs to work with multiple audio sources of information.},\n\tauthor = {Vinnikov, M. and Allison, R. S. and Fernandes, S.},\n\tbooktitle = {ACM CHI},\n\tdate-added = {2016-12-30 04:23:15 +0000},\n\tdate-modified = {2017-05-08 02:05:25 +0000},\n\tdoi = {10.1145/3067822},\n\tjournal = {{ACM TOCHI}},\n\tkeywords = {Eye Movements & Tracking},\n\tnumber = {3},\n\tpages = {19.1-19.38},\n\ttitle = {Gaze-contingent Auditory Displays for Improved Spatial Attention},\n\turl-1 = {http://dx.doi.org/10.1145/3067822},\n\tvolume = {24},\n\tyear = {2017},\n\turl-1 = {https://doi.org/10.1145/3067822}}\n\n
Virtual reality simulations of group social interactions are important for many applications including the virtual treatment of social phobias, crowd and group simulation, collaborative virtual environments and entertainment. In such scenarios, audio cues are often impoverished compared to the real world. As a result, users cannot rely on the subtle spatial audio-visual cues that guide attention and enable effective social interactions in real-world situations. We explored whether gaze-contingent audio enhancement techniques driven by inferring audio-visual attention in virtual displays could be used to enable effective communication in cluttered audio virtual environments. In all of our experiments, we hypothesized that visual attention could be used as a tool to modulate the quality and intensity of sounds from multiple sources to efficiently and naturally select spatial sound sources. For this purpose, we built a gaze-contingent display that tracked a user's gaze in real-time and modified the volume of the speakers' voices contingent on the current region of overt attention. We compared six different techniques for sound modulation with a base condition providing no attentional modulation of sound. The techniques were compared in terms of source recognition and preference in a set of user studies. Overall, we observed that users liked the ability to control the sounds with their eyes. They felt that a rapid change in attenuation with attention, but not the elimination of competing sounds (partial rather than absolute selection), was most natural. In conclusion, audio gaze-contingent displays offer potential for simulating rich, natural social and other interactions in virtual environments. They should be considered for improving both performance and fidelity in applications related to social behaviour scenarios or when the user needs to work with multiple audio sources of information.
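The preferred "partial selection" behaviour described above, attenuating rather than silencing non-attended sources, is easy to state in code. The sketch below is a generic illustration with assumed constants and names, not one of the six techniques actually tested:

```python
# Generic gaze-contingent attenuation: sources far from the gaze direction are
# made quieter but never silenced ("partial selection"). Constants and names
# are illustrative, not one of the six techniques compared in the paper.
import math

ROLLOFF_DEG = 15.0   # how quickly gain falls off with gaze eccentricity
FLOOR_GAIN = 0.2     # competing voices remain audible, just attenuated

def source_gain(gaze_deg, source_deg):
    """Gain in [FLOOR_GAIN, 1] from the angular distance to the gaze point."""
    eccentricity = abs(gaze_deg - source_deg)
    return max(math.exp(-(eccentricity / ROLLOFF_DEG) ** 2), FLOOR_GAIN)

def mix(gaze_deg, sources):
    """sources: list of (direction_deg, samples); returns the weighted mix."""
    n = len(sources[0][1])
    return [sum(source_gain(gaze_deg, d) * s[i] for d, s in sources)
            for i in range(n)]
```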
incollection (13)
Effects of motion picture frame rate on image quality. Allison, R. S., Fujii, Y., & Wilcox, L. M. In Proceedings of ECVP 2017, Perception, page 111. 2017.
@incollection{Allison:vn,\n\tabstract = {Modern digital cinema supports much higher frame rates (HFR) than the traditional 24 frames per second (fps). Theoretically, higher fidelity should allow viewers to see more detail. We filmed image sequences of a male and female actor (in different costume) at all combinations of two resolutions (2k and 4k), three frame rates (24, 48 and 60 fps), and two shutter angles (180\\degree and 358\\degree ).  We asked viewers (N = 26) to watch 20-s movie clips and to rate (1) the image sharpness and (2) the quality of the motion.\n\nMotion quality and image sharpness ratings improved with increasing frame rate, especially from 24 to 48 fps. The ratings of sharpness for 180\\degree shutter angle were higher than for 358\\degree , consistent with the expectation of more motion blur in the latter. The benefit of higher resolution depended on frame rate: at 24 fps, ratings of sharpness for the 4k sequences were similar to, or even lower than, ratings for the 2k sequences. We propose that motion blur was more apparent in the low frame rate 4k imagery because it could be compared with high resolution, static, portions of the same image.\n\nOur results show that na\\:ive observers perceive enhanced detail in moving fabrics and costumes in HFR film. This improved perception of detail could underlie both the positive and negative reactions to HFR film, depending on the nature of the content and whether it lends itself to such high fidelity. \n\n},\n\tannote = {ECVP 2017 Berlin, Germany\n\n Poster/Talk presented at the European Conference on Visual Perception 2017, Berlin, Germany. Retrieved from URL: http://journals.sagepub.com/page/pec/collections/ecvp-abstracts/index/ecvp-2017 },\n\tauthor = {Allison, R. S. and Fujii, Y. and Wilcox, L. M.},\n\tbooktitle = {Proceedings of ECVP 2017, Perception},\n\tdate-added = {2018-11-27 12:57:16 -0500},\n\tdate-modified = {2019-02-03 09:00:21 -0500},\n\tkeywords = {Image Quality},\n\tpages = {111},\n\ttitle = {Effects of motion picture frame rate on image quality},\n\turl = {http://journals.sagepub.com/page/pec/collections/ecvp-abstracts/index/ecvp-2017},\n\turl-1 = {http://journals.sagepub.com/page/pec/collections/ecvp-abstracts/index/ecvp-2017},\n\tyear = {2017},\n\turl-1 = {http://journals.sagepub.com/page/pec/collections/ecvp-abstracts/index/ecvp-2017}}\n\n
Modern digital cinema supports much higher frame rates (HFR) than the traditional 24 frames per second (fps). Theoretically, higher fidelity should allow viewers to see more detail. We filmed image sequences of a male and female actor (in different costume) at all combinations of two resolutions (2k and 4k), three frame rates (24, 48 and 60 fps), and two shutter angles (180° and 358°). We asked viewers (N = 26) to watch 20-s movie clips and to rate (1) the image sharpness and (2) the quality of the motion. Motion quality and image sharpness ratings improved with increasing frame rate, especially from 24 to 48 fps. The ratings of sharpness for the 180° shutter angle were higher than for 358°, consistent with the expectation of more motion blur in the latter. The benefit of higher resolution depended on frame rate: at 24 fps, ratings of sharpness for the 4k sequences were similar to, or even lower than, ratings for the 2k sequences. We propose that motion blur was more apparent in the low frame rate 4k imagery because it could be compared with high-resolution, static, portions of the same image. Our results show that naïve observers perceive enhanced detail in moving fabrics and costumes in HFR film. This improved perception of detail could underlie both the positive and negative reactions to HFR film, depending on the nature of the content and whether it lends itself to such high fidelity.
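A note for readers unfamiliar with the shutter-angle convention: per-frame exposure time is (angle/360)/fps, so at a fixed frame rate a 358° shutter exposes each frame nearly twice as long as a 180° shutter, producing more motion blur. A few lines of Python make the arithmetic concrete (the helper name is ours, not from the paper):

```python
# Shutter angle -> per-frame exposure: exposure = (angle / 360) / fps.
# At a fixed frame rate, a wider shutter angle means a longer exposure per
# frame and therefore more motion blur (the helper name is ours).
def exposure_ms(shutter_angle_deg, fps):
    return (shutter_angle_deg / 360.0) / fps * 1000.0

for fps in (24, 48, 60):
    for angle in (180, 358):
        print(f"{fps} fps @ {angle} deg -> {exposure_ms(angle, fps):.1f} ms")
# 24 fps: 180 deg ~ 20.8 ms vs 358 deg ~ 41.4 ms, i.e. nearly double the blur.
```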
Effect of frame rate on vection strength. Fujii, Y., Seno, T., & Allison, R. S. In Fechner Day 2017 Conference Proceedings, The 33rd Annual Meeting of the International Society for Psychophysics, 22-26 October 2017, Fukuoka, Japan, volume 33, page 242. 2017.
@incollection{Fujii:yb,\n\tabstract = {We examined effect of frame rate of optical flow on vection strength. Downward (Experiment 1) and expanding (Experiment 2) grating movies were used as stimuli to induce vection. Frame rates were controlled in seven conditions (3, 4, 6, 12, 20, 30 and 60), and vection strength were measured with three indices (latency of vection onset, total duration time and subjective magnitude). We hypothesized that higher frame rate should induce stronger vection because low frame rate cause artifacts such as judder and motion blur. The results of both experiment clearly showed that vection strength increased with increasing frame rate, however, the rate of increase were not constant and saturated in the high range.},\n\tannote = {Fukuoka},\n\tauthor = {Fujii, Y. and Seno, T. and Allison, R. S.},\n\tbooktitle = {Fechner Day 2017 Conference Proceedings, The 33rd Annual Meeting of the International Society for Psychophysics, 22-26 October 2017, Fukuoka, Japan},\n\tdate-added = {2018-09-18 10:09:52 -0400},\n\tdate-modified = {2018-09-18 10:09:52 -0400},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tpages = {242},\n\ttitle = {Effect of frame rate on vection strength},\n\tvolume = {33},\n\tyear = {2017}}\n\n
We examined the effect of the frame rate of optic flow on vection strength. Downward (Experiment 1) and expanding (Experiment 2) grating movies were used as stimuli to induce vection. Frame rate was controlled across seven conditions (3, 4, 6, 12, 20, 30 and 60 fps), and vection strength was measured with three indices (latency of vection onset, total duration, and subjective magnitude). We hypothesized that higher frame rates should induce stronger vection because low frame rates cause artifacts such as judder and motion blur. The results of both experiments clearly showed that vection strength increased with increasing frame rate; however, the rate of increase was not constant and saturated in the high range.
Perceived Vection and Postural Sway: A Behavioural Response to Virtual Reality. Kio, O. G., Fujii, Y., Wilcox, L. M., Au, D., & Allison, R. S. In 6th International Conference on Visually Induced Motion Sensations, page 13. 2017.
@incollection{Kio:aa,\n\tabstract = {The quality of stereoscopic 3D content is a major determinant for immersion and user experience in virtual reality (VR).  Thus it is important that the effectiveness of stereoscopic 3D content parameters be assessed behaviourally. A typical behavioural response to VR is vection, the visually - induced perception of self - motion elicited by moving scenes. In this work  we investigate how participants' vection and postural sway vary with the simulated optical flow speed and the  virtual camera' s frame rate and  exposure time while viewing depictions of movement through a realistic virtual  environment. We compare the degree of postural sway obtained from the centre - of - pressure data of a Nintendo Wii  Balance Board with subjective vection scores. Results obtained from this study show that average perceived vection increases with increase in frame rate and simulated speed but not with exposure time. We also found that perceived vection in VR does not induce significant postural sway in typical 3D cinema scenarios. We are currently conducting  experiments to confirm whether this finding holds for immersive virtual reality scenarios where screen edge and other  surround cues are eliminated.  },\n\tannote = {6th International Conference on\nVISUALLY INDUCED MOTION SENSATIONS\nNov 16\n-\n17, 2017\nToronto, Canada},\n\tauthor = {Kio, O. G. and Fujii, Y. and Wilcox, L. M. and Au, D. and Allison, R. S.},\n\tbooktitle = {6th International Conference on Visually induced Motion Sensations},\n\tdate-added = {2018-04-22 12:33:00 +0000},\n\tdate-modified = {2018-04-22 12:33:00 +0000},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tpages = {13},\n\ttitle = {Perceived Vection and Postural Sway: A Behavioural Response to Virtual Reality},\n\turl = {https://vims2017.org/program-1},\n\turl-1 = {https://vims2017.org/program-1},\n\tyear = {2017},\n\turl-1 = {https://vims2017.org/program-1}}\n\n
\n
\n\n\n
\n The quality of stereoscopic 3D content is a major determinant of immersion and user experience in virtual reality (VR). Thus it is important that the effectiveness of stereoscopic 3D content parameters be assessed behaviourally. A typical behavioural response to VR is vection, the visually-induced perception of self-motion elicited by moving scenes. In this work we investigate how participants' vection and postural sway vary with the simulated optical flow speed and the virtual camera's frame rate and exposure time while viewing depictions of movement through a realistic virtual environment. We compare the degree of postural sway obtained from the centre-of-pressure data of a Nintendo Wii Balance Board with subjective vection scores. Results obtained from this study show that average perceived vection increases with frame rate and simulated speed but not with exposure time. We also found that perceived vection in VR does not induce significant postural sway in typical 3D cinema scenarios. We are currently conducting experiments to confirm whether this finding holds for immersive virtual reality scenarios where screen edge and other surround cues are eliminated.\n
\n\n\n
\n\n\n
\n Effects of motion picture frame rate on material and texture appearance. Allison, R. S., Fujii, Y., & Wilcox, L. M. In Journal of Vision (VSS Abstracts), volume 17, pages 418. 2017.\n
\n
@incollection{Allison:aa,\n\tauthor = {Allison, R. S. and Fujii, Y. and Wilcox, L. M.},\n\tbooktitle = {Journal of Vision (VSS Abstracts)},\n\tdate-added = {2017-09-04 13:17:48 +0000},\n\tdate-modified = {2017-11-27 01:07:18 +0000},\n\tdoi = {10.1167/17.10.418},\n\tkeywords = {Image Quality},\n\tnumber = {10},\n\tpages = {418},\n\ttitle = {Effects of motion picture frame rate on material and texture appearance},\n\turl-1 = {http://dx.doi.org/10.1167/17.10.418%20},\n\tvolume = {17},\n\tyear = {2017},\n\turl-1 = {https://doi.org/10.1167/17.10.418}}\n\n
\n
\n\n\n\n
\n\n\n
\n Does Task-Specific Experience Improve Altitude Estimation in Virtual Environments? Hartle, B., Deas, L. M., Allison, R. S., Irving, E. L., Glaholt, M., & Wilcox, L. M. In Centre for Vision Research International Conference on Vision in the Real World. 2017.\n
\n
@incollection{Hartle:2017ab,\n\tabstract = {The potential advantages of stereopsis for aircrew have long been of interest in aviation, but there is no clear consensus on its impact. One potential reason for this is the redundancy of monocular and binocular sources of depth information in natural environments. In a previous study, we assessed the impact of stereopsis on distance judgements during simulated helicopter low hover with the observer looking out of a helicopter, past the skid, to the ground plane. Specifically, we assessed relative distance estimation under stereoscopic (S3D) and monocular viewing conditions. We varied the availability of monocular depth cues by rendering the ground with four types of terrain. On each trial observers (n=14) were asked to estimate the distance between the skid and the ground plane relative to the distance from themselves to the skid. Our results showed that performance was more accurate in S3D than monocular conditions. Furthermore, under monocular viewing conditions, observers scaled their estimates with distance, but tended to strongly underestimate relative to predictions. These results support the hypothesis that stereopsis facilitates judgement of relative distance in low hover.\nHowever, the above results could reflect the fact that our observers were inexperienced. Task-specific training that aircrew receive could diminish potential benefits of stereopsis. In the current study, we replicated the previous experiment using trained aircrew (n=32). Aircrew demonstrated higher accuracy in the monocular conditions relative to inexperienced observers, though the benefit of binocular viewing was still observed. Interestingly, when monocular information was unreliable, judgements made by aircrew were less precise than in conditions with reliable monocular cues. Overall, the presence of stereopsis improved accuracy of relative distance judgements in low hover for inexperienced observers and trained aircrew. Despite higher accuracy in monocular conditions for aircrew, their estimates were affected by unreliable monocular information while inexperienced observers were not. These results highlight the impact of task-specific training on the accuracy of depth judgements in rotary wing flight operations. \n},\n\tannote = {York June 2017},\n\tauthor = {Hartle, B. and Deas, L. M. and Allison, R. S. and Irving, E. L. and Glaholt, M. and Wilcox, L. M.},\n\tbooktitle = {Centre for Vision Research International Conference on Vision in the Real World},\n\tdate-added = {2017-09-03 18:22:12 +0000},\n\tdate-modified = {2017-09-03 18:22:39 +0000},\n\tkeywords = {Stereopsis},\n\ttitle = {Does Task-Specific Experience Improve Altitude Estimation in Virtual Environments?},\n\tyear = {2017}}\n\n
\n
\n\n\n
\n The potential advantages of stereopsis for aircrew have long been of interest in aviation, but there is no clear consensus on its impact. One potential reason for this is the redundancy of monocular and binocular sources of depth information in natural environments. In a previous study, we assessed the impact of stereopsis on distance judgements during simulated helicopter low hover with the observer looking out of a helicopter, past the skid, to the ground plane. Specifically, we assessed relative distance estimation under stereoscopic (S3D) and monocular viewing conditions. We varied the availability of monocular depth cues by rendering the ground with four types of terrain. On each trial observers (n=14) were asked to estimate the distance between the skid and the ground plane relative to the distance from themselves to the skid. Our results showed that performance was more accurate in S3D than monocular conditions. Furthermore, under monocular viewing conditions, observers scaled their estimates with distance, but tended to strongly underestimate relative to predictions. These results support the hypothesis that stereopsis facilitates judgement of relative distance in low hover. However, the above results could reflect the fact that our observers were inexperienced. Task-specific training that aircrew receive could diminish potential benefits of stereopsis. In the current study, we replicated the previous experiment using trained aircrew (n=32). Aircrew demonstrated higher accuracy in the monocular conditions relative to inexperienced observers, though the benefit of binocular viewing was still observed. Interestingly, when monocular information was unreliable, judgements made by aircrew were less precise than in conditions with reliable monocular cues. Overall, the presence of stereopsis improved accuracy of relative distance judgements in low hover for inexperienced observers and trained aircrew. Despite higher accuracy in monocular conditions for aircrew, their estimates were affected by unreliable monocular information while inexperienced observers were not. These results highlight the impact of task-specific training on the accuracy of depth judgements in rotary wing flight operations.\n
\n\n\n
\n\n\n
\n The contribution of monocular and binocular cues to altitude estimation in aircrew. Hartle, B., Deas, L. M., Allison, R. S., Irving, E. L., Glaholt, M., & Wilcox, L. M. In Journal of Vision (2017 OSA Fall Vision Meeting Annual Meeting Abstracts), volume 17, pages 42. 2017.\n
\n
@incollection{Hartle:2017aa,\n\tabstract = {The contribution of stereopsis to aviation has long been a topic of interest, but there is no consensus on its impact. This is in part due to the diversity of methodologies and tasks used, but also reflects the availability of monocular depth cues that can support altitude estimation. Here, we evaluated the contribution of monocular and binocular depth cues to altitude estimation in a simulated low hover task commonly performed by helicopter aircrew. Using a stereoscopic display, trained aircrew (n=32) estimated the altitude from a skid to the ground plane under stereoscopic and monocular viewing conditions. The ground plane was rendered with four textures at a range of altitudes. Altitude estimation was more accurate in stereoscopic than monocular conditions. Under monocular viewing, observers scaled their estimates with distance, but substantially underestimated the amount of depth. Comparison of these results with those obtained using na{\\"\\i}ve observers (Deas et al., in press) showed that the aircrew were more accurate in monocular test conditions than na{\\"\\i}ve observers, but the performance of both groups was significantly improved when stereoscopic depth information was available. This pattern of results suggests that while aircrew can learn to capitalize on monocular depth cues for specific in-flight tasks, stereopsis makes a substantial contribution to operational performance for rotary wing altitude estimation.},\n\tannote = {2017 OSA FVM Annual Meeting in Washington, DC\n\nHartle, B abd Deas1, deas.lesley@gmail.com\nRobert S. Allison2, allison@cse.yorku.ca\nElizabeth L. Irving3, Elizabeth.irving@uwaterloo.ca\nMackenzie Glaholt4, mackenzie.glaholt@drdc-rddc.gc.ca\nLaurie M. Wilcox1, lwilcox@yorku.ca\n},\n\tauthor = {Hartle, B. and Deas, L. M. and Allison, R. S. and Irving, E. L. and Glaholt, M. and Wilcox, L. M.},\n\tbooktitle = {Journal of Vision (2017 OSA Fall Vision Meeting Annual Meeting Abstracts)},\n\tdate-added = {2017-09-03 18:16:36 +0000},\n\tdate-modified = {2019-02-03 09:10:06 -0500},\n\tdoi = {10.1167/17.15.42},\n\tkeywords = {Stereopsis},\n\tnumber = {15},\n\tpages = {42},\n\ttitle = {The contribution of monocular and binocular cues to altitude estimation in aircrew},\n\turl = {http://jov.arvojournals.org/article.aspx?articleid=2667401},\n\turl-1 = {http://jov.arvojournals.org/article.aspx?articleid=2667401},\n\turl-2 = {https://dx.doi.org/10.1167/17.15.42},\n\tvolume = {17},\n\tyear = {2017},\n\turl-1 = {http://jov.arvojournals.org/article.aspx?articleid=2667401},\n\turl-2 = {https://doi.org/10.1167/17.15.42}}\n\n
\n
\n\n\n
\n The contribution of stereopsis to aviation has long been a topic of interest, but there is no consensus on its impact. This is in part due to the diversity of methodologies and tasks used, but also reflects the availability of monocular depth cues that can support altitude estimation. Here, we evaluated the contribution of monocular and binocular depth cues to altitude estimation in a simulated low hover task commonly performed by helicopter aircrew. Using a stereoscopic display, trained aircrew (n=32) estimated the altitude from a skid to the ground plane under stereoscopic and monocular viewing conditions. The ground plane was rendered with four textures at a range of altitudes. Altitude estimation was more accurate in stereoscopic than monocular conditions. Under monocular viewing, observers scaled their estimates with distance, but substantially underestimated the amount of depth. Comparison of these results with those obtained using naïve observers (Deas et al., in press) showed that the aircrew were more accurate in monocular test conditions than naïve observers, but the performance of both groups was significantly improved when stereoscopic depth information was available. This pattern of results suggests that while aircrew can learn to capitalize on monocular depth cues for specific in-flight tasks, stereopsis makes a substantial contribution to operational performance for rotary wing altitude estimation.\n
\n\n\n
\n\n\n
\n The impact of stereoscopic 3D depth cues on distance estimation in a simulated low hover scenario. Deas, L., Allison, R. S., Hartle, H., Irving, E. L., Glaholt, M., & Wilcox, L. M. In Aerospace Medical Association Annual Scientific Meeting, Aerospace Medicine and Human Performance, Vol. 88, No. 3, March 2017, volume 88, pages 259-260. 2017.\n
\n
@incollection{Deas:fk,\n\tabstract = {INTRODUCTION: Stereopsis is the ability to perceive depth based on binocular disparity and is believed to be important for certain aircrew tasks. For example, functional stereoscopic vision has been shown to provide an advantage for boom operators during certain aerial refueling scenarios. We propose that stereopsis will aid depth estimation in rotary-wing hover maneuvers. To test this hypothesis, we assessed performance on a distance estimation task  under stereoscopic (S3D) and monocular (2D) viewing conditions. \nMETHODS: Four types of S3D still images (3 terrains, one control pattern without 2D cues) were simulated from the point of view of a Flight Engineer looking downward (45deg) out a helicopter door. The end of a helicopter skid was visible and provided a consistent reference point in all images. Test altitudes from the skid to the ground ranged from 0-5ft with a 2'' step size. Observers (n=14, 7 female) estimated the distance between the skid and the ground. To do this, they assigned a value to represent the distance between their head position and the skid, and judged the distance from the skid to the ground relative to that value. All observers participated in S3D and 2D viewing conditions. Normalized data was analyzed using a linear mixed-effects model with full maximum-likelihood estimation methods. \nRESULTS: Estimates of relative distance were significantly affected by the viewing mode: performance was significantly more accurate in the S3D than in the 2D conditions. When terrains were viewed monocularly, observers did scale their estimates with distance, but were well below expected values. \nDISCUSSION: These results support the hypothesis that stereopsis facilitates judgements of \nrelative distance in simulated low hover scenarios. Future experiments will determine if the advantage afforded by stereopsis remains at larger distances (high hover), and if it is maintained when additional 2D information (e.g. relative size) is available.\nLearning Objectives: \n1.      The participant will learn about operational requirements for stereo-scopic depth perception in the context of rotary wing operations.},\n\tannote = {Denver CO - May 1-4, 2017},\n\tauthor = {Deas, L. and Allison, R. S. and Hartle, H. and Irving, E. L. and Glaholt, M. and Wilcox, L. M.},\n\tbooktitle = {Aerospace Medical Association Annual Scientific Meeting, Aerospace Medicine and Human Performance, Vol. 88, No. 3 March 2017},\n\tdate-added = {2017-06-08 17:45:55 +0000},\n\tdate-modified = {2018-11-25 13:29:33 -0500},\n\tkeywords = {Stereopsis},\n\tnumber = {3},\n\tpages = {259-260},\n\ttitle = {The impact of stereoscopic 3d depth cues on distance estimation in a simulated low hover scenario},\n\tvolume = {88},\n\tyear = {2017}}\n\n
\n
\n\n\n
\n INTRODUCTION: Stereopsis is the ability to perceive depth based on binocular disparity and is believed to be important for certain aircrew tasks. For example, functional stereoscopic vision has been shown to provide an advantage for boom operators during certain aerial refueling scenarios. We propose that stereopsis will aid depth estimation in rotary-wing hover maneuvers. To test this hypothesis, we assessed performance on a distance estimation task under stereoscopic (S3D) and monocular (2D) viewing conditions. METHODS: Four types of S3D still images (3 terrains, one control pattern without 2D cues) were simulated from the point of view of a Flight Engineer looking downward (45deg) out a helicopter door. The end of a helicopter skid was visible and provided a consistent reference point in all images. Test altitudes from the skid to the ground ranged from 0-5ft with a 2'' step size. Observers (n=14, 7 female) estimated the distance between the skid and the ground. To do this, they assigned a value to represent the distance between their head position and the skid, and judged the distance from the skid to the ground relative to that value. All observers participated in S3D and 2D viewing conditions. Normalized data was analyzed using a linear mixed-effects model with full maximum-likelihood estimation methods. RESULTS: Estimates of relative distance were significantly affected by the viewing mode: performance was significantly more accurate in the S3D than in the 2D conditions. When terrains were viewed monocularly, observers did scale their estimates with distance, but were well below expected values. DISCUSSION: These results support the hypothesis that stereopsis facilitates judgements of relative distance in simulated low hover scenarios. Future experiments will determine if the advantage afforded by stereopsis remains at larger distances (high hover), and if it is maintained when additional 2D information (e.g. relative size) is available. Learning Objectives: 1. The participant will learn about operational requirements for stereoscopic depth perception in the context of rotary wing operations.\n
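The analysis named in the METHODS section, a linear mixed-effects model fit with full maximum likelihood, maps directly onto standard statistical tooling. A minimal sketch using statsmodels follows; the file name, the column names (estimate, viewing, terrain, observer), and the random-intercept-per-observer structure are assumptions for illustration, not the authors' exact model specification.

```python
# Sketch: linear mixed-effects model fit by full maximum likelihood
# (reml=False), as described in the abstract. Column names and the data
# file are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("hover_estimates.csv")  # hypothetical file: one row per trial

# Fixed effects: viewing mode (S3D vs 2D) crossed with terrain type;
# random intercept per observer.
model = smf.mixedlm("estimate ~ viewing * terrain", data, groups=data["observer"])
result = model.fit(reml=False)  # reml=False requests full maximum likelihood
print(result.summary())
```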
\n\n\n
\n\n\n
\n Statistical procedure for assessing the relative performance of codecs using the flicker paradigm. Cutone, M. D., Wilcox, L. M., & Allison, R. S. In Centre for Vision Research International Conference on Vision in the Real World, pages 17. 2017.\n
\n
@incollection{Cutone:2017aa,\n\tabstract = {The transmission of digital image content frequently involves some form of compression to reduce the demand and complexity of the communication medium. Image data is compressed via a codec, which removes information that is either redundant or largely imperceptible to reduce the bit-rate required to transmit the image to the target device. In so-called `lossy' compression, data from the original image signal cannot be completely recovered upon decoding, which can produce perceptible artifacts or noise. Psychophysical methods exist to assess artefact perceptibility and subjective preference following compression. A current industry standard (ISO/IEC-29170-2 Annex B) specifies a two-alternative forced choice procedure to measure artefact visibility.  In this protocol, two versions of the same image are presented side-by-side on a display. In one location an original (reference) and compressed image are temporally interleaved, while in the other location the original is presented repeatedly. Detectable differences between the original and compressed images will appear as localized flicker and observers are asked to indicate which of the images appears to flicker.  The recommended statistical procedures outlined in the standards document are descriptive and do not assess the relative performance between codecs. Here, we describe a statistical procedure that can be used to evaluate the relative performance of different codecs based on the ISO/IEC protocol results. },\n\tannote = {Toronto June 2017},\n\tauthor = {Cutone, M. D. and Wilcox, L. M. and Allison, R. S.},\n\tbooktitle = {Centre for Vision Research International Conference on Vision in the Real World},\n\tdate-added = {2017-06-08 17:40:28 +0000},\n\tdate-modified = {2017-09-03 18:24:59 +0000},\n\tkeywords = {Image Quality},\n\tpages = {17},\n\ttitle = {Statistical procedure for assessing the relative performance of codecs using the flicker paradigm},\n\tyear = {2017}}\n\n
\n
\n\n\n
\n The transmission of digital image content frequently involves some form of compression to reduce the demand and complexity of the communication medium. Image data is compressed via a codec, which removes information that is either redundant or largely imperceptible to reduce the bit-rate required to transmit the image to the target device. In so-called `lossy' compression, data from the original image signal cannot be completely recovered upon decoding, which can produce perceptible artifacts or noise. Psychophysical methods exist to assess artefact perceptibility and subjective preference following compression. A current industry standard (ISO/IEC-29170-2 Annex B) specifies a two-alternative forced choice procedure to measure artefact visibility. In this protocol, two versions of the same image are presented side-by-side on a display. In one location an original (reference) and compressed image are temporally interleaved, while in the other location the original is presented repeatedly. Detectable differences between the original and compressed images will appear as localized flicker and observers are asked to indicate which of the images appears to flicker. The recommended statistical procedures outlined in the standards document are descriptive and do not assess the relative performance between codecs. Here, we describe a statistical procedure that can be used to evaluate the relative performance of different codecs based on the ISO/IEC protocol results. \n
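The abstract does not spell out the statistical procedure itself, but results from the ISO/IEC 29170-2 flicker protocol are counts of correct two-alternative forced-choice responses, so one plausible family of analyses tests each codec's detection rate against the 50% chance rate and then compares codecs directly. The sketch below illustrates that family with made-up counts; it is not necessarily the procedure the authors propose.

```python
# Sketch: analyzing 2AFC flicker-detection counts for two hypothetical codecs.
from scipy.stats import binomtest
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: correct flicker identifications out of n trials.
codec_a_correct, codec_a_trials = 112, 200
codec_b_correct, codec_b_trials = 131, 200

# Is each codec's artefact visibility above the 2AFC chance rate of 0.5?
for name, k, n in [("A", codec_a_correct, codec_a_trials),
                   ("B", codec_b_correct, codec_b_trials)]:
    p = binomtest(k, n, 0.5, alternative="greater").pvalue
    print(f"codec {name}: {k}/{n} correct, p = {p:.4f} vs. chance")

# Relative performance: do the two codecs differ in detection rate?
stat, p = proportions_ztest([codec_a_correct, codec_b_correct],
                            [codec_a_trials, codec_b_trials])
print(f"codec A vs B: z = {stat:.2f}, p = {p:.4f}")
```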
\n\n\n
\n\n\n
\n Psychophysical response to virtual reality: Vection and postural sway. Kio, O. G., Fujii, Y., Wilcox, L. M., Au, D., & Allison, R. S. In Centre for Vision Research International Conference on Vision in the Real World, pages 24. 2017.\n
\n
@incollection{Kio:2017wf,\n\tannote = {Toronto June 2017},\n\tauthor = {Kio, O. G. and Fujii, Y. and Wilcox, L. M. and Au, D. and Allison, R. S.},\n\tbooktitle = {Centre for Vision Research International Conference on Vision in the Real World},\n\tdate-added = {2017-06-08 17:40:28 +0000},\n\tdate-modified = {2017-09-03 18:24:50 +0000},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tpages = {24},\n\ttitle = {Psychophysical response to virtual reality: Vection and postural sway},\n\tyear = {2017}}\n\n
\n
\n\n\n
\n\n\n
\n\n\n
\n Subjective evaluation of image quality. Sudhama, A., Deas, L. M., Goel, J., Allison, R. S., & Wilcox, L. M. In Centre for Vision Research International Conference on Vision in the Real World, pages 33. 2017.\n
\n
@incollection{Aishwarya-Sudhama:2017jk,\n\tabstract = {Advances in high-dynamic range, wide-colour-gamut displays have created unparalleled opportunities for improving image quality, but have also driven rapid expansion of data bandwidth requirements. To meet these needs, there is increasing demand for low-impairment display stream compression (DSC). The goal of low-impairment DSC is to ensure that the final product meets demanding compression targets, while being perceptually identical to the original image. Objective approaches, based on error metrics, are useful to a point, but cannot reliably predict the visibility of artefacts near the limits of detection. Thus, subjective assessments are required to confirm that compression is visually lossless, a task that is made more complex by the fact that the benchmarks (e.g., what is visually lossless) are not well defined, and by a lack of theory linking these perceptual outcomes to objective error metrics. Subjective quality measures can be dramatically affected by choice of methodology, content and participant experience. Here we will discuss this issue in the context of our recent experiments in which we evaluated leading low impairment algorithms using a common image set, and a side-by-side flicker detection paradigm (ISO/IEC 29170-2). In follow-up trials we evaluated these same codecs using a modified motion-based paradigm and show that in this more realistic viewing scenario, viewers are often less sensitive to compression-related artefacts.},\n\tannote = {Toronto June 2017},\n\tauthor = {Sudhama, A. and Deas, L. M. and Goel, J. and Allison, R. S. and Wilcox, L. M.},\n\tbooktitle = {Centre for Vision Research International Conference on Vision in the Real World},\n\tdate-added = {2017-06-08 17:40:28 +0000},\n\tdate-modified = {2017-09-03 18:24:20 +0000},\n\tkeywords = {Image Quality},\n\tpages = {33},\n\ttitle = {Subjective evaluation of image quality},\n\tyear = {2017}}\n\n
\n
\n\n\n
\n Advances in high-dynamic range, wide-colour-gamut displays have created unparalleled opportunities for improving image quality, but have also driven rapid expansion of data bandwidth requirements. To meet these needs, there is increasing demand for low-impairment display stream compression (DSC). The goal of low-impairment DSC is to ensure that the final product meets demanding compression targets, while being perceptually identical to the original image. Objective approaches, based on error metrics, are useful to a point, but cannot reliably predict the visibility of artefacts near the limits of detection. Thus, subjective assessments are required to confirm that compression is visually lossless, a task that is made more complex by the fact that the benchmarks (e.g., what is visually lossless) are not well defined, and by a lack of theory linking these perceptual outcomes to objective error metrics. Subjective quality measures can be dramatically affected by choice of methodology, content and participant experience. Here we will discuss this issue in the context of our recent experiments in which we evaluated leading low impairment algorithms using a common image set, and a side-by-side flicker detection paradigm (ISO/IEC 29170-2). In follow-up trials we evaluated these same codecs using a modified motion-based paradigm and show that in this more realistic viewing scenario, viewers are often less sensitive to compression-related artefacts.\n
\n\n\n
\n\n\n
\n Tolerance of Latency in Controlling a Quadcopter using a Head Mounted Display. Zhao, J., & Allison, R. S. In Centre for Vision Research International Conference on Vision in the Real World, pages 39. 2017.\n
\n
@incollection{Zhao:2017aa,\n\tannote = {Toronto June 2017},\n\tauthor = {Zhao, J. and Allison, R. S.},\n\tbooktitle = {Centre for Vision Research International Conference on Vision in the Real World},\n\tdate-added = {2017-06-08 17:40:28 +0000},\n\tdate-modified = {2017-09-03 18:25:26 +0000},\n\tkeywords = {Augmented & Virtual Reality},\n\tpages = {39},\n\ttitle = {Tolerance of Latency in Controlling a Quadcopter using a Head Mounted Display},\n\tyear = {2017}}\n\n
\n
\n\n\n\n
\n\n\n
\n Perception in Stereoscopic 3D Media. Allison, R. S. In Centre for Vision Research International Conference on Vision in the Real World, pages 11. 2017.\n
\n
@incollection{Allison:yu,\n\tabstract = {State-of-the-art stereoscopic displays and virtual reality systems offer the promise of new immersive experiences. They also pose significant perceptual human factors challenges. We have been studying the sensitivity and tolerance of viewers to the key parameters content makers use to produce stereoscopic 3D media. These parameters potentially affect both the perception of depth in a 3D scene, and our sense of motion through it. I will review progress toward understanding when and how these artistic decisions impact a viewer's perceptual experience.},\n\tauthor = {Allison, R. S.},\n\tbooktitle = {Centre for Vision Research International Conference on Vision in the Real World},\n\tdate-added = {2017-06-08 17:40:28 +0000},\n\tdate-modified = {2017-06-08 17:40:28 +0000},\n\tkeywords = {Stereopsis},\n\tpages = {11},\n\ttitle = {Perception in Stereoscopic 3D Media},\n\tyear = {2017}}\n\n
\n
\n\n\n
\n State-of-the-art stereoscopic displays and virtual reality systems offer the promise of new immersive experiences. They also pose significant perceptual human factors challenges. We have been studying the sensitivity and tolerance of viewers to the key parameters content makers use to produce stereoscopic 3D media. These parameters potentially affect both the perception of depth in a 3D scene, and our sense of motion through it. I will review progress toward understanding when and how these artistic decisions impact a viewer's perceptual experience.\n
\n\n\n
\n\n\n
\n Subjective assessment and the criteria for visually lossless compression (Invited). Wilcox, L. M., Allison, R. S., & Goel, J. In Electronic Imaging: Human Vision and Electronic Imaging panel presentation, pages HVEI-129. 2017.\n
\n
@incollection{Wilcox:aa,\n\tauthor = {Wilcox, L. M. and Allison, R. S. and Goel, J.},\n\tbooktitle = {Electronic Imaging: Human Vision and Electronic Imaging panel presentation},\n\tdate-added = {2017-04-14 15:16:12 +0000},\n\tdate-modified = {2017-04-14 15:16:12 +0000},\n\tkeywords = {Image Quality},\n\tpages = {HVEI-129},\n\ttitle = {Subjective assessment and the criteria for visually lossless compression (Invited)},\n\tyear = {2017}}\n\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n inproceedings\n \n \n (7)\n \n \n
\n
\n \n \n
\n Real-time head gesture recognition on head-mounted displays using cascaded hidden Markov models. Zhao, J., & Allison, R. S. In 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pages 2361–2366, October 2017.\n
\n
@inproceedings{zhao_real-time_2017,\n\tabstract = {Head gesture is a natural means of face-to-face communication between people but the recognition of head gestures in the context of virtual reality and use of head gesture as an interface for interacting with virtual avatars and virtual environments have been rarely investigated. In the current study, we present an approach for real-time head gesture recognition on head-mounted displays using Cascaded Hidden Markov Models. We conducted two experiments to evaluate our proposed approach. In experiment 1, we trained the Cascaded Hidden Markov Models and assessed the offline classification performance using collected head motion data. In experiment 2, we characterized the real-time performance of the approach by estimating the latency to recognize a head gesture with recorded real-time classification data. Our results show that the proposed approach is effective in recognizing head gestures. The method can be integrated into a virtual reality system as a head gesture interface for interacting with virtual worlds.},\n\tannote = {Oct 5-8, 2017 Banff, Canada},\n\tauthor = {Zhao, J. and Allison, R. S.},\n\tbooktitle = {2017 {IEEE} {International} {Conference} on {Systems}, {Man}, and {Cybernetics} ({SMC})},\n\tdate-added = {2019-02-03 08:23:21 -0500},\n\tdate-modified = {2019-04-13 16:09:18 -0400},\n\tdoi = {10.1109/SMC.2017.8122975},\n\tkeywords = {Augmented & Virtual Reality},\n\tmonth = 10,\n\tpages = {2361--2366},\n\ttitle = {Real-time head gesture recognition on head-mounted displays using cascaded hidden {Markov} models},\n\turl = {https://arxiv.org/abs/1707.06691},\n\turl-1 = {https://doi.org/10.1109/SMC.2017.8122975},\n\tyear = {2017},\n\turl-1 = {https://arxiv.org/abs/1707.06691},\n\turl-2 = {https://doi.org/10.1109/SMC.2017.8122975}}\n\n
\n
\n\n\n
\n Head gesture is a natural means of face-to-face communication between people, but the recognition of head gestures in the context of virtual reality, and the use of head gestures as an interface for interacting with virtual avatars and virtual environments, have rarely been investigated. In the current study, we present an approach for real-time head gesture recognition on head-mounted displays using Cascaded Hidden Markov Models. We conducted two experiments to evaluate our proposed approach. In experiment 1, we trained the Cascaded Hidden Markov Models and assessed the offline classification performance using collected head motion data. In experiment 2, we characterized the real-time performance of the approach by estimating the latency to recognize a head gesture with recorded real-time classification data. Our results show that the proposed approach is effective in recognizing head gestures. The method can be integrated into a virtual reality system as a head gesture interface for interacting with virtual worlds.\n
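At recognition time, an approach like this reduces to scoring the observed head-motion sequence under each gesture's model and choosing the best-scoring gesture. The sketch below shows generic discrete-HMM scoring with the scaled forward algorithm; the two toy models, their parameters, and the symbol alphabet are placeholders, and the paper's cascaded architecture is not reproduced here.

```python
# Sketch: classify a quantized head-motion sequence by log-likelihood under
# per-gesture discrete HMMs (scaled forward algorithm). All parameters are
# illustrative placeholders, not the trained models from the paper.
import numpy as np

def forward_loglik(obs, start, trans, emit):
    """Log P(obs | HMM) via the scaled forward algorithm."""
    alpha = start * emit[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        loglik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return loglik

# Two toy 2-state models over 3 motion symbols (e.g. left / still / right).
models = {
    "nod": (np.array([0.6, 0.4]),                           # initial probs
            np.array([[0.7, 0.3], [0.3, 0.7]]),             # transitions
            np.array([[0.1, 0.2, 0.7], [0.7, 0.2, 0.1]])),  # emissions
    "shake": (np.array([0.5, 0.5]),
              np.array([[0.6, 0.4], [0.4, 0.6]]),
              np.array([[0.45, 0.10, 0.45], [0.45, 0.10, 0.45]])),
}

observed = [2, 0, 2, 0, 2]  # hypothetical symbol stream from the head tracker
scores = {g: forward_loglik(observed, *params) for g, params in models.items()}
print("recognized gesture:", max(scores, key=scores.get))
```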
\n\n\n
\n\n\n
\n Self-motion perception facilities at York University. Allison, R. S., Harris, L. R., & Jenkin, M. R. M. In Fechner Day 2017 Conference Proceedings, The 33rd Annual Meeting of the International Society for Psychophysics, 22-26 October 2017, Fukuoka, Japan, volume 33, pages 360-361, 2017.\n
\n
@inproceedings{Allison:ac,\n\tabstract = {\nYork University has a long history of research in the perception of self-motion and orientation using purpose-built apparatus. Recently we developed and installed new facilities including new, more capable versions of Ian Howard's tumbling room and sphere devices: (1) The wide field stereoscopic environment is a projected, computer-generated, virtual environment that completely fills the participant's visual field with edgeless, high-resolution imagery. (2) The new tumbling room allows for full 360 degree rotation of the observer or the visual environment with near perfect visual fidelity. The room walls, floor and ceiling can be removed allowing for locomotion in a cylindrical environment. (3) The sphere environment allows for presenting full-field visual motion displays in pitch, roll or yaw while in a wide range of postures with respect to gravity. This presentation will overview the capabilities and illusions elicited in these devices as well as experiments to cross-validate the devices.\n \n},\n\tannote = {22-26 October 2017, Fukuoka, Japan},\n\tauthor = {Allison, R. S. and Harris, L. R. and Jenkin, M. R. M.},\n\tbooktitle = {Fechner Day 2017 Conference Proceedings, The 33rd Annual Meeting of the International Society for Psychophysics, 22-26 October 2017, Fukuoka, Japan},\n\tdate-added = {2018-09-18 10:10:23 -0400},\n\tdate-modified = {2018-09-18 10:10:23 -0400},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tpages = {360-361},\n\ttitle = {Self-motion perception facilities at York University},\n\tvolume = {33},\n\tyear = {2017}}\n\n
\n
\n\n\n
\n York University has a long history of research in the perception of self-motion and orientation using purpose-built apparatus. Recently we developed and installed new facilities including new, more capable versions of Ian Howard's tumbling room and sphere devices: (1) The wide field stereoscopic environment is a projected, computer-generated, virtual environment that completely fills the participant's visual field with edgeless, high-resolution imagery. (2) The new tumbling room allows for full 360 degree rotation of the observer or the visual environment with near perfect visual fidelity. The room walls, floor and ceiling can be removed allowing for locomotion in a cylindrical environment. (3) The sphere environment allows for presenting full-field visual motion displays in pitch, roll or yaw while in a wide range of postures with respect to gravity. This presentation will overview the capabilities and illusions elicited in these devices as well as experiments to cross-validate the devices. \n
\n\n\n
\n\n\n
\n Estimation of Altitude in Stereoscopic-3D Versus 2D Real-world Scenes. Deas, L. M., Allison, R. S., Hartle, B., Irving, E. L., Glaholt, M., & Wilcox, L. M. In IS&T International Symposium on Electronic Imaging 2017, Stereoscopic Displays and Applications XXVIII, volume 2017, pages 41–47, January 2017.\n
\n
@inproceedings{deas_estimation_2017,\n\tabstract = {Research on the role of human stereopsis has largely focused on laboratory studies that control or eliminate other cues to depth. However, in everyday environments we rarely rely on a single source of depth information. Despite this, few studies have assessed the impact of binocular\nvision on depth judgements in real-world scenarios presented in simulation. Here we conducted a series of experiments to determine if, and to what extent, stereoscopic depth provides a benefit for tasks commonly performed by helicopter aircrew. We assessed the impact of binocular vision and\nstereopsis on perception of (1) relative and (2) absolute distance above the ground (altitude) using natural and simulated stereoscopic-3D (S3D) imagery. The results showed that, consistent with the literature, binocular vision provides very weak input to absolute altitude estimates at high\naltitudes (10-100ft). In contrast, estimates of relative altitude at low altitudes (0-5ft) were critically dependent on stereopsis, irrespective of terrain type. These findings are consistent with the view that stereopsis provides important information for altitude judgments when close to\nthe ground; while at high altitudes these judgments are based primarily on the perception of 2D cues.},\n\tannote = {Winner of Best Paper (stereoscopic presentation)\n\nStereoscopic Dsiplays and Applications 2017},\n\tauthor = {Deas, Lesley M. and Allison, Robert S. and Hartle, Brittney and Irving, Elizabeth L. and Glaholt, Mackenzie and Wilcox, Laurie M.},\n\tbooktitle = {{IS\\&T} International Symposium on Electronic Imaging 2017, Stereoscopic Displays and Applications XXVIII},\n\tdate-added = {2017-07-26 18:30:36 +0000},\n\tdate-modified = {2017-11-27 01:29:19 +0000},\n\tdoi = {10.2352/ISSN.2470-1173.2017.5.SD&A-355},\n\tjournal = {Electronic Imaging},\n\tkeywords = {Stereopsis},\n\tmonth = 01,\n\tnumber = {5},\n\tpages = {41--47},\n\ttitle = {Estimation of {Altitude} in {Stereoscopic}-3D {Versus} 2D {Real}-world {Scenes}},\n\turl = {http://www.ingentaconnect.com/content/ist/ei/2017/00002017/00000005/art00005},\n\turl-1 = {http://www.ingentaconnect.com/content/ist/ei/2017/00002017/00000005/art00005},\n\turl-2 = {http://dx.doi.org/10.2352/ISSN.2470-1173.2017.5.SD&A-355},\n\tvolume = {2017},\n\tyear = {2017},\n\turl-1 = {http://www.ingentaconnect.com/content/ist/ei/2017/00002017/00000005/art00005},\n\turl-2 = {https://doi.org/10.2352/ISSN.2470-1173.2017.5.SD&A-355}}\n\n
\n
\n\n\n
\n Research on the role of human stereopsis has largely focused on laboratory studies that control or eliminate other cues to depth. However, in everyday environments we rarely rely on a single source of depth information. Despite this, few studies have assessed the impact of binocular vision on depth judgements in real-world scenarios presented in simulation. Here we conducted a series of experiments to determine if, and to what extent, stereoscopic depth provides a benefit for tasks commonly performed by helicopter aircrew. We assessed the impact of binocular vision and stereopsis on perception of (1) relative and (2) absolute distance above the ground (altitude) using natural and simulated stereoscopic-3D (S3D) imagery. The results showed that, consistent with the literature, binocular vision provides very weak input to absolute altitude estimates at high altitudes (10-100ft). In contrast, estimates of relative altitude at low altitudes (0-5ft) were critically dependent on stereopsis, irrespective of terrain type. These findings are consistent with the view that stereopsis provides important information for altitude judgments when close to the ground; while at high altitudes these judgments are based primarily on the perception of 2D cues.\n
\n\n\n
\n\n\n
\n Paper: Expert Viewers' Preferences for Higher Frame Rate 3D Film. Allison, R. S., Wilcox, L. M., Anthony, R. C., Helliker, J., & Dunk, B. In IS&T International Symposium on Electronic Imaging 2017, Stereoscopic Displays and Applications XXVIII (Reprinted from Journal of Imaging Science and Technology), volume 2017, pages 20–28, January 2017.\n
\n
@inproceedings{allison_paper:_2017,\n\tabstract = {Recently the movie industry has been advocating the use of frame rates significantly higher than the traditional 24 frames per second. This higher frame rate theoretically improves the quality of motion portrayed in movies, and helps avoid motion blur, judder and other undesirable artifacts.\nPreviously we reported that young adult audiences showed a clear preference for higher frame rates, particularly when contrasting 24 fps with 48 or 60 fps. We found little impact of shutter angle (frame exposure time) on viewers' choices. In the current study we replicated this experiment\nwith an audience composed of imaging professionals who work in the film and display industry who assess image quality as an aspect of their everyday occupation. These viewers were also on average older and thus could be expected to have attachments to the film ``look'' both through experience\nand training. We used stereoscopic 3D content, filmed and projected at multiple frame rates (24, 48 and 60 fps), with shutter angles ranging from $90^{\\circ}$ to $358^{\\circ}$, to evaluate viewer preferences. In paired-comparison experiments we assessed preferences along a set of five attributes (realism,\nmotion smoothness, blur/clarity, quality of depth and overall preference). As with the young adults in the earlier study, the expert viewers showed a clear preference for higher frame rates, particularly when contrasting 24 fps with 48 or 60 fps. We found little impact of shutter angle on\nviewers' choices, with the exception of one clip at 48 fps where there was a preference for larger shutter angle. However, this preference was found for the most dynamic ``warrior'' clip in the experts but in the slower moving ``picnic'' clip for the na{\\"\\i}ve viewers. These data confirm the\nadvantages afforded by high-frame rate capture and presentation in a cinema context in both na{\\"\\i}ve audiences and experienced film professionals. },\n\tannote = {\nStereoscopic Displays and Applications 2017},\n\tauthor = {Allison, Robert S. and Wilcox, Laurie M. and Anthony, Roy C. and Helliker, John and Dunk, Bert},\n\tbooktitle = {{IS\\&T} International Symposium on Electronic Imaging 2017, Stereoscopic Displays and Applications XXVIII (Reprinted from Journal of Imaging Science and Technology)},\n\tdate-added = {2017-07-26 18:30:36 +0000},\n\tdate-modified = {2019-02-03 09:38:13 -0500},\n\tdoi = {10.2352/ISSN.2470-1173.2017.5.SD&A-353},\n\tjournal = {Electronic Imaging (Reprinted from Journal of Imaging Science and Technology)},\n\tkeywords = {Stereopsis},\n\tmonth = 01,\n\tnumber = {5},\n\tpages = {20--28},\n\ttitle = {Paper: {Expert} {Viewers}' {Preferences} for {Higher} {Frame} {Rate} 3D {Film}},\n\turl = {http://www.ingentaconnect.com/content/ist/ei/2017/00002017/00000005/art00003},\n\turl-1 = {http://www.ingentaconnect.com/content/ist/ei/2017/00002017/00000005/art00003},\n\turl-2 = {http://dx.doi.org/10.2352/ISSN.2470-1173.2017.5.SD&A-353},\n\tvolume = {2017},\n\tyear = {2017},\n\turl-1 = {http://www.ingentaconnect.com/content/ist/ei/2017/00002017/00000005/art00003},\n\turl-2 = {https://doi.org/10.2352/ISSN.2470-1173.2017.5.SD&A-353}}\n\n
\n
\n\n\n
\n Recently the movie industry has been advocating the use of frame rates significantly higher than the traditional 24 frames per second. This higher frame rate theoretically improves the quality of motion portrayed in movies, and helps avoid motion blur, judder and other undesirable artifacts. Previously we reported that young adult audiences showed a clear preference for higher frame rates, particularly when contrasting 24 fps with 48 or 60 fps. We found little impact of shutter angle (frame exposure time) on viewers' choices. In the current study we replicated this experiment with an audience composed of imaging professionals who work in the film and display industry and who assess image quality as an aspect of their everyday occupation. These viewers were also on average older and thus could be expected to have attachments to the film “look” both through experience and training. We used stereoscopic 3D content, filmed and projected at multiple frame rates (24, 48 and 60 fps), with shutter angles ranging from 90° to 358°, to evaluate viewer preferences. In paired-comparison experiments we assessed preferences along a set of five attributes (realism, motion smoothness, blur/clarity, quality of depth and overall preference). As with the young adults in the earlier study, the expert viewers showed a clear preference for higher frame rates, particularly when contrasting 24 fps with 48 or 60 fps. We found little impact of shutter angle on viewers' choices, with the exception of one clip at 48 fps where there was a preference for a larger shutter angle. However, this preference was found for the most dynamic “warrior” clip in the experts but in the slower moving “picnic” clip for the naïve viewers. These data confirm the advantages afforded by high-frame-rate capture and presentation in a cinema context in both naïve audiences and experienced film professionals.\n
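Paired-comparison preference counts of this kind are conventionally converted to a preference scale, for example with a Bradley-Terry model. The sketch below runs the standard minorization-maximization updates on a made-up win matrix for three frame rates; the counts and item set are illustrative, not results from the study.

```python
# Sketch: Bradley-Terry scaling of paired-comparison wins among frame rates.
# The win matrix is hypothetical, not data from the study.
import numpy as np

items = ["24 fps", "48 fps", "60 fps"]
# wins[i, j] = number of times item i was preferred over item j.
wins = np.array([[ 0., 12., 10.],
                 [38.,  0., 22.],
                 [40., 28.,  0.]])

n = wins + wins.T            # comparisons per pair (diagonal stays zero)
p = np.ones(len(items))      # Bradley-Terry strength parameters

for _ in range(200):         # standard MM updates (Hunter, 2004)
    denom = (n / (p[:, None] + p[None, :])).sum(axis=1)
    p = wins.sum(axis=1) / denom
    p /= p.sum()             # strengths are scale-free; normalize each pass

for item, strength in sorted(zip(items, p), key=lambda t: -t[1]):
    print(f"{item}: strength {strength:.3f}")
```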
\n\n\n
\n\n\n
\n Industry and business perspectives on the distinctions between visually lossless and lossy video quality: Mobile and large format displays. Brunnström, K., Allison, R. S., Chandler, D. M., Colett, H., Corriveau, P., Daly, S., Goel, J., Knopf, J., Wilcox, L. M., Yaacob, Y., Yang, S., & Zhang, Y. In IS&T International Symposium on Electronic Imaging 2017, Human Vision and Electronic Imaging 2017, volume 2017, pages 118–133, January 2017.\n
\n
@inproceedings{brunnstrom_industry_2017,\n\tauthor = {Brunnstr{\\"o}m, K. and Allison, R. S. and Chandler, D. M. and Colett, H. and Corriveau, P. and Daly, S. and Goel, J. and Knopf, J. and Wilcox, L. M. and Yaacob, Y. and Yang, S.-N. and Zhang, Y.},\n\tbooktitle = {{IS\\&T} International Symposium on Electronic Imaging 2017, Human Vision and Electronic Imaging 2017},\n\tdate-added = {2017-07-26 18:30:36 +0000},\n\tdate-modified = {2019-02-03 09:03:20 -0500},\n\tdoi = {10.2352/ISSN.2470-1173.2017.14.HVEI-131},\n\tissn = {2470-1173},\n\tjournal = {Electronic Imaging},\n\tkeywords = {Image Quality},\n\tlanguage = {en},\n\tmonth = 01,\n\tnumber = {14},\n\tpages = {118--133},\n\ttitle = {Industry and business perspectives on the distinctions between visually lossless and lossy video quality: {Mobile} and large format displays},\n\turl = {http://www.ingentaconnect.com/content/10.2352/ISSN.2470-1173.2017.14.HVEI-131},\n\turl-1 = {http://www.ingentaconnect.com/content/10.2352/ISSN.2470-1173.2017.14.HVEI-131},\n\turl-2 = {http://dx.doi.org/10.2352/ISSN.2470-1173.2017.14.HVEI-131},\n\turldate = {2017-07-26},\n\tvolume = {2017},\n\tyear = {2017},\n\turl-1 = {http://www.ingentaconnect.com/content/10.2352/ISSN.2470-1173.2017.14.HVEI-131},\n\turl-2 = {https://doi.org/10.2352/ISSN.2470-1173.2017.14.HVEI-131}}\n\n
\n
\n\n\n\n
\n\n\n
\n Large Scale Subjective Evaluation of Display Stream Compression. Allison, R. S., Wilcox, L. M., Wang, W., Hoffman, D. M., Hou, Y., Goel, J., Deas, L., & Stolitzka, D. In SID Digest of Technical Papers, volume 48 (1), pages 1101–1104, 2017.\n
\n
@inproceedings{Allison:ab,\n\tabstract = {VESA Display Stream Compression (DSC) is a light-weight codec designed for visually lossless compression over display links. Such high-performance algorithms must be evaluated subjectively to assess whether the codec meets visually lossless criteria. Here we present the first large-scale evaluation of DSC1.2 according to ISO/IEC 29170-2.},\n\tannote = {SID Meeting LA },\n\tauthor = {Allison, R. S. and Wilcox, L. M. and Wang, W. and Hoffman, D. M. and Hou, Y. and Goel, J. and Deas, L. and Stolitzka, D.},\n\tbooktitle = {SID Digest of Technical Papers},\n\tdate-added = {2017-04-14 15:16:12 +0000},\n\tdate-modified = {2017-06-13 11:48:47 +0000},\n\tdoi = {10.1002/sdtp.11838},\n\tkeywords = {Image Quality},\n\tpages = {1101--1104},\n\ttitle = {Large Scale Subjective Evaluation of Display Stream Compression},\n\turl-1 = {http://dx.doi.org/10.1002/sdtp.11838},\n\tvolume = {48 (1)},\n\tyear = {2017},\n\turl-1 = {https://doi.org/10.1002/sdtp.11838}}\n\n
\n
\n\n\n
\n VESA Display Stream Compression (DSC) is a light-weight codec designed for visually lossless compression over display links. Such high-performance algorithms must be evaluated subjectively to assess whether the codec meets visually lossless criteria. Here we present the first large-scale evaluation of DSC1.2 according to ISO/IEC 29170-2.\n
\n\n\n
\n\n\n
\n Estimating the Motion-to-Photon Latency in Head Mounted Displays. Zhao, J., Allison, R. S., Vinnikov, M., & Jennings, S. In IEEE Virtual Reality 2017, pages 313-314, 2017.\n
\n
@inproceedings{Zhao:aa,\n\tabstract = {We present a method for estimating the Motion-to-Photon (End-to-End) latency of head mounted displays (HMDs). The specific HMD evaluated in our study was the Oculus Rift DK2, but the procedure is general. We mounted the HMD on a pendulum to introduce damped sinusoidal motion to the HMD during the pendulum swing. The latency was estimated by calculating the phase shift between the captured signals of the physical motion of the HMD and a motion-dependent gradient stimulus rendered on the display. We used the proposed method to estimate both rotational and translational Motion-to-Photon latencies of the Oculus Rift DK2.},\n\tannote = {18-22 March 2017 Los Angeles},\n\tauthor = {Zhao, J. and Allison, R. S. and Vinnikov, M. and Jennings, S.},\n\tbooktitle = {IEEE Virtual Reality 2017},\n\tdate-added = {2017-04-14 14:54:33 +0000},\n\tdate-modified = {2017-04-14 14:54:33 +0000},\n\tdoi = {10.1109/VR.2017.7892302},\n\tkeywords = {Augmented & Virtual Reality},\n\tpages = {313-314},\n\ttitle = {Estimating the Motion-to-Photon Latency in Head Mounted Displays},\n\turl-1 = {http://dx.doi.org/10.1109/VR.2017.7892302},\n\tyear = {2017},\n\turl-1 = {https://doi.org/10.1109/VR.2017.7892302}}\n\n
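The phase-shift estimate described in this abstract can be illustrated with a cross-correlation between the two captured signals. A minimal sketch, assuming both signals are uniformly sampled and time-aligned at the start (signal names and parameters are invented for illustration, not the authors' code):

    # Estimate latency as the lag that maximises the cross-correlation
    # between the physical-motion signal and the display-response signal.
    import numpy as np

    def estimate_latency_ms(motion, display, fs):
        """motion, display: equal-length arrays sampled at fs (Hz)."""
        motion = motion - motion.mean()
        display = display - display.mean()
        xcorr = np.correlate(display, motion, mode="full")
        lag = np.argmax(xcorr) - (len(motion) - 1)  # samples display trails motion
        return 1000.0 * lag / fs

    # Synthetic check: a 0.5 Hz damped oscillation delayed by 40 ms.
    fs = 1000.0
    t = np.arange(0, 10, 1 / fs)
    pendulum = np.exp(-0.1 * t) * np.sin(2 * np.pi * 0.5 * t)
    delayed = np.exp(-0.1 * (t - 0.04)) * np.sin(2 * np.pi * 0.5 * (t - 0.04))
    print(estimate_latency_ms(pendulum, delayed, fs))  # ~40.0
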
techreport (1)

The effect of training on the use of binocular depth cues in low hover depth estimation. Hartle, B., Allison, R. S., Irving, E. L., & Wilcox, L. M. Technical Report PWGSC Contract Number: W7714-145967, CIMVHR Contract Report, 2017.

@techreport{Hartle:2017pb,
  abstract = {While there is a long history of research in the contribution of binocular vision and stereoscopic depth perception to flight-based tasks, there is no consensus on its operational relevance. Evidence of such operational relevance is required to determine whether stereoscopic vision should be a requirement for Canadian Air Forces (CAF) aircrew, or if and when waivers can safely be permitted. In the experiments reported herein we examined the contribution of binocular vision to a simulated low hover helicopter flight task in which observers were asked to judge the relative distance between a virtual helicopter skid and the ground plane. Four terrain types were used, and observers were asked to make relative depth judgements monocularly and binocularly. In the first study a group of na{\"\i}ve observers was tested, and in the second experiment we tested a group of experienced aircrew. Our results show that the presence of stereopsis improves the accuracy of relative altitude judgements for low altitudes (below 5 feet) that are typical of low hover flight operations. Under monocular viewing conditions depth judgements were significantly less accurate. This pattern of results was seen in both experiments, with na{\"\i}ve undergraduates and trained aircrew. However, we found that the depth estimates of aircrew were more accurate than those of na{\"\i}ve observers under monocular viewing conditions, a result that may reflect situation-specific training during operational maneuvers. From an operational perspective, these results highlight the potential importance of binocular vision in performing low-hover tasks, and the impact of training on the use of specific depth cues.},
  author = {Hartle, Brittney and Allison, Robert S. and Irving, Elizabeth L. and Wilcox, Laurie M.},
  date-added = {2019-03-08 16:43:56 -0500},
  date-modified = {2019-03-08 16:51:24 -0500},
  institution = {CIMVHR Contract Report},
  keywords = {Stereopsis},
  number = {PWGSC Contract Number: W7714-145967},
  title = {The effect of training on the use of binocular depth cues in low hover depth estimation},
  year = {2017}}

2016 (3)

article (6)

Expert Viewers' Preferences for Higher Frame Rate 3D Film. Allison, R. S., Wilcox, L. M., Anthony, R. C., Helliker, J., & Dunk, A. Journal of Imaging Science and Technology (Also presented at IS&T Stereoscopic Displays and Applications), 60(6): 60402.1-60402.9. 2016.

@article{Allison:zp,
  abstract = {Recently the movie industry has been advocating the use of frame rates significantly higher than the traditional 24 frames per second. This higher frame rate theoretically improves the quality of motion portrayed in movies, and helps avoid motion blur, judder and other undesirable artifacts. Previously we reported that young adult audiences showed a clear preference for higher frame rates, particularly when contrasting 24 fps with 48 or 60 fps. We found little impact of shutter angle (frame exposure time) on viewers' choices. In the current study we replicated this experiment with an audience composed of imaging professionals who work in the film and display industry and who assess image quality as an aspect of their everyday occupation. These viewers were also on average older and thus could be expected to have attachments to the ``film look'' both through experience and training. We used stereoscopic 3D content, filmed and projected at multiple frame rates (24, 48 and 60 fps), with shutter angles ranging from 90\degree to 358\degree, to evaluate viewer preferences. In paired-comparison experiments we assessed preferences along a set of five attributes (realism, motion smoothness, blur/clarity, quality of depth and overall preference). As with the young adults in the earlier study, the expert viewers showed a clear preference for higher frame rates, particularly when contrasting 24 fps with 48 or 60 fps. We found little impact of shutter angle on viewers' choices, with the exception of one clip at 48 fps where there was a preference for the larger shutter angle. However, this preference was found for the most dynamic ``warrior'' clip in the experts but in the slower moving ``picnic'' clip for the na{\"\i}ve viewers. These data confirm the advantages afforded by high frame rate capture and presentation in a cinema context in both na{\"\i}ve audiences and experienced film professionals.},
  author = {Allison, R. S. and Wilcox, L. M. and Anthony, R. C. and Helliker, J. and Dunk, A.},
  date-added = {2016-09-12 20:15:43 +0000},
  date-modified = {2019-02-03 09:04:09 -0500},
  doi = {10.2352/J.ImagingSci.Technol.2016.60.6.060402},
  journal = {Journal of Imaging Science and Technology (Also presented at {IS\&T} Stereoscopic Displays and Applications)},
  keywords = {Stereopsis},
  number = {6},
  pages = {60402.1-60402.9},
  title = {Expert Viewers' Preferences for Higher Frame Rate 3D Film},
  volume = {60},
  year = {2016}}

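Paired-comparison preferences like those in this study are commonly converted to an interval scale with Thurstonian scaling. A minimal sketch of that generic analysis (hypothetical tallies; the paper reports its own statistics):

    # Thurstone Case V style scaling of paired-comparison counts:
    # wins[i][j] = number of times option i was preferred over option j.
    from statistics import NormalDist

    def thurstone_scale(wins):
        n = len(wins)
        z = NormalDist()
        scores = []
        for i in range(n):
            zs = []
            for j in range(n):
                if i == j:
                    continue
                total = wins[i][j] + wins[j][i]
                p = (wins[i][j] + 0.5) / (total + 1.0)  # smoothed proportion
                zs.append(z.inv_cdf(p))
            scores.append(sum(zs) / len(zs))
        return scores

    # Hypothetical tallies for 24, 48 and 60 fps (row beats column):
    wins = [[0, 5, 4], [25, 0, 14], [26, 16, 0]]
    print(thurstone_scale(wins))  # higher score = more preferred
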
Airborne optical and thermal remote sensing for wildfire detection and monitoring. Allison, R. S., Johnston, J. M., Craig, G., & Jennings, S. Sensors, 16(8): 1310.1-1310.29. 2016.

@article{Allison:2016aa,
  author = {Allison, R. S. and Johnston, J. M. and Craig, G. and Jennings, S.},
  date-added = {2016-08-16 23:24:18 +0000},
  date-modified = {2018-11-25 14:23:17 -0500},
  doi = {10.3390/s16081310},
  journal = {Sensors},
  keywords = {Misc.},
  number = {8},
  pages = {1310.1-1310.29},
  title = {Airborne optical and thermal remote sensing for wildfire detection and monitoring},
  volume = {16},
  year = {2016}}

Size matters: Perceived depth magnitude varies with stimulus height. Tsirlin, I., Wilcox, L. M., & Allison, R. S. Vision Research, 123: 41-45. 2016.

@article{Tsirlin:yq,
  abstract = {Both the upper and lower disparity limits for stereopsis vary with the size of the targets. Recently, Tsirlin, Wilcox and Allison (2012) suggested that perceived depth magnitude from stereopsis might also depend on the vertical extent of a stimulus. To test this hypothesis we compared apparent depth in small discs to depth in long bars with equivalent width and disparity. We used three estimation techniques: a virtual ruler, a touch-sensor (for haptic estimates) and a disparity probe. We found that depth estimates were significantly larger for the bar stimuli than for the disc stimuli for all methods of estimation and different configurations. In a second experiment, we measured perceived depth as a function of the height of the bar and the radius of the disc. Perceived depth increased with increasing bar height and disc radius suggesting that disparity is integrated along the vertical edges. We discuss size-disparity correlation and inter-neural excitatory connections as potential mechanisms that could account for these results.},
  author = {Tsirlin, I. and Wilcox, L. M. and Allison, R. S.},
  date-added = {2016-04-09 17:29:11 +0000},
  date-modified = {2016-08-28 17:55:53 +0000},
  doi = {10.1016/j.visres.2016.04.006},
  journal = {Vision Research},
  keywords = {Stereopsis},
  pages = {41-45},
  title = {Size matters: Perceived depth magnitude varies with stimulus height},
  volume = {123},
  year = {2016}}

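As background to the equivalent-disparity manipulation above (standard small-angle stereo geometry, not a formula from the paper itself): with interocular separation $a$, viewing distance $D$, and relative angular disparity $\delta$, the predicted depth interval is

    \[
      \delta = \frac{a\,\Delta d}{D\,(D + \Delta d)}
      \qquad\Longrightarrow\qquad
      \Delta d = \frac{\delta D^{2}}{a - \delta D}
      \;\approx\; \frac{\delta D^{2}}{a} \quad (\delta D \ll a),
    \]

which is why stimuli matched in angular disparity at a fixed viewing distance carry the same geometrically predicted depth, so any difference in perceived depth must come from the stimulus configuration itself.
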
Impact of Depth of Field Simulation on Visual Fatigue: Who are Impacted? and How? Vinnikov, M., Allison, R. S., & Fernandes, S. International Journal of Human-Computer Studies, 91: 37-51. 2016.

@article{Vinnikov:zr,
  abstract = {While stereoscopic content can be compelling, it is not always comfortable for users to interact with on a regular basis. This is because the stereoscopic content on displays viewed at a short distance has been associated with different symptoms such as eye-strain, visual discomfort, and even nausea. Many of these symptoms have been attributed to cue conflict, for example between vergence and accommodation. To resolve those conflicts, volumetric and other displays have been proposed to improve the user's experience. However, these displays are expensive, unduly restrict viewing position, or provide poor image quality. As a result, commercial solutions are not readily available. We hypothesized that some of the discomfort and fatigue symptoms exhibited from viewing in stereoscopic displays may result from a mismatch between stereopsis and blur, rather than between sensed accommodation and vergence. To find factors that may support or disprove this claim, we built a real-time gaze-contingent system that simulates depth of field (DOF) that is associated with accommodation at the virtual depth of the point of regard (POR). Subsequently, a series of experiments evaluated the impact of DOF on people of different age groups (younger versus older adults). The difference between short duration discomfort and fatigue due to prolonged viewing was also examined. Results indicated that age may be a determining factor for a user's experience of DOF. There was also a major difference in a user's perception of viewing comfort during short-term exposure and prolonged viewing. Primarily, people did not find that the presence of DOF enhanced short-term viewing comfort, while DOF alleviated some symptoms of visual fatigue but not all.},
  author = {Vinnikov, M. and Allison, R. S. and Fernandes, S.},
  date-added = {2016-03-01 23:27:46 +0000},
  date-modified = {2016-04-26 14:03:59 +0000},
  doi = {10.1016/j.ijhcs.2016.03.001},
  journal = {International Journal of Human-Computer Studies},
  keywords = {Eye Movements & Tracking},
  pages = {37-51},
  title = {Impact of Depth of Field Simulation on Visual Fatigue: Who are Impacted? and How?},
  url = {http://percept.eecs.yorku.ca/papers/vinnikov%20dof.pdf},
  volume = {91},
  year = {2016}}

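A gaze-contingent depth-of-field simulation of the kind described here needs a per-pixel blur magnitude. A minimal sketch using the standard thin-lens circle-of-confusion relation (the eye-model parameter values are illustrative assumptions, not the paper's implementation):

    # Blur-circle diameter for a point at depth z when the (thin-lens)
    # eye accommodates to the point of regard.
    def blur_circle_mm(z_m, por_m, pupil_mm=4.0, focal_mm=17.0):
        """Circle-of-confusion diameter (mm) for an object at z_m metres
        while focused at por_m metres."""
        s1 = por_m * 1000.0   # focus distance, mm
        s2 = z_m * 1000.0     # object distance, mm
        return pupil_mm * abs(s2 - s1) / s2 * focal_mm / (s1 - focal_mm)

    # Per-pixel use: look up z from the depth buffer and blur with a
    # kernel whose radius is proportional to blur_circle_mm(z, por).
    print(round(blur_circle_mm(0.5, 2.0), 3))  # near object, far focus
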
The nature and timing of pseudoscopic experiences. Palmisano, S. A., Hill, H., & Allison, R. S. i-Perception, 7(1): Article 2041669515625793, 1-24. 2016.

@article{Palmisano:rz,
  author = {Palmisano, S. A. and Hill, H. and Allison, R. S.},
  date-added = {2015-12-05 15:24:00 +0000},
  date-modified = {2018-11-25 14:26:07 -0500},
  doi = {10.1177/2041669515625793},
  journal = {i-Perception},
  keywords = {Stereopsis},
  number = {1},
  pages = {Article 2041669515625793, 1-24},
  title = {The nature and timing of pseudoscopic experiences},
  url = {http://ipe.sagepub.com/content/7/1/2041669515625793.full},
  volume = {7},
  year = {2016}}

Accommodation and pupil responses to random-dot stereograms. Suryakumar, R., & Allison, R. S. Journal of Optometry, 9(1): 40-46. 2016.

@article{Suryakumar:2020yq,
  author = {Suryakumar, R. and Allison, R. S.},
  date-added = {2015-03-05 17:38:12 +0000},
  date-modified = {2016-04-26 14:05:55 +0000},
  doi = {10.1016/j.optom.2015.03.002},
  journal = {Journal of Optometry},
  keywords = {Eye Movements & Tracking, Stereopsis},
  number = {1},
  pages = {40-46},
  title = {Accommodation and pupil responses to random-dot stereograms},
  url = {http://percept.eecs.yorku.ca/papers/suryakumar%202015.pdf},
  volume = {9},
  year = {2016}}

incollection (5)

Mobility Assessment Tool (MAT), Computer Vision for A Purely Objective Gait-Balance Test. Bunn, F., Allison, R. S., Sergio, L., Gorbet, D., Bunn, S., & Zhao, J. In Falls & Mobility Network Meeting 2016 - Research and Innovative Clinical Practices. 2016.

@incollection{Bunn:2016aa,
  abstract = {The traditionally subjective mobility gait and balance test -- the Tinetti -- is now an objective computer measurement. The Tinetti is a standard test for determining the risk of falling. With the help of York University (Computer Science \& Engineering and Health/Kinesiology), and support of the NSERC Engage program, the Mobility Assessment Tool, MAT, was developed as a non-invasive, reproducible, reliable test based on a modified Tinetti test. MAT uses the analysis of a three minute video of a subject sitting, standing up, sitting back down, walking a few paces, and turning in a circle and on the spot.

The MAT analysis software runs on an off the shelf laptop computer to analyze the video taken with a standard Microsoft Kinect dual channel camera. Built into the camera is the separation of the moving subject from the background. It also overlays a twenty two point ``skeleton'' representing the movement of the skeleton-points of the subject. The analysis takes a few seconds and produces a measurement of thirty two different parameters of the subject's movement which is depicted by the skeleton points. Twenty two of these parameters are used to calculate the Tinetti score for the risk of falling (low, moderate, or high risk). The discussion will focus on the simplicity and ease of use of the MAT as a diagnostic and tracking tool. Applications of the MAT include: 1) tracking the rehabilitation milestones for concussion patients, 2) monitoring the rehabilitation of stroke patients, 3) tracking the stabilization or deterioration of Alzheimer's patients.},
  annote = {Toronto. Falls \& Mobility Network Meeting, Nov. 21, 2016},
  author = {Bunn, F. and Allison, R. S. and Sergio, L. and Gorbet, D. and Bunn, S. and Zhao, J.},
  booktitle = {Falls \& Mobility Network Meeting 2016 - Research and Innovative Clinical Practices.},
  date-added = {2016-12-04 21:58:08 +0000},
  date-modified = {2016-12-04 22:12:07 +0000},
  keywords = {Optic flow & Self Motion (also Locomotion & Aviation)},
  title = {Mobility Assessment Tool ({MAT}), Computer Vision for A Purely Objective Gait-Balance Test.},
  year = {2016}}

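As a purely hypothetical illustration of deriving one movement parameter from tracked skeleton frames (the MAT's thirty-two parameters are not published here, so the joint name, frame layout, and scoring use are invented):

    # Walking speed from successive positions of a central joint in
    # per-frame skeleton data: frames[i] maps joint name -> (x, y, z)
    # in metres, sampled at fps frames per second.
    import math

    def walking_speed(frames, joint="SpineBase", fps=30.0):
        """Mean horizontal speed (m/s) over a sequence of >= 2 frames."""
        dist = 0.0
        for a, b in zip(frames, frames[1:]):
            (x0, _, z0), (x1, _, z1) = a[joint], b[joint]
            dist += math.hypot(x1 - x0, z1 - z0)  # ignore vertical bob
        return dist * fps / (len(frames) - 1)

    # A Tinetti-style rule could then threshold scalar parameters like
    # this one into low / moderate / high fall-risk categories.
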
Visual perception of transparent objects in real and virtual world. Sultana, A., & Allison, R. S. In IRTG Workshop, Frankfurt Germany July 11-15, 2016. 2016.

@incollection{Sultana:2016fj,
  annote = {Frankfurt Germany July 11-15, 2016},
  author = {Sultana, A. and Allison, R. S.},
  booktitle = {IRTG Workshop, Frankfurt Germany July 11-15, 2016},
  date-added = {2016-12-04 21:56:23 +0000},
  date-modified = {2016-12-04 22:09:28 +0000},
  keywords = {Stereopsis},
  title = {Visual perception of transparent objects in real and virtual world},
  year = {2016}}

Shape perception of water in photo-realistic 3D images. Sultana, A., & Allison, R. S. In 12th International Conference on Light and Color in Nature. 2016.

@incollection{Sultana:2016aa,
  abstract = {Light plays an extremely important role in the perception of transparency, depth and shape of water. In computer graphics, an important question is: How does perception of transparent objects (including 3D shape reconstruction) depend on the fidelity of the rendering? In this paper, I present a theoretical method to recover the surface shape of water under day light settings based on human visual stereoscopic view. Flat transparent objects with arbitrary depth should pass all the incoming light and we should obtain a clear view of the background through them. However, when the surface is not flat, the image of the background or the underlying surface gets distorted because the chromatic light passing through different regions of the water surface experiences distortion and absorption that varies with wavelength. The reflection and refraction angles of light hitting the surface of a material depend on the direction of the light, its spectral composition, the medium and its surface shape. In this study, we evaluate and improve the cues available for perceiving shape of refractive objects by exploring the relationship in a 3D view between (a) reflective highlights on the water surface that depend largely on the lighting conditions and refractive features (with known index of refraction) seen through the surface of the medium, (b) viewing and perceiving conditions (stereoscopic and/or non-stereoscopic) and (c) textures and shading that provide cues to distortions.
Analysis to date predicts that humans should have information to identify and reconstruct the shape of an object in refractive stereo given that:
i. the object is transparent and visible;
ii. the scene redirects incoming light just once and the index of refraction is known;
iii. the surface is both optically smooth and textured;
iv. at least two viewpoints are available to obtain one 3D point on its light path at the viewing surface.
Psychophysical experiments that we are undertaking will either confirm or falsify human perceptual capability to identify and reconstruct shape of an object under the above conditions.},
  annote = {31 May - 3 June 2016 at the University of Granada, Spain},
  author = {Sultana, A. and Allison, R. S.},
  booktitle = {12th International Conference on Light and Color in Nature},
  date-added = {2016-12-04 21:56:23 +0000},
  date-modified = {2016-12-04 21:57:13 +0000},
  keywords = {Stereopsis},
  title = {Shape perception of water in photo-realistic 3D images},
  year = {2016}}

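The single-refraction condition (ii) in this abstract is Snell's law applied once at the water surface. A minimal sketch of the standard vector form (not the authors' implementation):

    # Refract a unit incident direction at a surface with unit normal;
    # eta = n1 / n2 (e.g. air-to-water eta = 1.0 / 1.33).
    import numpy as np

    def refract(incident, normal, eta):
        """Refracted direction, or None on total internal reflection."""
        cos_i = -np.dot(normal, incident)
        k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
        if k < 0.0:
            return None                  # total internal reflection
        return eta * incident + (eta * cos_i - np.sqrt(k)) * normal

    # A ray hitting flat water at 45 degrees from above bends toward
    # the normal:
    d = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)
    n = np.array([0.0, 1.0, 0.0])
    print(refract(d, n, 1.0 / 1.33))

With two calibrated viewpoints, each refracted ray constrains one 3D point on the light path, which is the basis for the refractive-stereo reconstruction the abstract describes.
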
A Wide Field Immersive Display for the Study of Locomotor Behaviour. Zhao, J., & Allison, R. S. In CAN CAPnet-CPS Satellite Meeting - Action & Perception: Cognition, Coding and Clinical Populations. 2016.

@incollection{Zhao:2016aa,
  abstract = {We describe a unique new wide-field immersive stereoscopic environment (WISE) that can render real-time, interactive and immersive stereoscopic simulations. This recently constructed virtual environment allows for presenting seamless, high-resolution, high-contrast, binocular photorealistic renderings of challenging environments over the entire visual field. The display can present complex terrain with naturalistic texture and is equipped with motion tracking as well as linear and (soon) rotary treadmills (or the subject can be seated). Based on these capabilities we can present variations in simulated walking surfaces, potential obstacles, or interception targets and determine how these influence locomotor behaviour. We will describe experiments assessing the utility of mechanical repositioning interfaces as a locomotion interface.},
  annote = {Sunday May 29th, 2016, Toronto},
  author = {Zhao, J. and Allison, R. S.},
  booktitle = {CAN CAPnet-CPS Satellite Meeting - Action \& Perception: Cognition, Coding and Clinical Populations},
  date-added = {2016-09-12 20:17:26 +0000},
  date-modified = {2016-09-12 20:17:26 +0000},
  keywords = {Optic flow & Self Motion (also Locomotion & Aviation)},
  title = {A Wide Field Immersive Display for the Study of Locomotor Behaviour},
  year = {2016}}

The effect of frame rate and motion blur on vection. Fujii, Y., Allison, R. S., Guterman, P., & Wilcox, L. M. In Journal of Vision (VSS Abstract), volume 16, pages 184. 2016.

@incollection{Fujii:2016aa,
  author = {Fujii, Y. and Allison, R. S. and Guterman, P. and Wilcox, L. M.},
  booktitle = {Journal of Vision (VSS Abstract)},
  date-added = {2016-09-09 01:10:16 +0000},
  date-modified = {2016-09-09 01:10:16 +0000},
  doi = {10.1167/16.12.184},
  keywords = {Stereopsis},
  pages = {184},
  title = {The effect of frame rate and motion blur on vection},
  volume = {16},
  year = {2016}}

inproceedings (2)

The Effects of Depth Warping on Perceived Acceleration in Stereoscopic Animation. Laldin, S., Wilcox, L. M., & Allison, R. S. In 2016 International Conference on 3D Imaging (IC3D), pages 1-8, 2016. IEEE.

@inproceedings{Laldin:aa,
  abstract = {Stereoscopic media produce the sensation of depth through differences between the images presented to the two eyes. These differences arise from binocular parallax which in turn is caused by the separation of the cameras used to capture the scene. Creators of stereoscopic media face the challenge of depicting compelling depth while restricting the amount of parallax to a comfortable range. To address this tradeoff, stereoscopic warping or depth adjustment algorithms are used in the post-production process to selectively increase or decrease the depth in specific regions. This process modifies the image's depth-to-parallax mapping to suit the desired parallax range. As the depth is adjusted using non-linear parallax re-mapping functions, the geometric stereoscopic space is distorted. In addition, the relative expansion or compression of stereoscopic space should theoretically affect the perceived acceleration of an object passing through that region. Here we evaluate this prediction and determine if stereoscopic warping affects viewers' perception of acceleration. Observers judged the perceived acceleration of an approaching object (a toy helicopter) moving in depth through a complex stereoscopic 3D scene. The helicopter flew at one of two altitudes, either ground level or camera level. For each altitude, stereoscopic animations were produced under three depth re-mapping conditions i) compressive, ii) expansive, and iii) zero (no re-mapping) for a total of six test conditions.

We predicted that expansive depth re-mapping would produce a bias toward perceiving deceleration of the approaching helicopter, while compressive depth re-mapping would result in a bias toward seeing acceleration. However, there was no significant difference in the amount or direction of bias between the re-mapping conditions. We did find a significant effect of the helicopter altitude, such that there was little bias in acceleration judgments when the helicopter moved at ground level but a significant bias towards reporting acceleration when the helicopter moved at camera level. This result is consistent with the proposal that observers can make use of additional monocular (2D) cues in the ground level condition to improve their acceleration estimates. The lack of an effect of depth re-mapping suggests that viewers have considerable tolerance to depth distortions resulting from stereoscopic post-processing. These results have important implications for effective post-production and quality assurance for stereoscopic 3D content creation.},
  annote = {Li{\`e}ge, 13-14 Dec. 2016},
  author = {Laldin, S. and Wilcox, L. M. and Allison, R. S.},
  booktitle = {2016 International Conference on 3D Imaging (IC3D)},
  date-added = {2016-12-04 21:59:19 +0000},
  date-modified = {2017-02-04 15:59:47 +0000},
  doi = {10.1109/IC3D.2016.7823446},
  keywords = {Stereopsis},
  organization = {IEEE},
  pages = {1-8},
  title = {The Effects of Depth Warping on Perceived Acceleration in Stereoscopic Animation},
  year = {2016}}

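The non-linear parallax re-mapping functions discussed above can be pictured with a simple compressive curve. A hypothetical example (illustrative only; production depth-warping tools use more sophisticated, content-adaptive mappings):

    # Squeeze screen parallax smoothly into a comfort budget: the curve
    # is near-linear for small parallax and saturates at +/- budget,
    # which is exactly the kind of compression of stereoscopic space
    # whose perceptual consequences the paper tests.
    import math

    def remap_parallax(d, budget=30.0):
        """Compress parallax d (pixels) to stay within +/- budget."""
        return budget * math.tanh(d / budget)

    # An 'expansive' remap would instead use a curve with slope > 1
    # near zero parallax.
    for d in (5.0, 30.0, 90.0):
        print(d, "->", round(remap_parallax(d), 1))
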
Hard Real-Time General-Purpose Robotic Simulations of Autonomous Air Vehicles. Walker, S. M., Shan, J., & Allison, R. S. In AIAA Modeling and Simulation Technologies Conference, AIAA SciTech, pages 1667.1-1667.12, 2016.

@inproceedings{Walker:fr,
  abstract = {High-fidelity general-purpose robotic simulators are a special class of simulator designed to simulate all the components of a real-world robotics system, including autonomous air vehicles and planetary exploration rovers, so that a real-world system can be tested and verified before/during deployment on the real-world hardware. General-purpose robotic simulators can simulate sensors, actuators, obstacles, terrains, environments, physics, lighting, fluids, and air particles, while also providing a means to verify the system's autonomous algorithms by using the simulated vehicle in place of the real-world one. General-purpose robotic simulators are typically coupled with an abstract robotic control interface so that autonomous systems evaluated on the simulated vehicles can be deployed, unchanged, on the corresponding real-world vehicles and vice versa. However, the problem with the current technology and research is that neither the robotic simulators nor the robotic control interfaces support Hard Real-Time capabilities, and cannot guarantee that Hard Real-Time constraints will be met. The lack of Hard Real-Time support has major implications on both the utility and the validity of the simulation results and the functioning of the real-world autonomous vehicle. As a solution, this paper will present Hard-RTSim, a novel hard real-time simulation framework that will: 1) Bring Hard Real-Time support to general-purpose robotic simulators; and 2) Bring Hard Real-Time support to abstract robotic control interfaces. Hard-RTSim guarantees that simulated events in the environment or modeled vehicle are produced and handled with finite (bounded) accuracy and precision. Furthermore it improves these temporal responses to ensure these bounds are representative of temporal requirements for a wide range of scenarios. The Hard-RTSim framework ensures that the simulator and the hard real-time processes will actually get to use the CPU when they request/need it, no matter how many other processes are loaded on the CPU. The experimental results of using the Hard-RTSim framework compared to not using it yield a huge improvement in responsiveness and reliability. There is an improvement of 35\% when the CPU is minimally loaded and then as the CPU load is increased the improvement increases as well, all the way up to a 98\% improvement when the CPU is loaded at its maximum. These substantial improvements in precision and reliability will help to further the state of space exploration, aerospace technology, and produce better and more reliable autonomous aerial vehicles and planetary exploration rovers.},
  annote = {4-8 January 2016, San Diego, California. AIAA Science and Technology Forum and Exposition (SciTech 2016)},
  author = {Walker, S. M. and Shan, J. and Allison, R. S.},
  booktitle = {AIAA Modeling and Simulation Technologies Conference, AIAA SciTech},
  date-added = {2015-12-05 15:21:38 +0000},
  date-modified = {2016-08-28 18:02:55 +0000},
  doi = {10.2514/6.2016-16},
  keywords = {Misc.},
  number = {AIAA 2016-1667},
  pages = {1667.1-1667.12},
  title = {Hard Real-Time General-Purpose Robotic Simulations of Autonomous Air Vehicles},
  url = {http://percept.eecs.yorku.ca/papers/FINAL%20SUBMISSION%20ShawnWalkerHardRTSim.pdf},
  year = {2016}}

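The guarantee that the simulator's hard real-time processes "will actually get to use the CPU" rests on fixed-priority real-time scheduling in the host OS. A minimal Linux-only sketch of that underlying mechanism (illustrative; this is not the Hard-RTSim code, and the priority value is an arbitrary assumption):

    # Switch the calling process to the SCHED_FIFO fixed-priority
    # real-time scheduler so a loaded CPU cannot starve its loop.
    import os

    def go_realtime(priority=50):
        """Requires root or CAP_SYS_NICE; Linux only."""
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))

    if __name__ == "__main__":
        try:
            go_realtime()
            print("running under SCHED_FIFO")
        except PermissionError:
            print("need CAP_SYS_NICE / root for real-time scheduling")

Under SCHED_FIFO the process preempts all ordinary time-shared work whenever it becomes runnable, which is consistent with the paper's observation that the benefit grows as background CPU load increases.
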
2015 (3)

article (3)

Evidence that viewers prefer higher frame rate film. Wilcox, L. M., Allison, R. S., Helliker, J., Dunk, A., & Anthony, R. C. ACM Transactions on Applied Perception (TAP), 12(14): Article 15. 2015.

@article{Wilcox:2015tap,
  abstract = {High frame rate movie-making refers to the capture and projection of movies at frame rates several times higher than the traditional 24 frames per second. This higher frame rate theoretically improves the quality of motion portrayed in movies, and helps avoid motion blur, judder and other undesirable artefacts. However, there is considerable debate in the cinema industry regarding the acceptance of HFR content given anecdotal reports of hyper-realistic imagery that reveals too much set and costume detail. Despite the potential theoretical advantages, there has been little empirical investigation of the impact of high-frame-rate techniques on the viewer experience. In this study we use stereoscopic 3D content, filmed and projected at multiple frame rates (24, 48 and 60 fps), with shutter angles ranging from $90^{\circ}$ to $358^{\circ}$, to evaluate viewer preferences. In a paired-comparison paradigm we assessed preferences along a set of five attributes (realism, motion smoothness, blur/clarity, quality of depth and overall preference). The resulting data show a clear preference for higher frame rates, particularly when contrasting 24 fps with 48 or 60 fps. We found little impact of shutter angle on viewers' choices, with the exception of one measure (motion smoothness) for one clip type. These data are the first empirical evidence of the advantages afforded by high frame rate capture and presentation in a cinema context.},
  annote = {ACM SIGGRAPH Symposium on Applied Perception, September 13-14, 2015, Tuebingen, Germany, at the Max Planck Institute for Biological Cybernetics},
  author = {Wilcox, L. M. and Allison, R. S. and Helliker, J. and Dunk, A. and Anthony, R. C.},
  date-added = {2015-06-28 11:16:16 +0000},
  date-modified = {2016-01-03 03:24:53 +0000},
  doi = {10.1145/2810039},
  journal = {{ACM} Transactions on Applied Perception ({TAP})},
  keywords = {Stereopsis},
  number = {14},
  pages = {Article 15},
  title = {Evidence that viewers prefer higher frame rate film},
  url = {http://dl.acm.org/citation.cfm?id=2821016.2810039},
  volume = {12},
  year = {2015}}

Perceptual tolerance to stereoscopic 3D image distortion. Allison, R. S., & Wilcox, L. M. ACM Transactions on Applied Perception, 12(3): Article 10, 1-20. 2015.

@article{Allison:ty,
  author = {Allison, R. S. and Wilcox, L. M.},
  date-added = {2015-05-01 18:47:53 +0000},
  date-modified = {2016-01-03 03:23:35 +0000},
  doi = {10.1145/2770875},
  journal = {{ACM} Transactions on Applied Perception},
  keywords = {Stereopsis},
  number = {3},
  pages = {Article 10, 1-20},
  title = {Perceptual tolerance to stereoscopic 3D image distortion},
  volume = {12},
  year = {2015}}

Future Challenges for Vection Research: Definitions, Functional Significance, Measures and Neural Bases. Palmisano, S. A., Allison, R. S., Schira, M. M., & Barry, R. J. Frontiers in Psychology Research, Perception Science, 6: Article 193, 1-15. 2015.

@article{Palmisano:2015db,
  author = {Palmisano, S. A. and Allison, R. S. and Schira, M. M. and Barry, R. J.},
  date-added = {2015-02-07 05:10:09 +0000},
  date-modified = {2018-11-25 14:31:04 -0500},
  doi = {10.3389/fpsyg.2015.00193},
  journal = {Frontiers in Psychology Research, Perception Science},
  keywords = {Optic flow & Self Motion (also Locomotion & Aviation)},
  pages = {Article 193, 1-15},
  title = {Future Challenges for Vection Research: Definitions, Functional Significance, Measures and Neural Bases},
  volume = {6},
  year = {2015}}

incollection (9)

Effects of head orientation and scene rigidity on vection. Guterman, P., & Allison, R. S. In Centre for Vision Research International Conference on Perceptual Organization, pages 65. York University, Toronto, June 23-26, 2015.

@incollection{Guterman:nr,
  abstract = {Changing head tilt relative to gravity changes the dynamic sensitivity of the otoliths to linear accelerations (gravitational and inertial). We explored whether visually induced self-motion (vection) is influenced by varying head tilt and optic flow direction with respect to gravity. We previously found that vection was enhanced when upright observers viewed vertical optic flow (i.e., simulating self-motion along the spinal axis) compared to horizontal flow. We hypothesized that if this benefit was due to aligning the visual motion signal with gravity, then inter-aural lamellar flow while lying on the side would provide a similar vection advantage. Observers stood and lay supine, prone, left and right side down, while viewing a translating random dot pattern simulating self-motion along the spinal or inter-aural axis. Vection magnitude estimates, onset, and duration were recorded. The results showed that aligning the direction of visual motion and gravity enhanced vection in side-lying observers, but when gravity was irrelevant---as in the supine and prone postures---spinal axis motion enhanced vection. However, perceived scene rigidity varied with head orientation (e.g., dots were seen as floating bubbles), so the issue of scene rigidity was examined by comparing vection in two environments: a rigid pipe structure which looked like a complex arrangement of plumbing pipes, and a field of dots. The results of varying head orientation, motion direction, and perceived scene rigidity will be discussed and may provide insight into whether self-motion perception is determined by a weighted summation of visual and vestibular signals.},
  author = {Guterman, P. and Allison, R. S.},
  booktitle = {Centre for Vision Research International Conference on Perceptual Organization},
  date-added = {2015-06-23 11:38:16 +0000},
  date-modified = {2015-06-23 11:38:16 +0000},
  keywords = {Optic flow & Self Motion (also Locomotion & Aviation)},
  month = {June 23-26, 2015},
  pages = {65},
  publisher = {York University, Toronto},
  title = {Effects of head orientation and scene rigidity on vection},
  year = {2015}}

Changing head tilt relative to gravity changes the dynamic sensitivity of the otoliths to linear accelerations (gravitational and inertial). We explored whether visually induced self-motion (vection) is influenced by varying head tilt and optic flow direction with respect to gravity. We previously found that vection was enhanced when upright observers viewed vertical optic flow (i.e., simulating self-motion along the spinal axis) compared to horizontal flow. We hypothesized that if this benefit was due to aligning the visual motion signal with gravity, then inter-aural lamellar flow while lying on the side would provide a similar vection advantage. Observers stood and lay supine, prone, and left and right side down, while viewing a translating random dot pattern simulating self-motion along the spinal or inter-aural axis. Vection magnitude estimates, onset, and duration were recorded. The results showed that aligning the direction of visual motion and gravity enhanced vection in side-lying observers, but when gravity was irrelevant (as in the supine and prone postures), spinal axis motion enhanced vection. However, perceived scene rigidity varied with head orientation (e.g., dots were seen as floating bubbles), so the issue of scene rigidity was examined by comparing vection in two environments: a rigid pipe structure, which looked like a complex arrangement of plumbing pipes, and a field of dots. The results of varying head orientation, motion direction, and perceived scene rigidity will be discussed and may provide insight into whether self-motion perception is determined by a weighted summation of visual and vestibular signals.
Visual Perception and Performance during NVG-aided Civilian Helicopter Flight.
Allison, R. S., Jennings, S., & Craig, G.
In 25th Annual Meeting of the Canadian Society for Brain, Behaviour and Cognitive Science (CSBBCS), Canadian Journal of Experimental Psychology, volume 69(4), pages 348. 2015.
\n\n\n\n \n \n \"Visual-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Allison:rt,
  abstract = {Civilian operations are an important and growing application of night vision goggles (NVGs). Such devices extend human sensory capabilities but also introduce perceptual artefacts. In a series of laboratory experiments and helicopter-based flight trials we analyzed subject performance on model tasks based on typical civilian aviation applications. In the context of security and search operations the tasks included directed search over open and forested terrain, detection and identification of a temporary landing zone, and search/tracking of a moving vehicle marked with a covert IR marker. Two other sets of flight trials explored the potential of night-vision aids in aerial wildfire detection; one was a controlled experiment and the other was part of operational aerial detection patrols. The results of these studies confirm that NVGs can provide significant operational value but also illustrate the limitations of the technology and the ability of human operators to compensate for perceptual distortions.},
  annote = {Carleton University hosted the 25th Annual Meeting of the Canadian Society for Brain, Behaviour and Cognitive Science (CSBBCS), June 5-7, 2015.},
  author = {Allison, R. S. and Jennings, S. and Craig, G.},
  booktitle = {25th Annual Meeting of the Canadian Society for Brain, Behaviour and Cognitive Science (CSBBCS), Canadian Journal of Experimental Psychology},
  doi = {10.1037/cep0000076},
  journal = {Canadian Journal of Experimental Psychology},
  keywords = {Night Vision},
  number = {4},
  pages = {348},
  title = {Visual Perception and Performance during NVG-aided Civilian Helicopter Flight},
  volume = {69},
  year = {2015},
  url-1 = {https://doi.org/10.1037/cep0000076}}
Civilian operations are an important and growing application of night vision goggles (NVGs). Such devices extend human sensory capabilities but also introduce perceptual artefacts. In a series of laboratory experiments and helicopter-based flight trials we analyzed subject performance on model tasks based on typical civilian aviation applications. In the context of security and search operations the tasks included directed search over open and forested terrain, detection and identification of a temporary landing zone, and search/tracking of a moving vehicle marked with a covert IR marker. Two other sets of flight trials explored the potential of night-vision aids in aerial wildfire detection; one was a controlled experiment and the other was part of operational aerial detection patrols. The results of these studies confirm that NVGs can provide significant operational value but also illustrate the limitations of the technology and the ability of human operators to compensate for perceptual distortions.
The neural correlates of vection - an fMRI study.
Kirollos, R., Allison, R. S., & Palmisano, S. A.
In 25th Annual Meeting of the Canadian Society for Brain, Behaviour and Cognitive Science (CSBBCS), Canadian Journal of Experimental Psychology, volume 69(4), pages 369. 2015.
\n\n\n\n \n \n \"The-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Kirollos:kx,
  abstract = {Vection is an illusion of visually-induced self-motion in a stationary observer. This functional magnetic resonance imaging (fMRI) study measured psychophysical and blood oxygenation level-dependent (BOLD) responses to two types of visual stimuli: coherent optic flow stimuli and scrambled versions which preserved local, but disrupted global, motion information. The coherent optic flow stimuli produced robust percepts of vection while the scrambled stimuli produced little or no vection. The cingulate sulcus visual area (CSv) showed the clearest selective activation for coherent optic flow compared to incoherent (scrambled) flow, suggesting that CSv is heavily involved in self-motion processing.},
  author = {Kirollos, R. and Allison, R. S. and Palmisano, S. A.},
  booktitle = {25th Annual Meeting of the Canadian Society for Brain, Behaviour and Cognitive Science (CSBBCS), Canadian Journal of Experimental Psychology},
  doi = {10.1037/cep0000076},
  keywords = {Optic flow & Self Motion (also Locomotion & Aviation)},
  number = {4},
  pages = {369},
  title = {The neural correlates of vection - an fMRI study},
  volume = {69},
  year = {2015},
  url-1 = {https://doi.org/10.1037/cep0000076}}
Vection is an illusion of visually-induced self-motion in a stationary observer. This functional magnetic resonance imaging (fMRI) study measured psychophysical and blood oxygenation level-dependent (BOLD) responses to two types of visual stimuli: coherent optic flow stimuli and scrambled versions which preserved local, but disrupted global, motion information. The coherent optic flow stimuli produced robust percepts of vection while the scrambled stimuli produced little or no vection. The cingulate sulcus visual area (CSv) showed the clearest selective activation for coherent optic flow compared to incoherent (scrambled) flow, suggesting that CSv is heavily involved in self-motion processing.
Stereoacuity for physically moving targets is unaffected by retinal motion.
Cutone, M., Allison, R. S., & Wilcox, L. M.
In Journal of Vision (VSS Abstracts), volume 15(12), pages 380. 2015.
\n\n\n\n \n \n \"Stereoacuity-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Cutone:uq,
  abstract = {Westheimer and McKee (1978, Journal of the Optical Society of America, 68(4), 450-455) reported that stereoacuity is unaffected by the speed of moving vertical line targets for speeds up to 2 deg/s. Subsequent studies found that thresholds rise exponentially at higher velocities (Ramamurthy, Patel & Bedell, 2005, Vision Research, 45(6), 789-799). This decrease in sensitivity has been attributed to retinal motion smearing; however, these experiments did not take into account the additional effects of display persistence. Here we reassess the effects of lateral velocity on stereoacuity in the absence of display persistence, using physically moving stimuli. Luminous vertical line targets were mounted on computer-controlled motion stages. This purpose-built system permitted precise control of target position and movement in three dimensions. In a 1IFC paradigm with a 120 ms viewing duration, observers fixated a stationary point and discriminated the relative depth of the two moving lines. The velocity of the line pair ranged from 0 (stationary) to 16 deg/s; each speed was tested in a separate block of trials. Our results confirm the resilience of stereoacuity to lateral retinal motion at velocities less than 2 deg/s. At higher speeds, thresholds for all observers increased marginally with speed. The rate of increase was 0.6 arc seconds per deg/s, approximately 10 times smaller than reported by Ramamurthy et al. (2005). It is clear that stereoacuity is more robust to lateral motion than previously believed; we argue that the threshold elevation reported previously was due to display persistence.},
  author = {Cutone, M. and Allison, R. S. and Wilcox, L. M.},
  booktitle = {Journal of Vision (VSS Abstracts)},
  doi = {10.1167/15.12.380},
  keywords = {Stereopsis},
  number = {12},
  pages = {380},
  title = {Stereoacuity for physically moving targets is unaffected by retinal motion},
  volume = {15},
  year = {2015},
  url-1 = {https://doi.org/10.1167/15.12.380}}
Westheimer and McKee (1978, Journal of the Optical Society of America, 68(4), 450-455) reported that stereoacuity is unaffected by the speed of moving vertical line targets for speeds up to 2 deg/s. Subsequent studies found that thresholds rise exponentially at higher velocities (Ramamurthy, Patel & Bedell, 2005, Vision Research, 45(6), 789-799). This decrease in sensitivity has been attributed to retinal motion smearing; however, these experiments did not take into account the additional effects of display persistence. Here we reassess the effects of lateral velocity on stereoacuity in the absence of display persistence, using physically moving stimuli. Luminous vertical line targets were mounted on computer-controlled motion stages. This purpose-built system permitted precise control of target position and movement in three dimensions. In a 1IFC paradigm with a 120 ms viewing duration, observers fixated a stationary point and discriminated the relative depth of the two moving lines. The velocity of the line pair ranged from 0 (stationary) to 16 deg/s; each speed was tested in a separate block of trials. Our results confirm the resilience of stereoacuity to lateral retinal motion at velocities less than 2 deg/s. At higher speeds, thresholds for all observers increased marginally with speed. The rate of increase was 0.6 arc seconds per deg/s, approximately 10 times smaller than reported by Ramamurthy et al. (2005). It is clear that stereoacuity is more robust to lateral motion than previously believed; we argue that the threshold elevation reported previously was due to display persistence.
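For scale, the reported slope implies only a modest elevation even at the fastest speed tested. Assuming the linear trend applies above the 2 deg/s knee (an interpretation of the numbers above, not a figure from the abstract):

$\Delta\theta(v) \approx 0.6\,\tfrac{\text{arcsec}}{\text{deg/s}} \times (v - 2\ \text{deg/s})$, so $\Delta\theta(16\ \text{deg/s}) \approx 8.4$ arcsec.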
The neural correlates of vection - an fMRI study.
Kirollos, R., Allison, R. S., & Palmisano, S. A.
In Journal of Vision (VSS Abstracts), volume 15(12), pages 1007. 2015.
\n\n\n\n \n \n \"The-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Kirollos:vn,
  author = {Kirollos, R. and Allison, R. S. and Palmisano, S. A.},
  booktitle = {Journal of Vision (VSS Abstracts)},
  doi = {10.1167/15.12.1007},
  keywords = {Optic flow & Self Motion (also Locomotion & Aviation)},
  number = {12},
  pages = {1007},
  title = {The neural correlates of vection - an fMRI study},
  volume = {15},
  year = {2015},
  url-1 = {https://doi.org/10.1167/15.12.1007}}
Heading Perception with Simulated Visual Defects.
Vinnikov, M., Allison, R. S., & Palmisano, S. A.
In Journal of Vision (VSS Abstracts), volume 15(12), pages 1015. 2015.
\n\n\n\n \n \n \"Heading-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Vinnikov:bh,
  abstract = {Heading perception depends on the ability of different regions of the visual field to extract accurate information about the direction of visual flow. Because it extracts the most accurate information, the central visual field plays a major role in heading estimation. With experience, people learn to utilize other regions, especially if there is central field loss or impairment. Nevertheless, it is not clear what happens when information in central vision becomes altered or cannot be picked up. In the present study, we examined the effects of gaze-contingent alteration of regions of the visual field on heading. On each trial, one of six different directions of self-motion was simulated (headings $\pm 7.5^{\circ}$, $\pm 5.0^{\circ}$ and $\pm 2.5^{\circ}$ from the centre of the screen). The simulated defects were analogous to two typical visual field disturbances resulting from macular degeneration: metamorphopsia and scotomas. Specifically, with a forced-choice procedure we compared performance with no visual defects to that with five different simulated defects (either $5^{\circ}$ or $10^{\circ}$ horizontal perturbations, $5^{\circ}$ or $10^{\circ}$ Gaussian perturbations, or a $10^{\circ}$ scotoma). We also examined three gaze conditions: free viewing, directional viewing, and tracking features in the scene. Heading performance was not significantly different in the two environments examined (translation over a plane covered with blue particles or through a forest). Performance declined in the presence of simulated visual defects, as well as when observers were instructed to visually track specific scene features. Performance was most accurate for all heading directions during the free-view conditions. We conclude that when people are free to direct their gaze in the scene they are able to minimize the impact of simulated central visual field loss or distortion.},
  author = {Vinnikov, M. and Allison, R. S. and Palmisano, S. A.},
  booktitle = {Journal of Vision (VSS Abstracts)},
  doi = {10.1167/15.12.1015},
  keywords = {Optic flow & Self Motion (also Locomotion & Aviation)},
  number = {12},
  pages = {1015},
  title = {Heading Perception with Simulated Visual Defects},
  volume = {15},
  year = {2015},
  url-1 = {https://doi.org/10.1167/15.12.1015}}
Heading perception depends on the ability of different regions of the visual field to extract accurate information about the direction of visual flow. Because it extracts the most accurate information, the central visual field plays a major role in heading estimation. With experience, people learn to utilize other regions, especially if there is central field loss or impairment. Nevertheless, it is not clear what happens when information in central vision becomes altered or cannot be picked up. In the present study, we examined the effects of gaze-contingent alteration of regions of the visual field on heading. On each trial, one of six different directions of self-motion was simulated (headings $\pm 7.5^{\circ}$, $\pm 5.0^{\circ}$ and $\pm 2.5^{\circ}$ from the centre of the screen). The simulated defects were analogous to two typical visual field disturbances resulting from macular degeneration: metamorphopsia and scotomas. Specifically, with a forced-choice procedure we compared performance with no visual defects to that with five different simulated defects (either $5^{\circ}$ or $10^{\circ}$ horizontal perturbations, $5^{\circ}$ or $10^{\circ}$ Gaussian perturbations, or a $10^{\circ}$ scotoma). We also examined three gaze conditions: free viewing, directional viewing, and tracking features in the scene. Heading performance was not significantly different in the two environments examined (translation over a plane covered with blue particles or through a forest). Performance declined in the presence of simulated visual defects, as well as when observers were instructed to visually track specific scene features. Performance was most accurate for all heading directions during the free-view conditions. We conclude that when people are free to direct their gaze in the scene they are able to minimize the impact of simulated central visual field loss or distortion.
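As a rough illustration of the gaze-contingent simulation technique (a minimal sketch, not the authors' implementation; the frame format, mid-grey fill value, and pixel radius are assumptions), a simulated central scotoma can be drawn each refresh by masking a circular region centred on the tracked gaze point:

import numpy as np

def apply_scotoma(frame, gaze_xy, radius_px):
    """Mask a circular, gaze-centred region of an RGB frame: a crude
    stand-in for the simulated 10-degree central scotoma condition."""
    h, w = frame.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    mask = (xx - gaze_xy[0]) ** 2 + (yy - gaze_xy[1]) ** 2 < radius_px ** 2
    out = frame.copy()
    out[mask] = 128  # mid-grey fill; a metamorphopsia condition would warp, not blank, this region
    return out

Run once per video frame with gaze_xy supplied by the eye tracker, the defect follows the eye; the perturbation conditions would instead remap pixel coordinates within the same gaze-centred region.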
The influence of scene rigidity and head tilt on vection.
Guterman, P., & Allison, R. S.
In Journal of Vision (VSS Abstracts), volume 15(12), pages 862. 2015.
\n\n\n\n \n \n \"The-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Guterman:ys,
  abstract = {Changing head orientation with respect to gravity changes the dynamic sensitivity of the otoliths to linear accelerations (gravitational and inertial). We explored whether varying head orientation and optic flow direction relative to gravity affects the perception of visually induced self-motion (vection). We previously found that vection was enhanced when upright observers viewed lamellar flow that moved vertically relative to the head (i.e., simulating self-motion along the spinal axis) compared to horizontal flow. We hypothesized that if this benefit was due to aligning the simulated self-motion with gravity, then inter-aural (as opposed to spinal) axis motion while lying on the side would provide a similar vection advantage. Alternatively, motion along the spinal axis could enhance vection regardless of head orientation relative to gravity. Observers stood and lay supine, prone, and left and right side down, while viewing a translating random dot pattern that simulated observer motion along the spinal or inter-aural axis. Vection magnitude estimates, onset, and duration were recorded. The results showed that aligning the optic flow direction with gravity enhanced vection in side-lying observers, but when overlapping these signals was not possible (as in the supine and prone postures), spinal axis motion enhanced vection. However, perceived scene rigidity varied with head orientation (e.g., dots were seen as floating bubbles in some conditions). To examine the issue of scene rigidity, we compared vection during simulated motion with respect to two environments: a rigid pipe structure, which looked like a complex arrangement of plumbing pipes, and a field of dots. The results of varying head and motion direction and perceived scene rigidity will be discussed, and may provide insight into whether self-motion perception is determined by a weighted summation of visual and vestibular inputs.},
  author = {Guterman, P. and Allison, R. S.},
  booktitle = {Journal of Vision (VSS Abstracts)},
  doi = {10.1167/15.12.862},
  keywords = {Optic flow & Self Motion (also Locomotion & Aviation)},
  number = {12},
  pages = {862},
  title = {The influence of scene rigidity and head tilt on vection},
  volume = {15},
  year = {2015},
  url-1 = {https://doi.org/10.1167/15.12.862}}
Changing head orientation with respect to gravity changes the dynamic sensitivity of the otoliths to linear accelerations (gravitational and inertial). We explored whether varying head orientation and optic flow direction relative to gravity affects the perception of visually induced self-motion (vection). We previously found that vection was enhanced when upright observers viewed lamellar flow that moved vertically relative to the head (i.e., simulating self-motion along the spinal axis) compared to horizontal flow. We hypothesized that if this benefit was due to aligning the simulated self-motion with gravity, then inter-aural (as opposed to spinal) axis motion while lying on the side would provide a similar vection advantage. Alternatively, motion along the spinal axis could enhance vection regardless of head orientation relative to gravity. Observers stood and lay supine, prone, and left and right side down, while viewing a translating random dot pattern that simulated observer motion along the spinal or inter-aural axis. Vection magnitude estimates, onset, and duration were recorded. The results showed that aligning the optic flow direction with gravity enhanced vection in side-lying observers, but when overlapping these signals was not possible (as in the supine and prone postures), spinal axis motion enhanced vection. However, perceived scene rigidity varied with head orientation (e.g., dots were seen as floating bubbles in some conditions). To examine the issue of scene rigidity, we compared vection during simulated motion with respect to two environments: a rigid pipe structure, which looked like a complex arrangement of plumbing pipes, and a field of dots. The results of varying head and motion direction and perceived scene rigidity will be discussed, and may provide insight into whether self-motion perception is determined by a weighted summation of visual and vestibular inputs.
Simulating spatial auditory attention in a gaze contingent display: The virtual cocktail party.
Allison, R. S., & Vinnikov, M.
In European Conference on Visual Perception, ECVP 2015, Perception, volume 44(S1), pages 81. 2015.
\n\n\n\n \n \n \"Simulating-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Allison:fk2,
  abstract = {The ability to make sense of cluttered auditory environments is convincingly demonstrated in the so-called cocktail party effect. This ability of a listener to separate a speech signal of interest from competing speech signals and background noise is greatly improved when they have normal binaural cues to the spatial location of the speaker. However, in most media applications, including virtual reality and telepresence, the audio information is impoverished. We hypothesized that a listener's spatial auditory attention could be simulated based on visual attention. Since interlocutors typically look at their conversational partner, we used gaze as an indicator of current conversational interest. We built a gaze-contingent display that modified the volume of the speakers' voices contingent on the current region of overt attention. We found that a rapid increase in amplification of the attended speaker combined with attenuation but not elimination of competing sounds (partial rather than absolute selection) was most natural and improved source recognition. In conclusion, audio gaze-contingent displays offer potential for simulating rich, natural social and other interactions in virtual environments.},
  annote = {The European Conference on Visual Perception (ECVP) 2015 took place August 23-27 on the campus of the University of Liverpool.},
  author = {Allison, R. S. and Vinnikov, M.},
  booktitle = {European Conference on Visual Perception, ECVP 2015},
  doi = {10.1177/0301006615598674},
  keywords = {Eye Movements & Tracking},
  pages = {81},
  publisher = {Perception},
  title = {Simulating spatial auditory attention in a gaze contingent display: The virtual cocktail party},
  volume = {44(S1)},
  year = {2015},
  url-1 = {https://doi.org/10.1177/0301006615598674}}
The ability to make sense of cluttered auditory environments is convincingly demonstrated in the so-called cocktail party effect. This ability of a listener to separate a speech signal of interest from competing speech signals and background noise is greatly improved when they have normal binaural cues to the spatial location of the speaker. However, in most media applications, including virtual reality and telepresence, the audio information is impoverished. We hypothesized that a listener's spatial auditory attention could be simulated based on visual attention. Since interlocutors typically look at their conversational partner, we used gaze as an indicator of current conversational interest. We built a gaze-contingent display that modified the volume of the speakers' voices contingent on the current region of overt attention. We found that a rapid increase in amplification of the attended speaker combined with attenuation but not elimination of competing sounds (partial rather than absolute selection) was most natural and improved source recognition. In conclusion, audio gaze-contingent displays offer potential for simulating rich, natural social and other interactions in virtual environments.
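A minimal sketch of the partial-selection gain rule described above (illustrative only: the gain values, smoothing constant, and Speaker type are assumptions, not the study's parameters):

from dataclasses import dataclass

ATTEND_GAIN = 1.0  # assumed: full volume for the currently fixated speaker
OTHER_GAIN = 0.3   # assumed: attenuate, but do not silence, competing voices
SMOOTH = 0.2       # assumed: low-pass factor giving a rapid but click-free ramp

@dataclass
class Speaker:
    id: int
    gain: float = OTHER_GAIN

def update_gains(speakers, gazed_id):
    """One gaze-tracker update of partial selection: ramp each speaker's
    volume toward amplified (if fixated) or attenuated (otherwise)."""
    for s in speakers:
        target = ATTEND_GAIN if s.id == gazed_id else OTHER_GAIN
        s.gain += SMOOTH * (target - s.gain)

Because OTHER_GAIN is nonzero, unattended voices remain audible, consistent with the finding that partial rather than absolute selection felt most natural.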
The effects of high frame rate on perception of 2-D and 3-D global coherent motion.
Fujii, Y., Allison, R. S., Shen, L., & Wilcox, L. M.
In Centre for Vision Research International Conference on Perceptual Organization, pages 55. York University, Toronto, June 23-26, 2015.
\n\n\n\n \n \n \"The-1\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Fujii:2015fk,
  abstract = {Digital technologies allow movies to be exhibited at frame rates much higher than the traditional 24 fps. High frame rate (HFR) movies are being released in theaters, and it is assumed that HFR will reduce artifacts and enhance the quality of motion in 2-D and 3-D media. The goal of this project was to assess this assumption empirically through basic measurements of motion perception. In a series of experiments we measured lateral (2-D) and in-depth (3-D) global motion coherence thresholds using random-dot patterns in a mirror stereoscope and a 3D projection system. The refresh rate of the display was fixed at 96 Hz, and we manipulated the flash protocol to create 96 (single flash), 48 (double flash) and 24 (quadruple flash) frames per second. Simulated linear velocity of the elements through space was equated in the 2-D and 3-D conditions. Conditions were randomly interleaved using the method of constant stimuli and a two-interval forced-choice procedure to measure the proportion of coherent elements required to reliably detect global motion. Results showed no consistent effect of flash protocol on coherence thresholds in either the 2-D or the 3-D conditions, in both the stereoscope and the 3D projection system. Our results show that while frame rate influences local 2-D motion processing, it has no apparent impact on global lateral, or in-depth, motion coherence perception. This indicates that improving the quality of the motion signal does not always enhance perception.},
  annote = {June 23-26, 2015, York University},
  author = {Fujii, Y. and Allison, Robert S. and Shen, L. and Wilcox, Laurie M.},
  booktitle = {Centre for Vision Research International Conference on Perceptual Organization},
  keywords = {Stereopsis},
  language = {en},
  month = {June 23-26, 2015},
  pages = {55},
  publisher = {York University, Toronto},
  title = {The effects of high frame rate on perception of 2-D and 3-D global coherent motion},
  url-1 = {http://dx.doi.org/10.1167/14.15.55},
  year = {2015}}
Digital technologies allow movies to be exhibited at frame rates much higher than the traditional 24 fps. High frame rate (HFR) movies are being released in theaters, and it is assumed that HFR will reduce artifacts and enhance the quality of motion in 2-D and 3-D media. The goal of this project was to assess this assumption empirically through basic measurements of motion perception. In a series of experiments we measured lateral (2-D) and in-depth (3-D) global motion coherence thresholds using random-dot patterns in a mirror stereoscope and a 3D projection system. The refresh rate of the display was fixed at 96 Hz, and we manipulated the flash protocol to create 96 (single flash), 48 (double flash) and 24 (quadruple flash) frames per second. Simulated linear velocity of the elements through space was equated in the 2-D and 3-D conditions. Conditions were randomly interleaved using the method of constant stimuli and a two-interval forced-choice procedure to measure the proportion of coherent elements required to reliably detect global motion. Results showed no consistent effect of flash protocol on coherence thresholds in either the 2-D or the 3-D conditions, in both the stereoscope and the 3D projection system. Our results show that while frame rate influences local 2-D motion processing, it has no apparent impact on global lateral, or in-depth, motion coherence perception. This indicates that improving the quality of the motion signal does not always enhance perception.
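A minimal sketch of the multi-flash presentation logic described above (function and variable names are illustrative assumptions): with the refresh fixed at 96 Hz, each content frame is simply repeated the appropriate number of times:

REFRESH_HZ = 96

def flashes_per_frame(content_fps):
    """How many refreshes each content frame occupies, so that 96, 48,
    or 24 fps content fills a fixed 96 Hz display."""
    assert REFRESH_HZ % content_fps == 0
    return REFRESH_HZ // content_fps  # 96 -> 1 (single), 48 -> 2 (double), 24 -> 4 (quadruple)

def display_sequence(frames, content_fps):
    """Expand content frames into the per-refresh presentation order."""
    n = flashes_per_frame(content_fps)
    return [f for f in frames for _ in range(n)]

This keeps refresh rate and mean luminance constant across conditions, so only the temporal sampling of the motion differs.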
inproceedings (4)
Evidence that viewers prefer higher frame rate film.
Wilcox, L. M., Allison, R. S., Helliker, J., Dunk, A., & Anthony, R. C.
In ACM Symposium on Applied Perception (paper published in a special issue of ACM TAP), September 13-14, 2015.
@inproceedings{Wilcox:2015ty,
  abstract = {High frame rate movie-making refers to the capture and projection of movies at frame rates several times higher than the traditional 24 frames per second. This higher frame rate theoretically improves the quality of motion portrayed in movies, and helps avoid motion blur, judder and other undesirable artefacts. However, there is considerable debate in the cinema industry regarding the acceptance of HFR content given anecdotal reports of hyper-realistic imagery that reveals too much set and costume detail. Despite the potential theoretical advantages, there has been little empirical investigation of the impact of high frame rate techniques on the viewer experience. In this study we use stereoscopic 3D content, filmed and projected at multiple frame rates (24, 48 and 60 fps), with shutter angles ranging from $90^{\circ}$ to $358^{\circ}$, to evaluate viewer preferences. In a paired-comparison paradigm we assessed preferences along a set of five attributes (realism, motion smoothness, blur/clarity, quality of depth and overall preference). The resulting data show a clear preference for higher frame rates, particularly when contrasting 24 fps with 48 or 60 fps. We found little impact of shutter angle on viewers' choices, with the exception of one measure (motion smoothness) for one clip type. These data are the first empirical evidence of the advantages afforded by high frame rate capture and presentation in a cinema context.},
  annote = {ACM SIGGRAPH Symposium on Applied Perception, September 13-14, 2015, Max Planck Institute for Biological Cybernetics, Tuebingen, Germany},
  author = {Wilcox, L. M. and Allison, R. S. and Helliker, J. and Dunk, A. and Anthony, R. C.},
  booktitle = {ACM Symposium on Applied Perception (paper published in a special issue of ACM TAP)},
  keywords = {Stereopsis},
  month = {September 13-14, 2015},
  title = {Evidence that viewers prefer higher frame rate film},
  year = {2015}}
High frame rate movie-making refers to the capture and projection of movies at frame rates several times higher than the traditional 24 frames per second. This higher frame rate theoretically improves the quality of motion portrayed in movies, and helps avoid motion blur, judder and other undesirable artefacts. However, there is considerable debate in the cinema industry regarding the acceptance of HFR content given anecdotal reports of hyper-realistic imagery that reveals too much set and costume detail. Despite the potential theoretical advantages, there has been little empirical investigation of the impact of high frame rate techniques on the viewer experience. In this study we use stereoscopic 3D content, filmed and projected at multiple frame rates (24, 48 and 60 fps), with shutter angles ranging from $90^{\circ}$ to $358^{\circ}$, to evaluate viewer preferences. In a paired-comparison paradigm we assessed preferences along a set of five attributes (realism, motion smoothness, blur/clarity, quality of depth and overall preference). The resulting data show a clear preference for higher frame rates, particularly when contrasting 24 fps with 48 or 60 fps. We found little impact of shutter angle on viewers' choices, with the exception of one measure (motion smoothness) for one clip type. These data are the first empirical evidence of the advantages afforded by high frame rate capture and presentation in a cinema context.
Evaluation of the Impact of High Frame Rates on Legibility in S3D Film.
Marianovski, M., Wilcox, L. M., & Allison, R. S.
In Proceedings of the ACM SIGGRAPH Symposium on Applied Perception (SAP '15), pages 67-73, September 13-14, 2015.
\n\n\n\n \n \n \"EvaluationPaper\n  \n \n \n \"Evaluation-1\n  \n \n \n \"Evaluation-2\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@inproceedings{Marianovski:2015yu,
  abstract = {There is growing interest in capturing and projecting movies at higher frame rates than the traditional 24 frames per second. Yet there has been little scientific assessment of the impact of higher frame rates (HFR) on the perceived quality of cinema content. Here we investigated the effect of frame rate, and associated variables (shutter angle and camera motion), on viewers' ability to discriminate letters in S3D movie clips captured by a professional film crew. The footage was filmed and projected at varying combinations of frame rate, camera speed and shutter angle. Our results showed that, overall, legibility improved with increased frame rate and reduced camera velocity. However, contrary to expectations, there was little effect of shutter angle on legibility. We also show that specific combinations of camera parameters can lead to dramatic reductions in legibility for localized regions in a scene.},
  annote = {ACM SIGGRAPH Symposium on Applied Perception, September 13-14, 2015, Max Planck Institute for Biological Cybernetics, Tuebingen, Germany},
  author = {Marianovski, M. and Wilcox, L. M. and Allison, R. S.},
  booktitle = {Proceedings of the ACM SIGGRAPH Symposium on Applied Perception},
  doi = {10.1145/2804408.2804411},
  keywords = {Stereopsis},
  month = {September 13-14, 2015},
  pages = {67-73},
  title = {Evaluation of the Impact of High Frame Rates on Legibility in S3D Film},
  url = {http://dl.acm.org/citation.cfm?id=2804408.2804411},
  url-1 = {https://doi.org/10.1145/2804408.2804411},
  volume = {SAP '15},
  year = {2015}}
There is growing interest in capturing and projecting movies at higher frame rates than the traditional 24 frames per second. Yet there has been little scientific assessment of the impact of higher frame rates (HFR) on the perceived quality of cinema content. Here we investigated the effect of frame rate, and associated variables (shutter angle and camera motion), on viewers' ability to discriminate letters in S3D movie clips captured by a professional film crew. The footage was filmed and projected at varying combinations of frame rate, camera speed and shutter angle. Our results showed that, overall, legibility improved with increased frame rate and reduced camera velocity. However, contrary to expectations, there was little effect of shutter angle on legibility. We also show that specific combinations of camera parameters can lead to dramatic reductions in legibility for localized regions in a scene.
Gait Assessment using the Kinect RGB-D Sensor.
Zhao, J., Bunn, F. E., Perron, J. M., Shen, E., & Allison, R. S.
In 37th Annual IEEE Engineering in Medicine and Biology Conference, pages 6679-6683, August 25-29, 2015.
\n\n\n\n \n \n \"Gait-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@inproceedings{Zhao:ve,
  abstract = {Patients with concussions, strokes, and neuromuscular diseases such as Parkinson's disease often have difficulty keeping balance and suffer from abnormal gait. Gait assessment conducted by a physician or therapist in the clinic is standard practice for assessing such injuries. However, this approach is subjective, leading to potential problems of unrepeatability, poor sensitivity and unreliability. To conduct the assessment objectively, a computer-based gait assessment system is designed and presented in this paper. The system performs assessments of dynamic balance and gait by analyzing the skeleton frames of a subject captured by the Microsoft Kinect RGB-D sensor. Results show that the proposed system effectively scores subjects.},
  annote = {37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, MiCo - Milano Conference Center, Milan, Italy, August 25-29, 2015},
  author = {Zhao, J. and Bunn, F. E. and Perron, J. M. and Shen, E. and Allison, R. S.},
  booktitle = {37th Annual IEEE Engineering in Medicine and Biology Conference},
  doi = {10.1109/EMBC.2015.7319925},
  keywords = {Optic flow & Self Motion (also Locomotion & Aviation)},
  month = {August 25-29, 2015},
  pages = {6679-6683},
  title = {Gait Assessment using the Kinect RGB-D Sensor},
  year = {2015},
  url-1 = {https://doi.org/10.1109/EMBC.2015.7319925}}
Patients with concussions, strokes, and neuromuscular diseases such as Parkinson's disease often have difficulty keeping balance and suffer from abnormal gait. Gait assessment conducted by a physician or therapist in the clinic is standard practice for assessing such injuries. However, this approach is subjective, leading to potential problems of unrepeatability, poor sensitivity and unreliability. To conduct the assessment objectively, a computer-based gait assessment system is designed and presented in this paper. The system performs assessments of dynamic balance and gait by analyzing the skeleton frames of a subject captured by the Microsoft Kinect RGB-D sensor. Results show that the proposed system effectively scores subjects.
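One example of the kind of objective measure such a system can extract from Kinect skeleton frames (a sketch under assumed data formats, not the paper's algorithm): step length estimated from per-frame 3D ankle positions.

import numpy as np

def step_lengths(ankle_left, ankle_right):
    """Estimate step lengths (metres) from two (n_frames, 3) arrays of
    ankle positions, taking peaks of inter-ankle separation as footfalls."""
    sep = np.linalg.norm(np.asarray(ankle_left) - np.asarray(ankle_right), axis=1)
    peaks = [i for i in range(1, len(sep) - 1)
             if sep[i] > sep[i - 1] and sep[i] > sep[i + 1]]  # local maxima ~ full stride
    return sep[peaks]

Summary statistics such as mean step length or left/right asymmetry computed this way give a repeatable score, in contrast to a purely visual clinical impression.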
Preference for motion and depth in 3D film.
Hartle, B., Lugtigheid, A. J., Kazimi, A., Allison, R. S., & Wilcox, L. M.
In Holliman, N. S., Woods, A. J., Favalora, G. E., & Kawai, T., editors, IS&T/SPIE Electronic Imaging 2015, Stereoscopic Displays and Applications XXVI, Proc. SPIE, volume 9391, pages 93910R, 1-10, February 8-12, 2015.
\n\n\n\n \n \n \"PreferencePaper\n  \n \n \n \"Preference-1\n  \n \n \n \"Preference-2\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@inproceedings{Hartle:2015rr,
  abstract = {While heuristics have evolved over decades for the capture and display of conventional 2D film, it is not clear that these always apply well to stereoscopic 3D (S3D) film. Recently there has been considerable research on viewer comfort in 3D media, but little attention has been paid to audience preferences for filming parameters in S3D. Here we evaluate observers' preferences for moving S3D film content in a theatre setting. Specifically, we examine preferences for combinations of camera motion (speed and direction) and stereoscopic depth (IA). The amount of IA had no impact on clip preferences regardless of the direction or speed of camera movement. However, preferences were influenced by camera speed, but only in the in-depth condition, where observers preferred faster motion. This initially seems contrary to previous research, which shows that slower speeds are more comfortable for viewing S3D content. Since most studies of visual comfort focus on visual fatigue, there may be different underlying influences. Given the apparent discrepancy between the visual comfort literature and the preference results reported here, it is clear that viewer response to S3D film is complex and that decisions made to enhance comfort may in some instances produce less appealing content.},
  annote = {Oral presentation, 10 February 2015, at IS\&T/SPIE Electronic Imaging 2015, 8-12 February, Hilton San Francisco, Union Square, San Francisco, California. Paper number 9391-23.},
  author = {Hartle, B. and Lugtigheid, Arthur J. and Kazimi, A. and Allison, R. S. and Wilcox, L. M.},
  booktitle = {IS\&T/SPIE Electronic Imaging 2015, Stereoscopic Displays and Applications XXVI, Proc. SPIE},
  doi = {10.1117/12.2079330},
  editor = {Nicolas S. Holliman and Andrew J. Woods and Gregg E. Favalora and Takashi Kawai},
  keywords = {Stereopsis},
  month = {Feb 8-12},
  pages = {93910R, 1-10},
  title = {Preference for motion and depth in 3D film},
  url = {http://percept.eecs.yorku.ca/papers/Hartle - SDA 2015.pdf},
  url-1 = {http://percept.eecs.yorku.ca/papers/Hartle%20-%20SDA%202015.pdf},
  url-2 = {https://doi.org/10.1117/12.2079330},
  volume = {9391},
  year = {2015}}
While heuristics have evolved over decades for the capture and display of conventional 2D film, it is not clear that these always apply well to stereoscopic 3D (S3D) film. Recently there has been considerable research on viewer comfort in 3D media, but little attention has been paid to audience preferences for filming parameters in S3D. Here we evaluate observers' preferences for moving S3D film content in a theatre setting. Specifically, we examine preferences for combinations of camera motion (speed and direction) and stereoscopic depth (IA). The amount of IA had no impact on clip preferences regardless of the direction or speed of camera movement. However, preferences were influenced by camera speed, but only in the in-depth condition, where observers preferred faster motion. This initially seems contrary to previous research, which shows that slower speeds are more comfortable for viewing S3D content. Since most studies of visual comfort focus on visual fatigue, there may be different underlying influences. Given the apparent discrepancy between the visual comfort literature and the preference results reported here, it is clear that viewer response to S3D film is complex and that decisions made to enhance comfort may in some instances produce less appealing content.
2014 (4)
article (9)
Evidence Against an Ecological Explanation of the Jitter Advantage for Vection.
Palmisano, S. A., Allison, R. S., Ash, A., Nakamura, S., & Apthorp, D.
Frontiers in Psychology Research, Perception Science, 5(Article 1297): 1-9. 2014.
\n\n\n\n \n \n \"EvidencePaper\n  \n \n \n \"Evidence-1\n  \n \n \n \"Evidence-2\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@article{palmisano_evidence_2014,
  abstract = {Visual-vestibular conflicts have been traditionally used to explain both perceptions of self-motion and experiences of motion sickness. However, sensory conflict theories have been challenged by findings that adding simulated viewpoint jitter to inducing displays enhances (rather than reduces or destroys) visual illusions of self-motion experienced by stationary observers. One possible explanation of this jitter advantage for vection is that jittering optic flows are more ecological than smooth displays. Despite the intuitive appeal of this idea, it has proven difficult to test. Here we compared subjective experiences generated by jittering and smooth radial flows when observers were exposed to either visual-only or multisensory self-motion stimulations. The display jitter (if present) was generated in real-time by updating the virtual computer-graphics camera position to match the observer's tracked head motions when treadmill walking or walking in place, or was a playback of these head motions when standing still. As expected, the (more naturalistic) treadmill walking and the (less naturalistic) walking in place were found to generate very different physical head jitters. However, contrary to the ecological account of the phenomenon, playbacks of treadmill walking and walking in place display jitter both enhanced visually induced illusions of self-motion to a similar degree (compared to smooth displays).},
  author = {Palmisano, S. A. and Allison, R. S. and Ash, April and Nakamura, Shinji and Apthorp, Deborah},
  doi = {10.3389/fpsyg.2014.01297},
  journal = {Frontiers in Psychology Research, Perception Science},
  keywords = {Optic flow & Self Motion (also Locomotion & Aviation)},
  number = {Article 1297},
  pages = {1-9},
  title = {Evidence Against an Ecological Explanation of the Jitter Advantage for Vection},
  url = {http://journal.frontiersin.org/Journal/10.3389/fpsyg.2014.01297/},
  url-1 = {http://journal.frontiersin.org/Journal/10.3389/fpsyg.2014.01297/abstract},
  url-2 = {https://doi.org/10.3389/fpsyg.2014.01297},
  urldate = {2014-10-28},
  volume = {5},
  year = {2014}}
Visual-vestibular conflicts have been traditionally used to explain both perceptions of self-motion and experiences of motion sickness. However, sensory conflict theories have been challenged by findings that adding simulated viewpoint jitter to inducing displays enhances (rather than reduces or destroys) visual illusions of self-motion experienced by stationary observers. One possible explanation of this jitter advantage for vection is that jittering optic flows are more ecological than smooth displays. Despite the intuitive appeal of this idea, it has proven difficult to test. Here we compared subjective experiences generated by jittering and smooth radial flows when observers were exposed to either visual-only or multisensory self-motion stimulations. The display jitter (if present) was generated in real-time by updating the virtual computer-graphics camera position to match the observer's tracked head motions when treadmill walking or walking in place, or was a playback of these head motions when standing still. As expected, the (more naturalistic) treadmill walking and the (less naturalistic) walking in place were found to generate very different physical head jitters. However, contrary to the ecological account of the phenomenon, playbacks of treadmill walking and walking in place display jitter both enhanced visually induced illusions of self-motion to a similar degree (compared to smooth displays).
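A minimal sketch of the live-versus-playback camera coupling described above (array shapes and function names are assumptions, not the study's code):

import numpy as np

def live_camera(base_pos, head_pos, head_ref):
    """Live conditions: offset the virtual camera by the observer's current
    tracked head displacement, yielding 'ecological' display jitter."""
    return np.asarray(base_pos) + (np.asarray(head_pos) - np.asarray(head_ref))

def playback_camera(base_pos, recorded_offsets, t):
    """Playback condition: replay a stored head-motion trace to a stationary
    observer, decoupling display jitter from current head movement."""
    return np.asarray(base_pos) + recorded_offsets[t % len(recorded_offsets)]

In the study, both kinds of recorded traces enhanced vection similarly when played back, which argues against the ecological account.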
Binocular contributions to linear vection.
Allison, R. S., Ash, A., & Palmisano, S. A.
Journal of Vision, 14(12): Article 5, 1-23. 2014.
\n\n\n\n \n \n \"BinocularPaper\n  \n \n \n \"Binocular-1\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@article{Allison:2014rm,
  abstract = {Compelling illusions of self motion, known as `vection', can be produced in a stationary observer by visual stimulation alone. The role of binocular vision and stereopsis in these illusions was explored in a series of three experiments. Previous research had provided evidence of stereoscopic enhancements for linear vection in depth (e.g. Palmisano, 1996; 2002). Here we examined the effects of binocular vision and stereopsis on linear vertical vection for the first time. Vertical vection was induced by the upward or downward translation of large stereoscopic surfaces. These surfaces were horizontally-oriented depth corrugations produced by disparity modulation of patterns of persistent or short lifetime dot elements. We found that binocular viewing of such surfaces significantly increased the magnitudes and decreased the onset delays of vertical vection. Experiments utilising short lifetime dot stereograms demonstrated that these particular binocular enhancements of vection were due to the motion of stereoscopically-defined features.},
  author = {Allison, R. S. and Ash, A. and Palmisano, S. A.},
  journal = {Journal of Vision},
  keywords = {Optic flow & Self Motion (also Locomotion & Aviation)},
  number = {12},
  pages = {Article 5, 1-23},
  title = {Binocular contributions to linear vection},
  url = {http://dx.doi.org/10.1167/14.12.5},
  url-1 = {https://doi.org/10.1167/14.12.5},
  volume = {14},
  year = {2014}}
Compelling illusions of self motion, known as `vection', can be produced in a stationary observer by visual stimulation alone. The role of binocular vision and stereopsis in these illusions was explored in a series of three experiments. Previous research had provided evidence of stereoscopic enhancements for linear vection in depth (e.g. Palmisano, 1996; 2002). Here we examined the effects of binocular vision and stereopsis on linear vertical vection for the first time. Vertical vection was induced by the upward or downward translation of large stereoscopic surfaces. These surfaces were horizontally-oriented depth corrugations produced by disparity modulation of patterns of persistent or short lifetime dot elements. We found that binocular viewing of such surfaces significantly increased the magnitudes and decreased the onset delays of vertical vection. Experiments utilising short lifetime dot stereograms demonstrated that these particular binocular enhancements of vection were due to the motion of stereoscopically-defined features.
Aftereffect of motion-in-depth based on binocular cues: effects of adaptation duration, interocular correlation and temporal correlation.
Sakano, Y., & Allison, R. S.
Journal of Vision, 14(8): article 21, 1-14. June 2014.
\n\n\n\n \n \n \"Aftereffect-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n\n\n\n
\n
@article{Sakano:fk,
  abstract = {There are at least two possible binocular cues to motion-in-depth, namely disparity change over time and interocular velocity differences. There has been significant controversy about their relative contributions to the perception of motion-in-depth. In the present study, we used the technique of selective adaptation to address this question. In Experiment 1, we found that adaptation to motion-in-depth depicted by temporally correlated random-dot stereograms, which contained coherent interocular velocity difference, produced motion aftereffect in the depth direction irrespective of the adaptors' interocular correlation for any adaptation duration tested. This suggests that coherent changing disparity does not contribute to motion-in-depth adaptation. Because the aftereffect duration did not saturate in the tested range of adaptation duration, it is unlikely that the lack of the effect of changing disparity was due to a ceiling effect. In Experiment 2, we used a novel adaptor that contained a unidirectional coherent interocular velocity difference signal and a bidirectional changing disparity signal that should not induce a motion aftereffect in depth. Following the adaptation, motion aftereffect in depth occurred in the opposite direction to the adaptor's motion-in-depth based on interocular velocity difference. Experiment 3 demonstrated that these results generalized in 12 untrained subjects. These experiments suggest that the contribution of interocular velocity difference to the perception of motion-in-depth is substantial, while that of changing disparity is very limited (if any), at least at the stages of direction-selective mechanisms subject to an aftereffect phenomenon.},
  author = {Sakano, Y. and Allison, R. S.},
  doi = {10.1167/14.8.21},
  journal = {Journal of Vision},
  keywords = {Motion in depth, Stereopsis},
  month = {06},
  number = {8},
  pages = {article 21, 1-14},
  title = {Aftereffect of motion-in-depth based on binocular cues: effects of adaptation duration, interocular correlation and temporal correlation},
  volume = {14},
  year = {2014},
  url-1 = {https://doi.org/10.1167/14.8.21}}
There are at least two possible binocular cues to motion-in-depth, namely disparity change over time and interocular velocity differences. There has been significant controversy about their relative contributions to the perception of motion-in-depth. In the present study, we used the technique of selective adaptation to address this question. In Experiment 1, we found that adaptation to motion-in-depth depicted by temporally correlated random-dot stereograms, which contained coherent interocular velocity difference, produced motion aftereffect in the depth direction irrespective of the adaptors' interocular correlation for any adaptation duration tested. This suggests that coherent changing disparity does not contribute to motion-in-depth adaptation. Because the aftereffect duration did not saturate in the tested range of adaptation duration, it is unlikely that the lack of the effect of changing disparity was due to a ceiling effect. In Experiment 2, we used a novel adaptor that contained a unidirectional coherent interocular velocity difference signal and a bidirectional changing disparity signal that should not induce a motion aftereffect in depth. Following the adaptation, motion aftereffect in depth occurred in the opposite direction to the adaptor's motion-in-depth based on interocular velocity difference. Experiment 3 demonstrated that these results generalized in 12 untrained subjects. These experiments suggest that the contribution of interocular velocity difference to the perception of motion-in-depth is substantial, while that of changing disparity is very limited (if any), at least at the stages of direction-selective mechanisms subject to an aftereffect phenomenon.
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A computational theory of da Vinci stereopsis.\n \n \n \n \n\n\n \n Tsirlin, I., Wilcox, L. M., & Allison, R. S.\n\n\n \n\n\n\n Journal of Vision, 14(7): article 5, 1-26. 2014.\n \n\n\n\n
\n\n\n\n \n \n \"A-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@article{Tsirlin:2014kq,\n\tabstract = {In binocular vision, occlusion of one object by another gives rise to monocular occlusions---regions visible only in one eye. Although binocular disparities cannot be computed for these regions, monocular occlusions can be precisely localized in depth and can induce the perception of illusory occluding surfaces. The phenomenon of depth perception from monocular occlusions, known as da Vinci stereopsis, is intriguing, but its mechanisms are not well understood. We first propose a theory of the mechanisms underlying da Vinci stereopsis that is based on the psychophysical and computational literature on monocular occlusions. It postulates, among other principles, that monocular areas are detected explicitly, and depth from occlusions is calculated based on constraints imposed by occlusion geometry. Next, we describe a biologically inspired computational model based on this theory that successfully reconstructs depth in a large range of stimuli and produces results similar to those described in the psychophysical literature. These results demonstrate that the proposed neural architecture could underpin da Vinci stereopsis and other stereoscopic percepts.},\n\tauthor = {Tsirlin, I. and Wilcox, L. M. and Allison, R. S.},\n\tdate-added = {2014-05-01 20:44:17 +0000},\n\tdate-modified = {2018-11-25 14:34:37 -0500},\n\tdoi = {10.1167/14.7.5},\n\tjournal = {Journal of Vision},\n\tkeywords = {Stereopsis},\n\tnumber = {7},\n\tpages = {article 5, 1-26},\n\ttitle = {A computational theory of da Vinci stereopsis},\n\turl-1 = {http://dx.doi.org/10.1167/14.7.5},\n\tvolume = {14},\n\tyear = {2014},\n\turl-1 = {https://doi.org/10.1167/14.7.5}}\n\n
\n
\n\n\n
\n In binocular vision, occlusion of one object by another gives rise to monocular occlusions—regions visible only in one eye. Although binocular disparities cannot be computed for these regions, monocular occlusions can be precisely localized in depth and can induce the perception of illusory occluding surfaces. The phenomenon of depth perception from monocular occlusions, known as da Vinci stereopsis, is intriguing, but its mechanisms are not well understood. We first propose a theory of the mechanisms underlying da Vinci stereopsis that is based on the psychophysical and computational literature on monocular occlusions. It postulates, among other principles, that monocular areas are detected explicitly, and depth from occlusions is calculated based on constraints imposed by occlusion geometry. Next, we describe a biologically inspired computational model based on this theory that successfully reconstructs depth in a large range of stimuli and produces results similar to those described in the psychophysical literature. These results demonstrate that the proposed neural architecture could underpin da Vinci stereopsis and other stereoscopic percepts.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Use of Night-Vision Devices for Aerial Forest Fire Detection.\n \n \n \n \n\n\n \n Tomkins, L., Benzeroual, T., Milner, A., Zacher, J. E., Ballagh, M., McAlpine, R., Doig, T., Jennings, S., Craig, G., & Allison, R. S.\n\n\n \n\n\n\n International Journal of Wildland Fire, 23(5): 678-685. 05 2014.\n \n\n\n\n
\n\n\n\n \n \n \"UsePaper\n  \n \n \n \"Use-1\n  \n \n \n \"Use-2\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@article{Tomkins:vn,\n\tabstract = {Night-time flight searches using night vision goggles have the potential to improve early aerial detection of forest fires, which could in turn improve suppression effectiveness and reduce costs. Two sets of flight trials explored this potential in an operational context. With a clear line of sight, fires could be seen from many kilometres away (on average 3584 m for controlled point sources and 6678 m for real fires). Observers needed to be nearer to identify a light as a potential source worthy of further investigation. The average discrimination distance, at which a source could be confidently determined to be a fire or other bright light source, was 1193 m (95\\% CI: 944 to 1442 m). The hit rate was 68\\% over the course of the controlled experiment, higher than expectations based on the use of small fire sources and novice observers. The hit rate showed improvement over time, likely because of observers becoming familiar with the task and terrain. Night vision goggles enable sensitive detection of small fires, including those that were very difficult to detect during daytime patrols. The results demonstrate that small fires can be detected and reliably discriminated at night using night vision goggles at distances comparable to those recorded for daytime aerial detection patrols.},\n\tauthor = {Tomkins, L. and Benzeroual, T. and Milner, A. and Zacher, J. E. and Ballagh, M. and McAlpine, R. and Doig, T. and Jennings, S. and Craig, G. and Allison, R. S.},\n\tdate-added = {2014-03-07 12:53:11 +0000},\n\tdate-modified = {2014-09-26 00:24:22 +0000},\n\tdoi = {10.1071/WF13042},\n\tjournal = {International Journal of Wildland Fire},\n\tkeywords = {Night Vision},\n\tmonth = {05},\n\tnumber = {5},\n\tpages = {678-685},\n\ttitle = {Use of Night-Vision Devices for Aerial Forest Fire Detection},\n\turl = {http://percept.eecs.yorku.ca/papers/Fire journal paper final.pdf},\n\turl-1 = {http://dx.doi.org/10.1071/WF13042},\n\tvolume = {23},\n\tyear = {2014},\n\turl-1 = {http://percept.eecs.yorku.ca/papers/Fire%20journal%20paper%20final.pdf},\n\turl-2 = {https://doi.org/10.1071/WF13042}}\n\n
\n
\n\n\n
\n Night-time flight searches using night vision goggles have the potential to improve early aerial detection of forest fires, which could in turn improve suppression effectiveness and reduce costs. Two sets of flight trials explored this potential in an operational context. With a clear line of sight, fires could be seen from many kilometres away (on average 3584 m for controlled point sources and 6678 m for real fires). Observers needed to be nearer to identify a light as a potential source worthy of further investigation. The average discrimination distance, at which a source could be confidently determined to be a fire or other bright light source, was 1193 m (95% CI: 944 to 1442 m). The hit rate was 68% over the course of the controlled experiment, higher than expectations based on the use of small fire sources and novice observers. The hit rate showed improvement over time, likely because of observers becoming familiar with the task and terrain. Night vision goggles enable sensitive detection of small fires, including those that were very difficult to detect during daytime patrols. The results demonstrate that small fires can be detected and reliably discriminated at night using night vision goggles at distances comparable to those recorded for daytime aerial detection patrols.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Interactions between cues to visual motion in depth.\n \n \n \n \n\n\n \n Howard, I. P., Fujii, Y., & Allison, R. S.\n\n\n \n\n\n\n Journal of Vision, 14(2): Article 14, 1-16. 2014.\n \n\n\n\n
\n\n\n\n \n \n \"InteractionsPaper\n  \n \n \n \"Interactions-1\n  \n \n \n \"Interactions-2\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n\n\n\n
\n
@article{Howard:uq,\n\tabstract = {Information about the motion in depth of an object along the midline of a stationary observer is provided by changes in image size (looming), changes in vergence produced by changes in binocular disparity of the images of the object, and changes in relative disparity between the moving object and a stationary object. Each of these cues was independently varied in the dichoptiscope, which is described in Howard, Fukuda, and Allison (2013). The stimuli were a small central dot and a textured surface moving to and fro in depth along the midline. Observers tracked the motion with the unseen hand. Image looming was normal or absent. The change in vergence was absent, normal, more than normal, or reversed relative to normal. Changing relative disparity between the moving stimulus and a stationary surface was present or absent. Changing vergence alone produced no motion in depth for the textured surface but it produced some motion of the dot. Looming alone produced strong motion in depth for the texture but not for the dot. When the direction of motion indicated by looming was opposite that indicated by changing relative disparity observers could use either cue. The cues dissociated rather than combined.},\n\tauthor = {Howard, I. P. and Fujii, Y. and Allison, R. S.},\n\tdate-added = {2013-12-13 01:25:16 +0000},\n\tdate-modified = {2016-01-03 03:21:39 +0000},\n\tdoi = {10.1167/14.2.14},\n\tjournal = {Journal of Vision},\n\tkeywords = {Stereopsis, Motion in depth},\n\tnumber = {2},\n\tpages = {Article 14, 1-16},\n\ttitle = {Interactions between cues to visual motion in depth},\n\turl = {http://jov.highwire.org/content/14/2/14.short},\n\turl-1 = {http://jov.highwire.org/content/14/2/14.short},\n\turl-2 = {http://dx.doi.org/10.1167/14.2.14},\n\tvolume = {14},\n\tyear = {2014},\n\turl-1 = {http://jov.highwire.org/content/14/2/14.short},\n\turl-2 = {https://doi.org/10.1167/14.2.14}}\n\n
\n
\n\n\n
\n Information about the motion in depth of an object along the midline of a stationary observer is provided by changes in image size (looming), changes in vergence produced by changes in binocular disparity of the images of the object, and changes in relative disparity between the moving object and a stationary object. Each of these cues was independently varied in the dichoptiscope, which is described in Howard, Fukuda, and Allison (2013). The stimuli were a small central dot and a textured surface moving to and fro in depth along the midline. Observers tracked the motion with the unseen hand. Image looming was normal or absent. The change in vergence was absent, normal, more than normal, or reversed relative to normal. Changing relative disparity between the moving stimulus and a stationary surface was present or absent. Changing vergence alone produced no motion in depth for the textured surface but it produced some motion of the dot. Looming alone produced strong motion in depth for the texture but not for the dot. When the direction of motion indicated by looming was opposite that indicated by changing relative disparity observers could use either cue. The cues dissociated rather than combined.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Vergence eye movements are not required for stereoscopic depth perception.\n \n \n \n \n\n\n \n Lugtigheid, A. J., Wilcox, L. M., Allison, R. S., & Howard, I. P.\n\n\n \n\n\n\n Proceedings of the Royal Society B, 281(1776): 20132118.1-20132118.7. 2014.\n \n\n\n\n
\n\n\n\n \n \n \"VergencePaper\n  \n \n \n \"Vergence-1\n  \n \n \n \"Vergence-2\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 3 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n\n\n\n
\n
@article{Lugtigheid:fk,\n\tabstract = {The brain receives disparate retinal input owing to the separation of the eyes, yet we usually perceive a single fused world. This is because of complex interactions between sensory and oculomotor processes that quickly act to reduce excessive retinal disparity. This implies a strong link between depth perception and fusion, but it is well established that stereoscopic depth percepts are also obtained from stimuli that produce double images. Surprisingly, the nature of depth percepts from such diplopic stimuli remains poorly understood. Specifically, despite long-standing debate it is unclear whether depth under diplopia is owing to the retinal disparity (directly), or whether the brain interprets signals from fusional vergence responses to large disparities (indirectly). Here, we addressed this question using stereoscopic afterimages, for which fusional vergence cannot provide retinal feedback about depth. We showed that observers could reliably recover depth sign and magnitude from diplopic afterimages. In addition, measuring vergence responses to large disparity stimuli revealed that the sign and magnitude of vergence responses are not systematically related to the target disparity, thus ruling out an indirect explanation of our results. Taken together, our research provides the first conclusive evidence that stereopsis is a direct process, even for diplopic targets.},\n\tauthor = {A. J. Lugtigheid and L. M. Wilcox and R. S. Allison and I. P. Howard},\n\tdate-added = {2013-11-26 12:25:39 +0000},\n\tdate-modified = {2014-09-26 02:31:01 +0000},\n\tdoi = {10.1098/rspb.2013.2118},\n\tjournal = {Proceedings of the Royal Society B},\n\tkeywords = {Stereopsis, Vergence},\n\tnumber = {1776},\n\tpages = {20132118.1-20132118.7},\n\ttitle = {Vergence eye movements are not required for stereoscopic depth perception},\n\turl = {http://rspb.royalsocietypublishing.org/content/281/1776/20132118.short},\n\turl-1 = {http://rspb.royalsocietypublishing.org/content/281/1776/20132118.short},\n\turl-2 = {http://dx.doi.org/10.1098/rspb.2013.2118},\n\tvolume = {281},\n\tyear = {2014},\n\turl-1 = {http://rspb.royalsocietypublishing.org/content/281/1776/20132118.short},\n\turl-2 = {https://doi.org/10.1098/rspb.2013.2118}}\n\n
\n
\n\n\n
\n The brain receives disparate retinal input owing to the separation of the eyes, yet we usually perceive a single fused world. This is because of complex interactions between sensory and oculomotor processes that quickly act to reduce excessive retinal disparity. This implies a strong link between depth perception and fusion, but it is well established that stereoscopic depth percepts are also obtained from stimuli that produce double images. Surprisingly, the nature of depth percepts from such diplopic stimuli remains poorly understood. Specifically, despite long-standing debate it is unclear whether depth under diplopia is owing to the retinal disparity (directly), or whether the brain interprets signals from fusional vergence responses to large disparities (indirectly). Here, we addressed this question using stereoscopic afterimages, for which fusional vergence cannot provide retinal feedback about depth. We showed that observers could reliably recover depth sign and magnitude from diplopic afterimages. In addition, measuring vergence responses to large disparity stimuli revealed that the sign and magnitude of vergence responses are not systematically related to the target disparity, thus ruling out an indirect explanation of our results. Taken together, our research provides the first conclusive evidence that stereopsis is a direct process, even for diplopic targets.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Shape constancy measured by a canonical-shape method.\n \n \n \n \n\n\n \n Howard, I. P., Fujii, Y., Allison, R. S., & Kirollos, R.\n\n\n \n\n\n\n Vision Research, 94: 33-40. 2014.\n \n\n\n\n
\n\n\n\n \n \n \"ShapePaper\n  \n \n \n \"Shape-1\n  \n \n \n \"Shape-2\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@article{Howard:kx,\n\tabstract = {Shape constancy is the ability to perceive that a shape remains the same when seen in different orientations. It has usually been measured by asking subjects to match a shape in the frontal plane with an inclined shape. But this method is subject to ambiguity. In Experiment 1 we used a canonical-shape method, which is not subject to ambiguity. Observers selected from a set of inclined trapezoids the one that most resembled a rectangle (the canonical shape). This task requires subjects to register the linear perspective of the image, and the distance and inclination of the stimulus. For inclinations of 30\\deg and 60\\deg and distances up to 1 m subjects were able to distinguish between a rectangle and a trapezoid tapered 0.4\\deg. As the distance of the stimulus increased to 3 m, linear perspective became increasingly perceived as taper. In Experiment 2 subjects matched the perceived inclination of an inclined rectangle, in which the only cue to inclination was disparity, to the perceived inclination of a rectangle with all depth cues present. As the distance of the stimulus increased, subjects increasingly underestimated the inclination of the rectangle.  We show that this pattern of inclination underestimation explains the distance-dependent bias in taper judgments found in Experiment 1.},\n\tauthor = {Howard, I. P. and Fujii, Y. and Allison, R. S. and Kirollos, R.},\n\tdate-added = {2013-11-11 19:31:02 +0000},\n\tdate-modified = {2014-09-26 00:36:20 +0000},\n\tdoi = {10.1016/j.visres.2013.10.021},\n\tjournal = {Vision Research},\n\tkeywords = {Stereopsis},\n\tpages = {33-40},\n\ttitle = {Shape constancy measured by a canonical-shape method},\n\turl = {http://percept.eecs.yorku.ca/papers/shape constancy preprint.pdf},\n\turl-1 = {http://dx.doi.org/10.1016/j.visres.2013.10.021},\n\tvolume = {94},\n\tyear = {2014},\n\turl-1 = {http://percept.eecs.yorku.ca/papers/shape%20constancy%20preprint.pdf},\n\turl-2 = {https://doi.org/10.1016/j.visres.2013.10.021}}\n\n
\n
\n\n\n
\n Shape constancy is the ability to perceive that a shape remains the same when seen in different orientations. It has usually been measured by asking subjects to match a shape in the frontal plane with an inclined shape. But this method is subject to ambiguity. In Experiment 1 we used a canonical-shape method, which is not subject to ambiguity. Observers selected from a set of inclined trapezoids the one that most resembled a rectangle (the canonical shape). This task requires subjects to register the linear perspective of the image, and the distance and inclination of the stimulus. For inclinations of 30 deg and 60 deg and distances up to 1 m subjects were able to distinguish between a rectangle and a trapezoid tapered 0.4 deg. As the distance of the stimulus increased to 3 m, linear perspective became increasingly perceived as taper. In Experiment 2 subjects matched the perceived inclination of an inclined rectangle, in which the only cue to inclination was disparity, to the perceived inclination of a rectangle with all depth cues present. As the distance of the stimulus increased, subjects increasingly underestimated the inclination of the rectangle. We show that this pattern of inclination underestimation explains the distance-dependent bias in taper judgments found in Experiment 1.\n
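As a rough illustration of why the taper signal weakens with distance, the following sketch (our own geometric example, not code from the study) computes the projected widths of the near and far horizontal edges of a rectangle inclined about a horizontal axis. Under a pinhole model, an edge parallel to the image plane has image width proportional to width/distance, so the far/near width ratio approaches 1 as viewing distance grows and there is less perspective taper to register.

import math

def edge_width_ratio(distance_m, height_m, inclination_deg):
    """Image-width ratio of far edge to near edge for a rectangle of the
    given physical height, inclined (top away) about a horizontal axis
    through its centre, viewed from the given distance (pinhole model)."""
    s = (height_m / 2) * math.sin(math.radians(inclination_deg))
    z_near, z_far = distance_m - s, distance_m + s
    return z_near / z_far   # far edge looks narrower by this factor

# Hypothetical stimulus: a 0.3 m tall rectangle inclined 60 deg.
for d in (1.0, 2.0, 3.0):
    print(f"{d:.0f} m: far/near width ratio = {edge_width_ratio(d, 0.3, 60):.3f}")
# The ratio creeps toward 1 with distance, i.e. the taper cue shrinks.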
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Effects of long-term exposure on sensitivity and comfort with stereoscopic displays.\n \n \n \n \n\n\n \n Stransky, D., Wilcox, L. M., & Allison, R. S.\n\n\n \n\n\n\n ACM Transactions on Applied Perception, 11(1): Article 2, 1-14. 2014.\n \n\n\n\n
\n\n\n\n \n \n \"EffectsPaper\n  \n \n \n \"Effects-1\n  \n \n \n \"Effects-2\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@article{Stransky:uq,\n\tabstract = {Stereoscopic 3D media has recently increased in appreciation and availability. This popularity has led to concerns over the health effects of habitual viewing of stereoscopic 3D content; concerns that are largely hypothetical. Here we examine the effects of repeated, long-term exposure to stereoscopic 3D in the workplace on several measures of stereoscopic sensitivity (discrimination, depth matching, and fusion limits) along with reported negative symptoms associated with viewing stereoscopic 3D. We recruited a group of adult stereoscopic 3D Industry Experts and compared their performance with observers who were i) inexperienced with stereoscopic 3D, ii) researchers who study stereopsis, and iii) vision researchers with little or no experimental stereoscopic experience. Unexpectedly we found very little difference between the four groups on all but the depth discrimination task, and the differences that did occur appear to reflect task-specific training or experience. Thus we found no positive or negative consequences of repeated and extended exposure to stereoscopic 3D in these populations.},\n\tauthor = {Debi Stransky and L. M. Wilcox and Robert S. Allison},\n\tdate-added = {2013-09-16 22:56:38 +0000},\n\tdate-modified = {2014-09-26 00:40:47 +0000},\n\tdoi = {10.1145/2536810},\n\tjournal = {{ACM} Transactions on Applied Perception},\n\tkeywords = {Stereopsis},\n\tnumber = {1},\n\tpages = {Article 2, 1-14},\n\ttitle = {Effects of long-term exposure on sensitivity and comfort with stereoscopic displays},\n\turl = {http://percept.eecs.yorku.ca/papers/a2-stransky.pdf},\n\turl-1 = {http://dx.doi.org/10.1145/2536810},\n\tvolume = {11},\n\tyear = {2014},\n\turl-1 = {http://percept.eecs.yorku.ca/papers/a2-stransky.pdf},\n\turl-2 = {https://doi.org/10.1145/2536810}}\n\n
\n
\n\n\n
\n Stereoscopic 3D media has recently increased in appreciation and availability. This popularity has led to concerns over the health effects of habitual viewing of stereoscopic 3D content; concerns that are largely hypothetical. Here we examine the effects of repeated, long-term exposure to stereoscopic 3D in the workplace on several measures of stereoscopic sensitivity (discrimination, depth matching, and fusion limits) along with reported negative symptoms associated with viewing stereoscopic 3D. We recruited a group of adult stereoscopic 3D Industry Experts and compared their performance with observers who were i) inexperienced with stereoscopic 3D, ii) researchers who study stereopsis, and iii) vision researchers with little or no experimental stereoscopic experience. Unexpectedly we found very little difference between the four groups on all but the depth discrimination task, and the differences that did occur appear to reflect task-specific training or experience. Thus we found no positive or negative consequences of repeated and extended exposure to stereoscopic 3D in these populations.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n inbook\n \n \n (1)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n .\n \n \n \n \n\n\n \n Allison, R. S., Wilcox, L. M., & Kazimi, A.\n\n\n \n\n\n\n Perceptual Artefacts, Suspension of Disbelief and Realism in Stereoscopic 3D Film, pages 149-160. Adler, D., Marchessault, J., & Obradovic, S., editor(s). University of Chicago Press/Intellect, 2014.\n \n\n\n\n
\n\n\n\n \n \n \"PerceptualPaper\n  \n \n \n \"Perceptual-1\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 2 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@inbook{allison_perceptual_2014,\n\tabstract = {Stereoscopic film has long held an allure as the ultimate in fidelity for cinema and, as such, been a goal for those seeking the most compelling illusion of reality. However, the fundamental and technical limitations of the medium introduce a number of artefacts and imperfections that affect viewer experience.  The renaissance of stereoscopic three-dimensional (S3D) film requires that film-makers revisit assumptions and conventions about factors that influence the visual appreciation and impact of their medium. This paper will discuss a variety of these issues from a perceptual standpoint and their implications for depth perception, visual comfort and sense of scale.  The impact of these perceptual artefacts on the suspension of disbelief and the creation of alternate realities is discussed, as is their deliberate use when artistic considerations demand breaks with realism. \nKeywords: Stereoscopic film, perception, suspension of disbelief, stereopsis, realism\n},\n\tauthor = {Allison, R. S. and Wilcox, L. M. and Kazimi, Ali},\n\tbooktitle = {{3D} Cinema and Beyond (chapter is reprinted from an article in Public)},\n\tdate-added = {2013-10-06 17:09:39 +0000},\n\tdate-modified = {2016-01-03 02:31:07 +0000},\n\teditor = {Adler, Dan and Marchessault, Janine and Obradovic, Sanja},\n\tisbn = {9781783200399},\n\tkeywords = {Stereopsis},\n\tpages = {149-160},\n\tpublisher = {University of Chicago {Press/Intellect}},\n\ttitle = {Perceptual Artefacts, Suspension of Disbelief and Realism in Stereoscopic {3D} Film},\n\turl = {https://percept.eecs.yorku.ca/papers/Public Journal.pdf},\n\tyear = {2014},\n\turl-1 = {http://www.press.uchicago.edu/ucp/books/book/distributed/Other/bo16816844.html}}\n\n
\n
\n\n\n
\n Stereoscopic film has long held an allure as the ultimate in fidelity for cinema and, as such, been a goal for those seeking the most compelling illusion of reality. However, the fundamental and technical limitations of the medium introduce a number of artefacts and imperfections that affect viewer experience. The renaissance of stereoscopic three-dimensional (S3D) film requires that film-makers revisit assumptions and conventions about factors that influence the visual appreciation and impact of their medium. This paper will discuss a variety of these issues from a perceptual standpoint and their implications for depth perception, visual comfort and sense of scale. The impact of these perceptual artefacts on the suspension of disbelief and the creation of alternate realities is discussed, as is their deliberate use when artistic considerations demand breaks with realism. Keywords: Stereoscopic film, perception, suspension of disbelief, stereopsis, realism \n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n incollection\n \n \n (10)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Illusory scene shearing during real and illusory self-rotations in roll.\n \n \n \n \n\n\n \n Palmisano, S. A., Allison, R. S., & Howard, I. P.\n\n\n \n\n\n\n In Asia-Pacific Conference on Vision, i-Perception, volume 5, pages 437. 2014.\n \n\n\n\n
\n\n\n\n \n \n \"IllusoryPaper\n  \n \n \n \"Illusory-1\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Palmisano:2014mz,\n\tabstract = {Contrary to long-held assumptions, perceived scene rigidity is not essential for visually induced illusions of self-motion (i.e. vection).  Roll vection can be induced by rotating a large homogeneous textured display relative to the upright observer.  Under these conditions, the continuous roll vection experienced is paradoxically accompanied by maximum perceived self-tilts of less than 20 degrees (e.g. Howard, Cheung & Landolt, 1989).  By contrast, Ian Howard's fully furnished tumbling room apparatus can induce highly compelling illusions of 360 degree (i.e. head-over-heels) self-rotation.  We have found that both real and illusory tumbling in his room are accompanied by dramatic illusory scene distortions (scenery near the observer's fixation location sometimes leads and other times lags their more peripheral scenery).  The fact that these scene distortion and self-motion illusions co-occur so successfully is both intriguing and a major challenge to existing theories of self-motion perception.  Our research has eliminated explanations of these illusory scene shearing effects based on eye-movements, distance misperception, peripheral aliasing, differential motion sensitivity and adaptation.  Intriguingly we have consistently found that perceived head-over-heels tumbling (either real or illusory) is the essential prerequisite for the scene shearing illusion.},\n\tannote = {APCV 2014 July 2014},\n\tauthor = {Palmisano, S. A. and Allison, R. S. and Howard, I. P.},\n\tbooktitle = {Asia-Pacific Conference on Vision, i-Perception},\n\tdate-added = {2014-09-08 14:16:27 +0000},\n\tdate-modified = {2014-09-09 18:56:18 +0000},\n\tjournal = {i-Perception},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tnumber = {4},\n\tpages = {437},\n\ttitle = {Illusory scene shearing during real and illusory self-rotations in roll},\n\turl = {http://i-perception.perceptionweb.com/fulltext/i05/apcv14s.pdf},\n\turl-1 = {http://i-perception.perceptionweb.com/fulltext/i05/apcv14s.pdf},\n\tvolume = {5},\n\tyear = {2014},\n\turl-1 = {http://i-perception.perceptionweb.com/fulltext/i05/apcv14s.pdf}}\n\n
\n
\n\n\n
\n Contrary to long-held assumptions, perceived scene rigidity is not essential for visually induced illusions of self-motion (i.e. vection). Roll vection can be induced by rotating a large homogeneous textured display relative to the upright observer. Under these conditions, the continuous roll vection experienced is paradoxically accompanied by maximum perceived self-tilts of less than 20 degrees (e.g. Howard, Cheung & Landolt, 1989). By contrast, Ian Howard's fully furnished tumbling room apparatus can induce highly compelling illusions of 360 degree (i.e. head-over-heels) self-rotation. We have found that both real and illusory tumbling in his room are accompanied by dramatic illusory scene distortions (scenery near the observer's fixation location sometimes leads and other times lags their more peripheral scenery). The fact that these scene distortion and self-motion illusions co-occur so successfully is both intriguing and a major challenge to existing theories of self-motion perception. Our research has eliminated explanations of these illusory scene shearing effects based on eye-movements, distance misperception, peripheral aliasing, differential motion sensitivity and adaptation. Intriguingly we have consistently found that perceived head-over-heels tumbling (either real or illusory) is the essential prerequisite for the scene shearing illusion.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Motion discrimination of high frame rate movie.\n \n \n \n \n\n\n \n Shen, L., Allison, R. S., Wilcox, L. M., & Fujii, Y.\n\n\n \n\n\n\n In OSA Fall Vision 2014, Journal of Vision, volume 14, pages article 57. 2014.\n \n\n\n\n
\n\n\n\n \n \n \"Motion-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Shen:2014rz,\n\tabstract = {Recently high-frame rate movie technology has received significant technical and artistic attention due to its potential to present higher-fidelity motion to cinemagoers. Speed discrimination is a well-studied psychophysical task used to quantify sensitivity to motion. We used speed discrimination as a measure of the effects of frame presentation protocol on motion perception. An interleaved staircase procedure was used with a 2-interval-forced-choice task to measure discrimination thresholds for 7 subjects. The independent variables were frame rate and motion speed for a high-contrast line target. Flash (refresh) rate was fixed at 96 Hz and different frame rates were produced by updating the frame every refresh (single flash, 96 fps), alternate refresh (double flash, 48 fps) or every fourth refresh (quadruple flash, 24 fps). Stimuli were presented binocularly on CRT displays in a Wheatstone stereoscope but the presentation protocols approximate standard film presentation protocols. Five velocities (4deg/s, 8deg/s, 16deg/s, 32deg/s and 64deg/s) were tested in separate blocks of trials; within a block staircases for the three flash protocols were randomly interleaved. The results show that at speeds greater than 16deg/s, discrimination thresholds decrease with increasing frame rate (or equivalently, increase with number of repeated frames for a given flash protocol). This improvement likely reflects sensitivity to motion artifacts at low frame rates, when frames are repeated multiple times. Thus this study confirms that observers are sensitive to the improved fidelity offered by higher frame rates over the range considered for high frame rate cinema (24--96 fps). },\n\tannote = {Fall Vision Meeting Oct 10-12, 2014, Philadelphia PA},\n\tauthor = {Shen, L. and Allison, Robert S. and Wilcox, Laurie M. and Fujii, Y.},\n\tbooktitle = {OSA Fall Vision 2014, Journal of Vision},\n\tdate-added = {2014-09-08 14:16:00 +0000},\n\tdate-modified = {2015-01-05 00:02:35 +0000},\n\tdoi = {10.1167/14.15.57},\n\tjournal = {Journal of Vision},\n\tkeywords = {Stereopsis},\n\tlanguage = {en},\n\tnumber = {15},\n\tpages = {article 57},\n\ttitle = {Motion discrimination of high frame rate movie},\n\turl-1 = {http://dx.doi.org/10.1167/14.15.57},\n\tvolume = {14},\n\tyear = {2014},\n\turl-1 = {https://doi.org/10.1167/14.15.57}}\n\n
\n
\n\n\n
\n Recently high-frame rate movie technology has received significant technical and artistic attention due to its potential to present higher-fidelity motion to cinemagoers. Speed discrimination is a well-studied psychophysical task used to quantify sensitivity to motion. We used speed discrimination as a measure of the effects of frame presentation protocol on motion perception. An interleaved staircase procedure was used with a 2-interval-forced-choice task to measure discrimination thresholds for 7 subjects. The independent variables were frame rate and motion speed for a high-contrast line target. Flash (refresh) rate was fixed at 96 Hz and different frame rates were produced by updating the frame every refresh (single flash, 96 fps), alternate refresh (double flash, 48 fps) or every fourth refresh (quadruple flash, 24 fps). Stimuli were presented binocularly on CRT displays in a Wheatstone stereoscope but the presentation protocols approximate standard film presentation protocols. Five velocities (4deg/s, 8deg/s, 16deg/s, 32deg/s and 64deg/s) were tested in separate blocks of trials; within a block staircases for the three flash protocols were randomly interleaved. The results show that at speeds greater than 16deg/s, discrimination thresholds decrease with increasing frame rate (or equivalently, increase with number of repeated frames for a given flash protocol). This improvement likely reflects sensitivity to motion artifacts at low frame rates, when frames are repeated multiple times. Thus this study confirms that observers are sensitive to the improved fidelity offered by higher frame rates over the range considered for high frame rate cinema (24–96 fps). \n
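The flash protocols described above are easy to state in code. This sketch is an illustrative reconstruction under the parameters given in the abstract (96 Hz refresh; single, double, and quadruple flash), not the authors' software: it maps a refresh stream onto 96/48/24 fps by updating the motion frame every 1st, 2nd, or 4th refresh, and reports the spatial jump per distinct frame at a given speed. Larger jumps at lower frame rates are the motion artifacts the discrimination results point to.

REFRESH_HZ = 96  # fixed display refresh rate from the abstract

def frame_indices(flash_count, n_refreshes):
    """Frame shown on each refresh when every frame is flashed
    flash_count times (1 -> 96 fps, 2 -> 48 fps, 4 -> 24 fps)."""
    return [r // flash_count for r in range(n_refreshes)]

def step_per_frame(speed_deg_s, flash_count):
    """Spatial displacement between successive distinct frames (deg)."""
    fps = REFRESH_HZ / flash_count
    return speed_deg_s / fps

for flashes, label in [(1, "single/96 fps"), (2, "double/48 fps"), (4, "quadruple/24 fps")]:
    print(label, frame_indices(flashes, 8),
          f"step at 32 deg/s = {step_per_frame(32, flashes):.3f} deg")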
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n The effects of frame rate on 2-D and 3-D global motion processing.\n \n \n \n \n\n\n \n Fujii, Y., Allison, R. S., Shen, L., & Wilcox, L. M.\n\n\n \n\n\n\n In OSA Fall Vision 2014, Journal of Vision, volume 14, pages article 55. 2014.\n \n\n\n\n
\n\n\n\n \n \n \"The-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Fujii:2014ty,\n\tabstract = {Recent advances in film capture and display technologies have made it possible to use frame rates much higher than the 24fps convention. It is assumed that high frame rates (HFR) will enhance perception of motion in 2-D and 3-D media. The goal of this project is to assess this assumption empirically. In a series of experiments we measured lateral (2-D) and in depth (3-D) global motion coherence thresholds using random-dot patterns in a mirror stereoscope and a 3D projection system. The refresh rate of the display was fixed at 96 Hz, and we manipulated the flash protocol to create 96fps (single flash), 48fps (double flash) and 24fps (quadruple flash). Simulated linear velocity of the elements through space was equated in the 2-D and 3-D conditions. Conditions were randomly interleaved using the method of constant stimuli and a two-interval forced-choice procedure to measure the proportion of coherent elements required to reliably detect global motion. Results from six observers showed no consistent effect of flash protocol on coherence thresholds in either the 2-D or the 3-D test conditions. Interestingly, the 3-D task was considerably harder for all observers and required longer viewing time. Even with the increased viewing time, thresholds were double those seen in the lateral motion condition, despite the fact that the velocity of element motion through space was the same in the two conditions. Our results show that while frame rate influences local 2-D motion processing, it has no apparent impact on lateral, or in depth, global motion perception.},\n\tannote = {Fall Vision Meeting Oct 10-12, 2014, Philadelphia PA},\n\tauthor = {Fujii, Y. and Allison, Robert S. and Shen, L. and Wilcox, Laurie M.},\n\tbooktitle = {OSA Fall Vision 2014, Journal of Vision},\n\tdate-added = {2014-09-08 14:16:00 +0000},\n\tdate-modified = {2015-01-05 00:01:32 +0000},\n\tdoi = {10.1167/14.15.55},\n\tjournal = {Journal of Vision},\n\tkeywords = {Stereopsis},\n\tlanguage = {en},\n\tnumber = {15},\n\tpages = {article 55},\n\ttitle = {The effects of frame rate on 2-D and 3-D global motion processing},\n\turl-1 = {http://dx.doi.org/10.1167/14.15.55},\n\tvolume = {14},\n\tyear = {2014},\n\turl-1 = {https://doi.org/10.1167/14.15.55}}\n\n
\n
\n\n\n
\n Recent advances in film capture and display technologies have made it possible to use frame rates much higher than the 24fps convention. It is assumed that high frame rates (HFR) will enhance perception of motion in 2-D and 3-D media. The goal of this project is to assess this assumption empirically. In a series of experiments we measured lateral (2-D) and in depth (3-D) global motion coherence thresholds using random-dot patterns in a mirror stereoscope and a 3D projection system. The refresh rate of the display was fixed at 96 Hz, and we manipulated the flash protocol to create 96fps (single flash), 48fps (double flash) and 24fps (quadruple flash). Simulated linear velocity of the elements through space was equated in the 2-D and 3-D conditions. Conditions were randomly interleaved using the method of constant stimuli and a two-interval forced-choice procedure to measure the proportion of coherent elements required to reliably detect global motion. Results from six observers showed no consistent effect of flash protocol on coherence thresholds in either the 2-D or the 3-D test conditions. Interestingly, the 3-D task was considerably harder for all observers and required longer viewing time. Even with the increased viewing time, thresholds were double those seen in the lateral motion condition, despite the fact that the velocity of element motion through space was the same in the two conditions. Our results show that while frame rate influences local 2-D motion processing, it has no apparent impact on lateral, or in depth, global motion perception.\n
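A motion-coherence stimulus of the kind used here is straightforward to sketch. The fragment below is a generic illustration with made-up parameters rather than the study's actual code: a chosen proportion of random dots is displaced coherently in the signal direction on each update while the remainder are replotted at random positions, and the coherence threshold is the smallest signal proportion supporting reliable direction judgments.

import numpy as np

rng = np.random.default_rng(0)

def update_dots(xy, coherence, step, field=1.0):
    """One update of a 2-D global-motion display: a random `coherence`
    fraction of dots steps rightward by `step`; the remainder are
    replotted at random positions (noise dots). Positions wrap in [0, field)."""
    n = len(xy)
    signal = rng.random(n) < coherence
    xy = xy.copy()
    xy[signal, 0] = (xy[signal, 0] + step) % field
    xy[~signal] = rng.random((int((~signal).sum()), 2)) * field
    return xy

dots = rng.random((200, 2))                            # 200 dots in a unit field
dots = update_dots(dots, coherence=0.15, step=0.01)    # one display update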
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Active task measurements of tolerance to stereoscopic 3D image distortion.\n \n \n \n \n\n\n \n Allison, R. S., Benzeroual, K., & Wilcox, L. M.\n\n\n \n\n\n\n In Asia-Pacific Conference on Vision, i-Perception, volume 5, pages 377. 2014.\n \n\n\n\n
\n\n\n\n \n \n \"ActivePaper\n  \n \n \n \"Active-1\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Allison:2014qd,\n\tabstract = {An intriguing aspect of picture perception is the viewer's tolerance to modest variation in viewing position, perspective, and display size. In stereoscopic media, additional parameters control the relative position and orientation of the cameras.  The amount of estimated depth from disparity can be obtained trigonometrically; however, perceived depth in complex scenes differs from geometrical predictions. It is not clear to what extent these differences are due to cognitive as opposed to perceptual factors. We recorded stereoscopic movies of an indoor scene with a range of inter-axial (IA) camera separation between 3 and 95 mm and displayed them on a range of screen sizes (all subtending 36 deg). Participants reproduced the depth between pairs of objects in the scene using reaching (3.5'' screen) or blind walking (54''  and 22''  screens). The effect of IA and screen size (and thus distance) was much smaller than predicted suggesting that observers compensate for distortion in the portrayed scene. These results mirror those obtained previously with depth magnitude estimation (Benzeroual et al., ECVP 2011). We conclude that multiple realistic depth cues drive normalization of perceived depth from binocular disparity and that these processes are not specific to either `perception' or `action' oriented tasks. },\n\tannote = {APCV 2014 July 2014},\n\tauthor = {Allison, R. S. and Benzeroual, K. and Wilcox, L. M.},\n\tbooktitle = {Asia-Pacific Conference on Vision, i-Perception},\n\tdate-added = {2014-09-08 14:16:00 +0000},\n\tdate-modified = {2014-09-09 18:55:51 +0000},\n\tjournal = {i-Perception},\n\tkeywords = {Stereopsis},\n\tnumber = {4},\n\tpages = {377},\n\ttitle = {Active task measurements of tolerance to stereoscopic 3D image distortion},\n\turl = {http://i-perception.perceptionweb.com/fulltext/i05/apcv14a.pdf},\n\turl-1 = {http://i-perception.perceptionweb.com/fulltext/i05/apcv14a.pdf},\n\tvolume = {5},\n\tyear = {2014},\n\turl-1 = {http://i-perception.perceptionweb.com/fulltext/i05/apcv14a.pdf}}\n\n
\n
\n\n\n
\n An intriguing aspect of picture perception is the viewer's tolerance to modest variation in viewing position, perspective, and display size. In stereoscopic media, additional parameters control the relative position and orientation of the cameras. The amount of estimated depth from disparity can be obtained trigonometrically; however, perceived depth in complex scenes differs from geometrical predictions. It is not clear to what extent these differences are due to cognitive as opposed to perceptual factors. We recorded stereoscopic movies of an indoor scene with a range of inter-axial (IA) camera separation between 3 and 95 mm and displayed them on a range of screen sizes (all subtending 36 deg). Participants reproduced the depth between pairs of objects in the scene using reaching (3.5'' screen) or blind walking (54'' and 22'' screens). The effect of IA and screen size (and thus distance) was much smaller than predicted suggesting that observers compensate for distortion in the portrayed scene. These results mirror those obtained previously with depth magnitude estimation (Benzeroual et al., ECVP 2011). We conclude that multiple realistic depth cues drive normalization of perceived depth from binocular disparity and that these processes are not specific to either `perception' or `action' oriented tasks. \n
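The trigonometric prediction mentioned above follows from similar triangles. As a minimal sketch (our own worked example; the helper and its parameter values are hypothetical, not from the paper): for on-screen parallax p, viewing distance D, and interocular separation I, a fused point is predicted at distance z = D·I/(I − p), i.e. at a depth of D·p/(I − p) relative to the screen plane. Showing the same parallax on screens at different distances therefore changes the geometrically predicted depth, which is what makes the observed compensation notable.

def predicted_depth(parallax_m, viewing_distance_m, iod_m=0.064):
    """Geometric depth of a point relative to the screen plane.
    Positive (uncrossed) parallax -> behind the screen; parallax must be
    smaller than the interocular distance for a finite intersection."""
    if parallax_m >= iod_m:
        raise ValueError("parallax at or beyond interocular distance: no finite depth")
    return viewing_distance_m * parallax_m / (iod_m - parallax_m)

# Same 10 mm of uncrossed parallax viewed on a near vs. a far screen:
print(predicted_depth(0.010, 0.5))   # ~0.093 m behind a 0.5 m screen
print(predicted_depth(0.010, 2.0))   # ~0.370 m behind a 2.0 m screen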
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Size matters: Perceived depth magnitude varies with stimulus height.\n \n \n \n \n\n\n \n Tsirlin, I., Wilcox, L., & Allison, R.\n\n\n \n\n\n\n In Vision Sciences Society Annual Meeting, Journal of Vision, volume 14, pages 977. 2014.\n \n\n\n\n
\n\n\n\n \n \n \"SizePaper\n  \n \n \n \"Size-1\n  \n \n \n \"Size-2\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Tsirlin22082014,\n\tabstract = {Stereoscopic acuity is known to vary with the overall size and width of the target. Recently, Tsirlin et al. (2012) suggested that perceived depth magnitude from stereopsis might also depend on the vertical extent of the stimulus. To test this hypothesis we compared perceived depth using small discs versus long bars with equivalent width and disparity. We used three estimation techniques. The first two, a virtual ruler and a touch-sensor (for haptic estimates), required that observers make quantitative judgements of depth differences between objects. The third method was a conventional disparity probe. This last technique, while often used for depth estimation, is a measure of disparity matching rather than quantitative depth perception. We found that depth estimates collected using the virtual ruler and the touch-sensor were significantly larger for the bar stimuli than for the disc stimuli. The disparity probe method yielded the same disparity estimates for both types of stimulus, which was not surprising given that they had the same relative disparity. In a second experiment, we measured perceived depth, using the virtual ruler, as a function of the height of a thin bar. In agreement with the first experiment, we found that perceived depth increased with increasing bar height. The dependence of perceived depth on the height of the stimulus is likely the result of the integration of disparity along the vertical edges, which enhances the reliability of depth estimation. The observed reduction in the magnitude of depth estimates for less reliable disparity signals may reflect a reweighting of depth cues or the expression of a bias towards small disparities. Our results also underscore the often-overlooked difference between measurements of depth and disparity, as the effect of target height was obscured when the disparity probe was used. Meeting abstract presented at VSS 2014},\n\tannote = {Vision Sciences Society 2014, St. Petersburg},\n\tauthor = {Tsirlin, Inna and Wilcox, Laurie and Allison, Robert},\n\tbooktitle = {Vision Sciences Society Annual Meeting, Journal of Vision},\n\tdate-modified = {2015-01-05 00:03:19 +0000},\n\tdoi = {10.1167/14.10.977},\n\tjournal = {Journal of Vision},\n\tkeywords = {Stereopsis},\n\tnumber = {10},\n\tpages = {977},\n\ttitle = {Size matters: Perceived depth magnitude varies with stimulus height},\n\turl = {http://www.journalofvision.org/content/14/10/977.abstract},\n\turl-1 = {http://www.journalofvision.org/content/14/10/977.abstract},\n\turl-2 = {http://dx.doi.org/10.1167/14.10.977},\n\tvolume = {14},\n\tyear = {2014},\n\turl-1 = {http://www.journalofvision.org/content/14/10/977.abstract},\n\turl-2 = {https://doi.org/10.1167/14.10.977}}\n\n
\n
\n\n\n
\n Stereoscopic acuity is known to vary with the overall size and width of the target. Recently, Tsirlin et al. (2012) suggested that perceived depth magnitude from stereopsis might also depend on the vertical extent of the stimulus. To test this hypothesis we compared perceived depth using small discs versus long bars with equivalent width and disparity. We used three estimation techniques. The first two, a virtual ruler and a touch-sensor (for haptic estimates), required that observers make quantitative judgements of depth differences between objects. The third method was a conventional disparity probe. This last technique, while often used for depth estimation, is a measure of disparity matching rather than quantitative depth perception. We found that depth estimates collected using the virtual ruler and the touch-sensor were significantly larger for the bar stimuli than for the disc stimuli. The disparity probe method yielded the same disparity estimates for both types of stimulus, which was not surprising given that they had the same relative disparity. In a second experiment, we measured perceived depth, using the virtual ruler, as a function of the height of a thin bar. In agreement with the first experiment, we found that perceived depth increased with increasing bar height. The dependence of perceived depth on the height of the stimulus is likely the result of the integration of disparity along the vertical edges, which enhances the reliability of depth estimation. The observed reduction in the magnitude of depth estimates for less reliable disparity signals may reflect a reweighting of depth cues or the expression of a bias towards small disparities. Our results also underscore the often-overlooked difference between measurements of depth and disparity, as the effect of target height was obscured when the disparity probe was used. Meeting abstract presented at VSS 2014\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Howard's Devices.\n \n \n \n \n\n\n \n Allison, R. S.\n\n\n \n\n\n\n In 37th European Conference on Visual Perception, volume 43 (Suppl.), pages 3. 2014.\n \n\n\n\n
\n\n\n\n \n \n \"Howard'sPaper\n  \n \n \n \"Howard's-1\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Allison:2014lq,\n\tabstract = {Ian Howard loved optomechanical or electromechanical solutions for experimental\nstimuli and apparatus, an approach that is rare nowadays. Sometimes the device was\nthe stimulus itself, as with the tumbling room, and in others the electromechanical\nsystem moved conventional stimuli, as in the dichoptiscope. This approach often\nallowed for precision, realism and fidelity not possible with computer-generated\ndisplays although it required skill at mechanical design and construction. Having\ncollaborated with Ian on building several devices, I will review Ian's approach to the\ndesign of such apparatus, in particular highlighting some notable devices used to study\nbinocular vision. I will also discuss what we can learn from Ian's approach in the light\nof new rapid-prototyping and manufacturing technologies for producing precise and\neasily constructed mechanical devices.},\n\tauthor = {Allison, R. S.},\n\tbooktitle = {37th European Conference on Visual Perception},\n\tdate-added = {2014-08-28 17:08:35 +0000},\n\tdate-modified = {2014-09-09 18:54:29 +0000},\n\tjournal = {Perception},\n\tkeywords = {Misc.},\n\tpages = {3},\n\ttitle = {Howard's Devices},\n\turl = {http://www.perceptionweb.com/abstract.cgi?id=v1424030},\n\turl-1 = {http://www.perceptionweb.com/abstract.cgi?id=v1424030},\n\tvolume = {43 (Suppl.)},\n\tyear = {2014},\n\turl-1 = {http://www.perceptionweb.com/abstract.cgi?id=v1424030}}\n\n
\n
\n\n\n
\n Ian Howard loved optomechanical or electromechanical solutions for experimental stimuli and apparatus, an approach that is rare nowadays. Sometimes the device was the stimulus itself, as with the tumbling room, and in others the electromechanical system moved conventional stimuli, as in the dichoptiscope. This approach often allowed for precision, realism and fidelity not possible with computer-generated displays although it required skill at mechanical design and construction. Having collaborated with Ian on building several devices, I will review Ian's approach to the design of such apparatus, in particular highlighting some notable devices used to study binocular vision. I will also discuss what we can learn from Ian's approach in the light of new rapid-prototyping and manufacturing technologies for producing precise and easily constructed mechanical devices.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A celebration of the life and scientific work of Ian Howard.\n \n \n \n \n\n\n \n Rogers, B. J., Allison, R. S., & Palmisano, S. A.\n\n\n \n\n\n\n In 37th European Conference on Visual Perception, volume 43 (Suppl.), pages 2. 2014.\n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n \n \"A-1\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Allison:kx,\n\tabstract = {Ian Porteus Howard (1927-2013) had a remarkable academic career spanning over 60 years that started with his initial appointment at the University of Durham in 1952. He is probably best known for his outstanding books -- Human Spatial Orientation (1966) (with Brian Templeton), through Human Visual Orientation (1982), Binocular Vision and Stereopsis (1995), the 2 volumes of Seeing in Depth (2002) and finally the 3 volumes of Perceiving in Depth (2012). Ian was also a talented experimentalist and the creator and builder of many novel pieces of experimental equipment including his rotating sphere and rotating room. Over the six decades he worked on a wide variety of research topics together with many graduate students, post-docs and researchers from Canada, USA, UK, Japan and Australia.},\n\tauthor = {Rogers, B. J. and Allison, R. S. and Palmisano, S. A.},\n\tbooktitle = {37th European Conference on Visual Perception},\n\tdate-added = {2014-08-28 17:08:35 +0000},\n\tdate-modified = {2014-09-09 18:55:24 +0000},\n\tjournal = {Perception},\n\tkeywords = {Misc.},\n\tpages = {2},\n\ttitle = {A celebration of the life and scientific work of Ian Howard},\n\turl = {http://www.perceptionweb.com/abstract.cgi?id=v1424500},\n\turl-1 = {http://www.perceptionweb.com/abstract.cgi?id=v1424500},\n\tvolume = {43 (Suppl.)},\n\tyear = {2014},\n\turl-1 = {http://www.perceptionweb.com/abstract.cgi?id=v1424500}}\n\n
\n
\n\n\n
\n Ian Porteus Howard (1927-2013) had a remarkable academic career spanning over 60 years that started with his initial appointment at the University of Durham in 1952. He is probably best known for his outstanding books – Human Spatial Orientation (1966) (with Brian Templeton), through Human Visual Orientation (1982), Binocular Vision and Stereopsis (1995), the 2 volumes of Seeing in Depth (2002) and finally the 3 volumes of Perceiving in Depth (2012). Ian was also a talented experimentalist and the creator and builder of many novel pieces of experimental equipment including his rotating sphere and rotating room. Over the six decades he worked on a wide variety of research topics together with many graduate students, post-docs and researchers from Canada, USA, UK, Japan and Australia.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Head orientation influences the perceived tilt of global motion.\n \n \n \n\n\n \n Guterman, P., & Allison, R. S.\n\n\n \n\n\n\n In Canadian Society for Brain, Behaviour and Cognitive Science 24th Annual Meeting (CSBBCS), CANADIAN JOURNAL OF EXPERIMENTAL PSYCHOLOGY, volume 68, pages 306. 2014.\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Guterman:2014fk,\n\tabstract = {Aubert's (1861, Arch Pathol Anat, 20: 381-393) finding that a vertical line is perceived as tilted in tilted observers (``A-effect'') was tested using moving stimuli. Observers judged the tilt of a line and global motion while standing or lying on their side. Postural effects were consistent with the A-effect, and when lying down shifts in the point of subjective equality were significantly smaller for motion (95\\%CI: 2D = -11.49 +/- 5.86 deg., 3D = -17.08 +/- 4.77 deg.) than the line (95\\%CI: -23 +/- 4.76 deg.). Findings will be discussed in terms of their implications for sensory integration.},\n\tannote = {Ryerson July 3-5, 2014\n\n24TH ANNUAL MEETING\nRYERSON UNIVERSITY\nJULY 3RD -- JULY 5TH 2014\nTORONTO, ONTARIO\n68\nIssue\n4\nPages\n306-306},\n\tauthor = {Guterman, P.S. and Allison, R. S.},\n\tbooktitle = {Canadian Society for Brain, Behaviour and Cognitive Science 24th Annual Meeting (CSBBCS), CANADIAN JOURNAL OF EXPERIMENTAL PSYCHOLOGY},\n\tdate-added = {2014-08-14 17:10:17 +0000},\n\tdate-modified = {2015-03-10 23:10:53 +0000},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tnumber = {4},\n\tpages = {306},\n\ttitle = {Head orientation influences the perceived tilt of global motion},\n\tvolume = {68},\n\tyear = {2014}}\n\n
\n
\n\n\n
\n Aubert's (1861, Arch Pathol Anat, 20: 381-393) finding that a vertical line is perceived as tilted in tilted observers (``A-effect'') was tested using moving stimuli. Observers judged the tilt of a line and global motion while standing or lying on their side. Postural effects were consistent with the A-effect, and when lying down shifts in the point of subjective equality were significantly smaller for motion (95%CI: 2D = -11.49 +/- 5.86 deg., 3D = -17.08 +/- 4.77 deg.) than the line (95%CI: -23 +/- 4.76 deg.). Findings will be discussed in terms of their implications for sensory integration.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Retinal Motion and Stereoacuity Revisited.\n \n \n \n\n\n \n Cutone, M, Wilcox, L. M., & Allison, R. S.\n\n\n \n\n\n\n In Canadian Society for Brain, Behaviour and Cognitive Science 24th Annual Meeting (CSBBCS), CANADIAN JOURNAL OF EXPERIMENTAL PSYCHOLOGY, volume 68, pages 306. 2014.\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Cutone:2014vn,\n\tabstract = {\nStereoacuity of moving line targets presented using CRT displays is reportedly unaffected by lateral motion up to 2.5 deg/s. Here we re-assess the effects of lateral retinal motion on stereoacuity using custom-built hardware that permits precise timing and movement of 'real' targets. Observers were asked to indicate the relative depth of two real vertically aligned luminous lines. We varied motion velocity and exposure time; there was no effect of either on performance. We conclude it is likely that the observed resilience to retinal motion reflects the rapid acquisition of the disparity signal, not the properties of the display system.  \n},\n\tannote = {Ryerson July 3-5, 2014\n\n\n24TH ANNUAL MEETING\nRYERSON UNIVERSITY\nJULY 3RD -- JULY 5TH 2014\nTORONTO, ONTARIO\n},\n\tauthor = {Cutone, M and Wilcox, L. M. and Allison, R. S.},\n\tbooktitle = {Canadian Society for Brain, Behaviour and Cognitive Science 24th Annual Meeting (CSBBCS), CANADIAN JOURNAL OF EXPERIMENTAL PSYCHOLOGY},\n\tdate-added = {2014-08-14 17:10:17 +0000},\n\tdate-modified = {2015-09-03 17:25:19 +0000},\n\tkeywords = {Stereopsis},\n\tnumber = {4},\n\tpages = {306},\n\ttitle = {Retinal Motion and Stereoacuity Revisited},\n\tvolume = {68},\n\tyear = {2014}}\n\n
\n
\n\n\n
\n Stereoacuity of moving line targets presented using CRT displays is reportedly unaffected by lateral motion up to 2.5 deg/s. Here we re-assess the effects of lateral retinal motion on stereoacuity using custom-built hardware that permits precise timing and movement of 'real' targets. Observers were asked to indicate the relative depth of two real vertically aligned luminous lines. We varied motion velocity and exposure time; there was no effect of either on performance. We conclude it is likely that the observed resilience to retinal motion reflects the rapid acquisition of the disparity signal, not the properties of the display system. \n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Postural effects on the perceived tilt of a line and global motion.\n \n \n \n\n\n \n Guterman, P., & Allison, R. S.\n\n\n \n\n\n\n In International Multisensory Research Forum, pages 112. 2014.\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Guterman:2014sf,\n\tabstract = {Aubert's (1861, Arch Pathol Anat, 20: 381-393) finding that a vertical line is perceived as tilted in tilted observers demonstrated how percepts of verticality rely on the integration of multiple sensory systems. This phenomenon has been studied extensively using static stimuli. Global motion processing may play an important role in sensory integration, so here we follow up on our earlier report (VSS 2013) and explore whether this tilt occurs when viewing global motion displays. Observers stood and lay left side down while viewing a static line and random-dot display of 2D (planar) or 3D (volumetric) global motion. For each posture and motion type, a forced-choice staircase procedure determined the tilt of the stimulus that appeared subjectively vertical (PSE). Consistent with Aubert's A-effect and our earlier results using the method of constant stimuli, shifts were significantly greater when lying on the side than standing, and in the direction of the head tilt. In the lying position, the PSE shift was significantly smaller for the global motion stimuli (95\%CI: 2D = -11.49 +/- 5.86 deg., 3D = -17.08 +/- 4.77 deg.) than the line (95\%CI: -23 +/- 4.76 deg.). A control experiment using single and multiple line displays eliminated eccentricity and density as potential explanations for these differences. We will discuss these findings in terms of their implications for sensory integration and mapping of spatial reference frames.},\n\tannote = {Amsterdam June 11-15, 2014},\n\tauthor = {Guterman, P.S. and Allison, R. S.},\n\tbooktitle = {International Multisensory Research Forum},\n\tdate-added = {2014-08-14 17:01:35 +0000},\n\tdate-modified = {2014-08-14 17:01:35 +0000},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tpages = {112},\n\ttitle = {Postural effects on the perceived tilt of a line and global motion},\n\tyear = {2014}}\n\n
\n
\n\n\n
\n Aubert's (1861, Arch Pathol Anat, 20: 381-393) finding that a vertical line is perceived as tilted in tilted observers demonstrated how percepts of verticality rely on the integration of multiple sensory systems. This phenomenon has been studied extensively using static stimuli. Global motion processing may play an important role in sensory integration, so here we follow up on our earlier report (VSS 2013) and explore whether this tilt occurs when viewing global motion displays. Observers stood and lay left side down while viewing a static line and random-dot display of 2D (planar) or 3D (volumetric) global motion. For each posture and motion type, a forced-choice staircase procedure determined the tilt of the stimulus that appeared subjectively vertical (PSE). Consistent with Aubert's A-effect and our earlier results using the method of constant stimuli, shifts were significantly greater when lying on the side than standing, and in the direction of the head tilt. In the lying position, the PSE shift was significantly smaller for the global motion stimuli (95%CI: 2D = -11.49 +/- 5.86 deg., 3D = -17.08 +/- 4.77 deg.) than the line (95%CI: -23 +/- 4.76 deg.). A control experiment using single and multiple line displays eliminated eccentricity and density as potential explanations for these differences. We will discuss these findings in terms of their implications for sensory integration and mapping of spatial reference frames.\n
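A forced-choice staircase like the one described converges on the PSE automatically. The sketch below is our illustration only (not the authors' code); the simulated observer's PSE, response noise, step size, and stopping rule are all assumed values:

```python
# Hypothetical sketch (not the authors' code): a 1-up/1-down staircase for a
# forced-choice tilt task converges on the tilt that looks subjectively
# vertical, i.e. the PSE. All parameter values below are assumptions.
import random

def simulated_observer(tilt_deg, pse_deg=-17.0, noise_deg=4.0):
    """Report 'cw' when the stimulus appears clockwise of subjective vertical."""
    perceived = tilt_deg - pse_deg + random.gauss(0.0, noise_deg)
    return "cw" if perceived > 0 else "ccw"

def run_staircase(start_deg=20.0, step_deg=2.0, n_reversals=12):
    tilt, last, reversals = start_deg, None, []
    while len(reversals) < n_reversals:
        resp = simulated_observer(tilt)
        if last is not None and resp != last:
            reversals.append(tilt)                        # record response reversals
        tilt += -step_deg if resp == "cw" else step_deg   # 1-up/1-down targets the 50% point
        last = resp
    return sum(reversals[2:]) / len(reversals[2:])        # mean of the later reversals

print(f"Estimated PSE: {run_staircase():.1f} deg")
```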
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n inproceedings\n \n \n (2)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Challenges Related to Nonhuman Animal-Computer Interaction: Usability and `Liking'.\n \n \n \n \n\n\n \n Ritvo, S. E., & Allison, R. S.\n\n\n \n\n\n\n In Proceedings of the 2014 Workshops on Advances in Computer Entertainment Conference: The First International Congress on Animal Human Computer Interaction, pages Article 4, 11 2014. ACM\n \n\n\n\n
\n\n\n\n \n \n \"ChallengesPaper\n  \n \n \n \"Challenges-1\n  \n \n \n \"Challenges-2\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@inproceedings{Ritvo:2014bh,\n\tabstract = {Despite a marked increase in the number of hardware and software systems being adapted and designed specifically for nonhuman animals, to date, nearly all computer interaction design and assessment has been anthropocentric. Ironically, because nonhuman animals cannot provide, refuse, or withdraw consent to participate with ACI systems, valid and reliable evaluation of usability and user satisfaction is crucial. The current paper explores a) the potential benefits and costs of engaging in animal-computer interaction for nonhuman animal users, b) potential animal-computer interaction evaluation concerns, and c) the assessment of liking and preference in non-communicative subjects.},\n\tannote = {Nov 14, 2014 Funchal, Madeira},\n\tauthor = {Ritvo, S. E. and Allison, R. S.},\n\tbooktitle = {Proceedings of the 2014 Workshops on Advances in Computer Entertainment Conference: The First International Congress on Animal Human Computer Interaction},\n\tdate-added = {2014-10-19 02:40:19 +0000},\n\tdate-modified = {2015-10-02 20:35:10 +0000},\n\tdoi = {10.1145/2693787.2693795},\n\tkeywords = {Misc.},\n\tmonth = {11},\n\torganization = {{ACM}},\n\tpages = {Article 4},\n\ttitle = {Challenges Related to Nonhuman Animal-Computer Interaction: Usability and `Liking'},\n\turl = {http://percept.eecs.yorku.ca/papers/AHCI-Paper-14-11-04.pdf},\n\tyear = {2014},\n\turl-1 = {http://percept.eecs.yorku.ca/papers/AHCI-Paper-14-11-04.pdf},\n\turl-2 = {https://doi.org/10.1145/2693787.2693795}}\n\n
\n
\n\n\n
\n Despite a marked increase in the number of hardware and software systems being adapted and designed specifically for nonhuman animals, to date, nearly all computer interaction design and assessment has been anthropocentric. Ironically, because nonhuman animals cannot provide, refuse, or withdraw consent to participate with ACI systems, valid and reliable evaluation of usability and user satisfaction is crucial. The current paper explores a) the potential benefits and costs of engaging in animal-computer interaction for nonhuman animal users, b) potential animal-computer interaction evaluation concerns, and c) the assessment of liking and preference in non-communicative subjects.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Gaze-Contingent Depth of Field in Realistic Scenes: The User Experience.\n \n \n \n \n\n\n \n Vinnikov, M., & Allison, R. S.\n\n\n \n\n\n\n In ACM Eye Tracking Research and Applications 2014, pages 119-126, 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Gaze-ContingentPaper\n  \n \n \n \"Gaze-Contingent-1\n  \n \n \n \"Gaze-Contingent-2\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@inproceedings{Vinnikov:2014fk,\n\tabstract = {Computer-generated objects presented on a display typically have the same focal distance regardless of the monocular and binocular depth cues used to portray a 3D scene. This is because they are presented on a flat screen display that has a fixed physical location. In a stereoscopic 3D display, accommodation (focus) of the eyes should always be at the distance of the screen for clear vision regardless of the depth portrayed; this fixed accommodation conflicts with vergence eye movements that the user must make to fuse stimuli located off the screen. This is known as accommodation-vergence conflict and is detrimental for user experience of stereoscopic virtual environments (VE), as it can cause visual discomfort and diplopia during use of a stereoscopic display. It is believed that, by artificially simulating focal blur and natural accommodation, it is possible to compensate for the vergence-accommodation conflict and alleviate these symptoms. We hypothesized that it is possible to compensate for conflict with a fixed accommodation cue by adding simulated focal blur according to instantaneous fixation.\n\nWe examined gaze-contingent depth of field (DOF) when used in stereoscopic and non-stereoscopic 3D displays. We asked our participants to compare different conditions in terms of depth perception, image quality and viewing comfort. As expected, we found that monocular DOF gave a stronger impression of depth than no depth of field, stereoscopic cues were stronger than any kind of monocular cues, but adding depth of field to stereo displays did not enhance depth impressions. The opposite was true for image comfort. People thought that DOF impaired image quality in monocular viewing. We also observed that comfort was affected by DOF and display mode in a similar fashion to image quality. However, the magnitude of the effects of DOF simulation on image quality depended on whether people associated image quality with depth or not. These results suggest that studies evaluating DOF effectiveness need to consider the type of task, type of image and questions asked.\n},\n\tannote = {Safety Harbor, FL, USA. Mar 26-28, 2014},\n\tauthor = {Vinnikov, M. and Allison, R. S.},\n\tbooktitle = {{ACM} Eye Tracking Research and Applications 2014},\n\tdate-added = {2013-11-11 19:30:47 +0000},\n\tdate-modified = {2014-09-26 02:21:19 +0000},\n\tdoi = {10.1145/2578153.2578170},\n\tkeywords = {Eye Movements & Tracking},\n\tpages = {119-126},\n\ttitle = {Gaze-Contingent Depth of Field in Realistic Scenes: The User Experience},\n\turl = {https://percept.eecs.yorku.ca/papers/119-vinnikov.pdf},\n\tyear = {2014},\n\turl-1 = {https://percept.eecs.yorku.ca/papers/119-vinnikov.pdf},\n\turl-2 = {https://doi.org/10.1145/2578153.2578170}}\n\n
\n
\n\n\n
\n Computer-generated objects presented on a display typically have the same focal distance regardless of the monocular and binocular depth cues used to portray a 3D scene. This is because they are presented on a flat screen display that has a fixed physical location. In a stereoscopic 3D display, accommodation (focus) of the eyes should always be at the distance of the screen for clear vision regardless of the depth portrayed; this fixed accommodation conflicts with vergence eye movements that the user must make to fuse stimuli located off the screen. This is known as accommodation-vergence conflict and is detrimental for user experience of stereoscopic virtual environments (VE), as it can cause visual discomfort and diplopia during use of a stereoscopic display. It is believed that, by artificially simulating focal blur and natural accommodation, it is possible to compensate for the vergence-accommodation conflict and alleviate these symptoms. We hypothesized that it is possible to compensate for conflict with a fixed accommodation cue by adding simulated focal blur according to instantaneous fixation. We examined gaze-contingent depth of field (DOF) when used in stereoscopic and non-stereoscopic 3D displays. We asked our participants to compare different conditions in terms of depth perception, image quality and viewing comfort. As expected, we found that monocular DOF gave a stronger impression of depth than no depth of field, stereoscopic cues were stronger than any kind of monocular cues, but adding depth of field to stereo displays did not enhance depth impressions. The opposite was true for image comfort. People thought that DOF impaired image quality in monocular viewing. We also observed that comfort was affected by DOF and display mode in a similar fashion to image quality. However, the magnitude of the effects of DOF simulation on image quality depended on whether people associated image quality with depth or not. These results suggest that studies evaluating DOF effectiveness need to consider the type of task, type of image and questions asked. \n
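The gaze-contingent rendering described above hinges on computing how blurred each scene point should look given the current fixation depth. Here is a minimal thin-lens sketch of that computation; it is our illustration, and the focal length, pupil size, and pixel scale are assumptions rather than values from the paper:

```python
# Illustrative thin-lens computation for a gaze-contingent depth-of-field
# renderer: blur for each point grows with its optical distance from the
# currently fixated depth. Focal length, pupil and pixel scale are assumed.
def blur_radius_px(depth_m, fixation_m, focal_mm=17.0, pupil_mm=4.0, px_per_mm=50.0):
    """Blur radius (pixels) of a point at depth_m while focused at fixation_m."""
    f, a = focal_mm / 1000.0, pupil_mm / 1000.0
    v_fix = 1.0 / (1.0 / f - 1.0 / fixation_m)   # image distance when fixating
    v_obj = 1.0 / (1.0 / f - 1.0 / depth_m)      # image distance of the point
    coc_m = a * abs(v_obj - v_fix) / v_obj       # circle-of-confusion diameter
    return 0.5 * coc_m * 1000.0 * px_per_mm

for depth in (0.5, 1.0, 2.0, 4.0):
    print(f"depth {depth:4.1f} m -> blur {blur_radius_px(depth, fixation_m=1.0):5.2f} px")
```

Points at the fixated depth come out with zero blur, and blur grows with the dioptric difference from fixation, which is the behaviour a gaze-contingent DOF simulation needs.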
\n\n\n
\n\n\n\n\n\n
\n
\n\n\n\n\n
\n
\n\n
\n
\n  \n 2013\n \n \n (3)\n \n \n
\n
\n \n \n
\n
\n  \n article\n \n \n (6)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n The dichoptiscope: An instrument for investigating cues to motion in depth.\n \n \n \n \n\n\n \n Howard, I. P., Fukuda, K., & Allison, R. S.\n\n\n \n\n\n\n Journal of Vision, 13(14: Article 1): 1-11. 2013.\n \n\n\n\n
\n\n\n\n \n \n \"ThePaper\n  \n \n \n \"The-1\n  \n \n \n \"The-2\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n\n\n\n
\n
@article{Howard:fk,\n\tabstract = {A stereoscope displays 2-D images with binocular disparities (stereograms), which fuse to form a 3-D stereoscopic object. But a stereoscopic object creates a conflict between vergence and accommodation. Also, motion in depth of a stereoscopic object simulated solely from change in target vergence produces anomalous motion parallax and anomalous changes in perspective. We describe a new instrument, which overcomes these problems. We call it the dichoptiscope. It resembles a mirror stereoscope, but instead of stereograms, it displays identical 2-D or 3-D physical objects to each eye. When a pair of the physical, monocular objects is fused, they create a dichoptic object that is visually identical to a real object. There is no conflict between vergence and accommodation, and motion parallax is normal. When the monocular objects move in real depth, the dichoptic object also moves in depth. The instrument allows the experimenter to control independently each of several cues to motion in depth. These cues include changes in the size of the images, changes in the vergence of the eyes, changes in binocular disparity within the moving object, and changes in the relative disparity between the moving object and a stationary object.},\n\tauthor = {Howard, I. P. and Fukuda, K. and Allison, R. S.},\n\tdate-added = {2013-10-19 15:39:29 +0000},\n\tdate-modified = {2014-09-26 02:17:29 +0000},\n\tdoi = {10.1167/13.14.1},\n\tjournal = {Journal of Vision},\n\tkeywords = {Stereopsis, Motion in depth},\n\tnumber = {14: Article 1},\n\tpages = {1-11},\n\ttitle = {The dichoptiscope: An instrument for investigating cues to motion in depth},\n\turl = {http://www.journalofvision.org/content/13/14/1.full},\n\tvolume = {13},\n\tyear = {2013},\n\turl-1 = {http://www.journalofvision.org/content/13/14/1.full},\n\turl-2 = {https://doi.org/10.1167/13.14.1}}\n\n
\n
\n\n\n
\n A stereoscope displays 2-D images with binocular disparities (stereograms), which fuse to form a 3-D stereoscopic object. But a stereoscopic object creates a conflict between vergence and accommodation. Also, motion in depth of a stereoscopic object simulated solely from change in target vergence produces anomalous motion parallax and anomalous changes in perspective. We describe a new instrument, which overcomes these problems. We call it the dichoptiscope. It resembles a mirror stereoscope, but instead of stereograms, it displays identical 2-D or 3-D physical objects to each eye. When a pair of the physical, monocular objects is fused, they create a dichoptic object that is visually identical to a real object. There is no conflict between vergence and accommodation, and motion parallax is normal. When the monocular objects move in real depth, the dichoptic object also moves in depth. The instrument allows the experimenter to control independently each of several cues to motion in depth. These cues include changes in the size of the images, changes in the vergence of the eyes, changes in binocular disparity within the moving object, and changes in the relative disparity between the moving object and a stationary object.\n
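For intuition about the cues the dichoptiscope puts under independent control, the following back-of-envelope sketch computes two of them, angular image size and vergence angle, as functions of object distance; the interocular distance and object width are assumed values, not taken from the paper:

```python
# A back-of-envelope sketch (not from the paper) of two motion-in-depth cues
# the dichoptiscope can vary independently: the angular size of an object's
# image and the vergence angle of the eyes, both as functions of distance.
import math

IPD_M = 0.064          # assumed interocular distance
OBJECT_WIDTH_M = 0.10  # assumed object width

def angular_size_deg(distance_m, width_m=OBJECT_WIDTH_M):
    return math.degrees(2.0 * math.atan(width_m / (2.0 * distance_m)))

def vergence_deg(distance_m, ipd_m=IPD_M):
    return math.degrees(2.0 * math.atan(ipd_m / (2.0 * distance_m)))

for d in (2.0, 1.0, 0.5, 0.25):   # object approaching the observer
    print(f"{d:4.2f} m: size {angular_size_deg(d):5.2f} deg, "
          f"vergence {vergence_deg(d):5.2f} deg")
```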
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Detection and Discrimination of Motion-Defined Form in the Presence of Additive Noise: Implications for Motion Processing and Use of Night Vision Devices.\n \n \n \n \n\n\n \n Allison, R. S., Macuda, T., & Jennings, S.\n\n\n \n\n\n\n IEEE Transactions on Human Machine Systems, 43(6): 558-569. 2013.\n \n\n\n\n
\n\n\n\n \n \n \"DetectionPaper\n  \n \n \n \"Detection-1\n  \n \n \n \"Detection-2\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@article{Allison:uq,\n\tabstract = {Superimposed luminance noise is typical of imagery from devices used for low-light vision such as image intensifiers (i.e., night vision devices). In four experiments, we measured the ability to detect and discriminate motion-defined forms as a function of stimulus signal-to-noise ratio at a variety of stimulus speeds. For each trial, observers were shown a pair of image sequences - one containing dots in a central motion-defined target region that moved coherently against the surrounding dots, which moved in the opposite or in random directions, while the other sequence had the same random/uniform motion in both the center and surrounding parts. They indicated which interval contained the target stimulus in a two-interval forced-choice procedure. In the first experiment, simulated night vision images were presented with Poisson-distributed spatiotemporal image noise added to both the target and surrounding regions of the display. As the power of spatiotemporal noise was increased, it became harder for observers to detect the target, particularly at the lowest and highest dot speeds. The second experiment confirmed that these effects also occurred with low illumination in real night vision device imagery, a situation that produces similar image noise. The third experiment demonstrated that these effects generalized to Gaussian noise distributions and noise created by spatiotemporal decorrelation. In the fourth experiment, we found similar speed-dependent effects of luminance noise for the discrimination (as opposed to detection) of the shape of a motion-defined form. The results are discussed in terms of physiological motion processing and for the usability of enhanced vision displays under noisy conditions.},\n\tauthor = {Allison, R. S. and Macuda, T. and Jennings, S.},\n\tdate-added = {2013-06-12 23:21:15 +0000},\n\tdate-modified = {2014-09-26 01:00:52 +0000},\n\tdoi = {10.1109/THMS.2013.2284911},\n\tjournal = {IEEE Transactions on Human Machine Systems},\n\tkeywords = {Night Vision},\n\tnumber = {6},\n\tpages = {558-569},\n\ttitle = {Detection and Discrimination of Motion-Defined Form in the Presence of Additive Noise: Implications for Motion Processing and Use of Night Vision Devices},\n\turl = {http://percept.eecs.yorku.ca/papers/06645415.pdf},\n\tvolume = {43},\n\tyear = {2013},\n\turl-1 = {http://percept.eecs.yorku.ca/papers/06645415.pdf},\n\turl-2 = {https://doi.org/10.1109/THMS.2013.2284911}}\n\n
\n
\n\n\n
\n Superimposed luminance noise is typical of imagery from devices used for low-light vision such as image intensifiers (i.e., night vision devices). In four experiments, we measured the ability to detect and discriminate motion-defined forms as a function of stimulus signal-to-noise ratio at a variety of stimulus speeds. For each trial, observers were shown a pair of image sequences - one containing dots in a central motion-defined target region that moved coherently against the surrounding dots, which moved in the opposite or in random directions, while the other sequence had the same random/uniform motion in both the center and surrounding parts. They indicated which interval contained the target stimulus in a two-interval forced-choice procedure. In the first experiment, simulated night vision images were presented with Poisson-distributed spatiotemporal image noise added to both the target and surrounding regions of the display. As the power of spatiotemporal noise was increased, it became harder for observers to detect the target, particularly at the lowest and highest dot speeds. The second experiment confirmed that these effects also occurred with low illumination in real night vision device imagery, a situation that produces similar image noise. The third experiment demonstrated that these effects generalized to Gaussian noise distributions and noise created by spatiotemporal decorrelation. In the fourth experiment, we found similar speed-dependent effects of luminance noise for the discrimination (as opposed to detection) of the shape of a motion-defined form. The results are discussed in terms of physiological motion processing and for the usability of enhanced vision displays under noisy conditions.\n
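The noise manipulation described above can be sketched in a few lines. The following is our hypothetical illustration of adding Poisson-distributed luminance noise to a dot-motion frame; the photon-count gain that sets the signal-to-noise ratio is an assumption, not a value from the study:

```python
# Hypothetical illustration: adding Poisson-distributed luminance noise to a
# dot-motion frame, as a simulation of image-intensifier noise might do.
# The photons_at_white gain is an assumption, not a value from the paper.
import numpy as np

rng = np.random.default_rng(0)

def add_poisson_noise(frame, photons_at_white=20.0):
    """Lower photons_at_white -> lower signal-to-noise ratio."""
    expected = np.clip(frame, 0.0, 1.0) * photons_at_white   # expected photon counts
    noisy = rng.poisson(expected).astype(float) / photons_at_white
    return np.clip(noisy, 0.0, 1.0)

frame = np.zeros((64, 64))
frame[rng.random((64, 64)) < 0.05] = 1.0   # sparse bright dots on a dark field
noisy = add_poisson_noise(frame, photons_at_white=5.0)
print("clean mean:", frame.mean().round(3), "noisy mean:", noisy.mean().round(3))
```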
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Shape Perception of Thin Transparent Objects with Stereoscopic Viewing.\n \n \n \n \n\n\n \n Chen, J., & Allison, R. S.\n\n\n \n\n\n\n ACM Transactions on Applied Perception (TAP), 10(3, Article 15): 1-15. 08 2013.\n \n\n\n\n
\n\n\n\n \n \n \"ShapePaper\n  \n \n \n \"Shape-1\n  \n \n \n \"Shape-2\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@article{Chen:2013uq,\n\tabstract = {Many materials, including water surfaces, jewels, and glassware exhibit transparent refractions. The human visual system can somehow recover 3D shape from refracted images. While previous research has elucidated various visual cues that can facilitate visual perception of transparent objects, most of them focused on monocular material perception. The question of shape perception of transparent objects is much more complex and few studies have been undertaken, particularly in terms of binocular vision.\n\nIn this article, we first design a system for stereoscopic surface orientation estimation with photo-realistic stimuli. It displays pre-rendered stereoscopic images and a real-time S3D (Stereoscopic 3D) shape probe simultaneously. Then we estimate people's perception of the shape of thin transparent objects using a gauge figure task. Our results suggest that people can consistently perceive the surface orientation of thin transparent objects, and stereoscopic viewing improves the precision of estimates. To explain the results, we present an edge-aware orientation map based on image gradients and structure tensors to illustrate the orientation information in images. We also decomposed the normal direction of the surface into azimuth angle and slant angle to explain why additional depth information can improve the accuracy of perceived normal direction.\n},\n\tannote = {presented at ACM SAP Dublin, Sept 2013\n},\n\tauthor = {Chen, J. and Allison, R. S.},\n\tdate-added = {2013-06-12 23:20:52 +0000},\n\tdate-modified = {2014-09-26 02:17:06 +0000},\n\tdoi = {10.1145/2506206.2506208},\n\tjournal = {{ACM} Transactions on Applied Perception (TAP)},\n\tkeywords = {Stereopsis},\n\tmonth = {08},\n\tnumber = {3, Article 15},\n\tpages = {1-15},\n\ttitle = {Shape Perception of Thin Transparent Objects with Stereoscopic Viewing},\n\turl = {http://percept.eecs.yorku.ca/papers/a15-chen(1).pdf},\n\tvolume = {10},\n\tyear = {2013},\n\turl-1 = {http://percept.eecs.yorku.ca/papers/a15-chen(1).pdf},\n\turl-2 = {https://doi.org/10.1145/2506206.2506208}}\n\n
\n
\n\n\n
\n Many materials, including water surfaces, jewels, and glassware exhibit transparent refractions. The human visual system can somehow recover 3D shape from refracted images. While previous research has elucidated various visual cues that can facilitate visual perception of transparent objects, most of them focused on monocular material perception. The question of shape perception of transparent objects is much more complex and few studies have been undertaken, particularly in terms of binocular vision. In this article, we first design a system for stereoscopic surface orientation estimation with photo-realistic stimuli. It displays pre-rendered stereoscopic images and a real-time S3D (Stereoscopic 3D) shape probe simultaneously. Then we estimate people's perception of the shape of thin transparent objects using a gauge figure task. Our results suggest that people can consistently perceive the surface orientation of thin transparent objects, and stereoscopic viewing improves the precision of estimates. To explain the results, we present an edge-aware orientation map based on image gradients and structure tensors to illustrate the orientation information in images. We also decomposed the normal direction of the surface into azimuth angle and slant angle to explain why additional depth information can improve the accuracy of perceived normal direction. \n
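As a rough illustration of the orientation-map idea mentioned above (our construction, not the paper's code), the dominant local orientation can be read off a 2x2 structure tensor built from image gradients; the window size and test image here are arbitrary:

```python
# Sketch of an orientation map from image gradients pooled into a 2x2
# structure tensor, in the spirit of (but not copied from) the paper.
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def box_smooth(a, win):
    pad = win // 2
    ap = np.pad(a, pad, mode="edge")
    return sliding_window_view(ap, (win, win)).mean(axis=(-2, -1))

def orientation_map(img, win=7):
    """Orientation (radians) of the dominant gradient direction at each pixel."""
    gy, gx = np.gradient(img.astype(float))   # gradients along rows (y), columns (x)
    jxx = box_smooth(gx * gx, win)
    jyy = box_smooth(gy * gy, win)
    jxy = box_smooth(gx * gy, win)
    # Angle of the structure tensor's dominant eigenvector.
    return 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)

# A sinusoidal grating whose gradient direction is atan2(0.1, 0.3) ~ 18.4 deg.
img = np.fromfunction(lambda y, x: np.sin(0.3 * x + 0.1 * y), (64, 64))
print("estimated orientation (deg):",
      round(float(np.degrees(orientation_map(img)[32, 32])), 1))
```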
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Perceptual artefacts, suspension of disbelief and realism in stereoscopic 3D film.\n \n \n \n \n\n\n \n Allison, R. S., Wilcox, L. M., & Kazimi, A.\n\n\n \n\n\n\n Public, 47 (Parallax: Stereoscopic 3D): 149-160. 2013.\n \n\n\n\n
\n\n\n\n \n \n \"PerceptualPaper\n  \n \n \n \"Perceptual-1\n  \n \n \n \"Perceptual-2\n  \n \n \n \"Perceptual-3\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@article{Allison:fk,\n\tabstract = {Stereoscopic film has long held an allure as the ultimate in fidelity for cinema and, as such, been a goal for those seeking the most compelling illusion of reality. However, the fundamental and technical limitations of the medium introduce a number of artefacts and imperfections that affect viewer experience.  The renaissance of stereoscopic three-dimensional (S3D) film requires that film-makers revisit assumptions and conventions about factors that influence the visual appreciation and impact of their medium. This paper will discuss a variety of these issues from a perceptual standpoint and their implications for depth perception, visual comfort and sense of scale.  The impact of these perceptual artefacts on the suspension of disbelief and the creation of alternate realities is discussed, as is their deliberate use when artistic considerations demand breaks with realism. \nKeywords: Stereoscopic film, perception, suspension of disbelief, stereopsis, realism\n},\n\tauthor = {Allison, R. S. and Wilcox, L. M. and Kazimi, A.},\n\tdate-added = {2013-06-01 02:29:52 +0000},\n\tdate-modified = {2014-01-12 01:12:53 +0000},\n\tdoi = {10.1386/public.24.47.149_1},\n\tjournal = {Public},\n\tkeywords = {Stereopsis},\n\tpages = {149-160},\n\ttitle = {Perceptual artefacts, suspension of disbelief and realism in stereoscopic 3D film},\n\turl = {http://www.ingentaconnect.com/content/intellect/public/2013/00000024/00000047/art00010},\n\turl-1 = {https://percept.eecs.yorku.ca/papers/Public Journal.pdf},\n\turl-2 = {http://www.ingentaconnect.com/content/intellect/public/2013/00000024/00000047/art00010},\n\turl-3 = {http://dx.doi.org/10.1386/public.24.47.149_1},\n\tvolume = {47 (Parallax: Stereoscopic 3D)},\n\tyear = {2013}}\n\n
\n
\n\n\n
\n Stereoscopic film has long held an allure as the ultimate in fidelity for cinema and, as such, been a goal for those seeking the most compelling illusion of reality. However, the fundamental and technical limitations of the medium introduce a number of artefacts and imperfections that affect viewer experience. The renaissance of stereoscopic three-dimensional (S3D) film requires that film-makers revisit assumptions and conventions about factors that influence the visual appreciation and impact of their medium. This paper will discuss a variety of these issues from a perceptual standpoint and their implications for depth perception, visual comfort and sense of scale. The impact of these perceptual artefacts on the suspension of disbelief and the creation of alternate realities is discussed, as is their deliberate use when artistic considerations demand breaks with realism. Keywords: Stereoscopic film, perception, suspension of disbelief, stereopsis, realism \n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Vection In Depth During Treadmill Walking.\n \n \n \n \n\n\n \n Ash, A., Palmisano, S., Apthorp, D., & Allison, R. S.\n\n\n \n\n\n\n Perception, 42: 562 – 576. 2013.\n \n\n\n\n
\n\n\n\n \n \n \"VectionPaper\n  \n \n \n \"Vection-1\n  \n \n \n \"Vection-2\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@article{Ash:2013fk,\n\tabstract = {Vection has typically been induced in stationary observers (ie conditions providing visual-only information about self-motion). Two recent studies have examined vection during active treadmill walking---one reported that treadmill walking in the same direction as the visually simulated self-motion impaired vection (Onimaru et al, 2010 Journal of Vision 10(7):860), the other reported that it enhanced vection (Seno et al, 2011 Perception 40 747--750; Seno et al, 2011 Attention, Perception, & Psychophysics 73 1467--1476). Our study expands on these earlier investigations of vection during observer active movement. In experiment 1 we presented radially expanding optic flow and compared the vection produced in stationary observers with that produced during walking forward on a treadmill at a `matched' speed. Experiment 2 compared the vection induced by forward treadmill walking while viewing expanding or contracting optic flow with that induced by viewing playbacks of these same displays while stationary. In both experiments subjects' tracked head movements were either incorporated into the self-motion displays (as simulated viewpoint jitter) or simply ignored. We found that treadmill walking always reduced vection (compared with stationary viewing conditions) and that simulated viewpoint jitter always increased vection (compared with constant velocity displays). These findings suggest that while consistent visual--vestibular information about self-acceleration increases vection, biomechanical self-motion information reduces this experience (irrespective of whether it is consistent or not with the visual input). },\n\tauthor = {April Ash and Stephen Palmisano and Deborah Apthorp and Robert S. Allison},\n\tdate-added = {2013-05-02 10:56:15 +0000},\n\tdate-modified = {2014-09-26 02:17:48 +0000},\n\tdoi = {10.1068/p7449},\n\tjournal = {Perception},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tpages = {562 -- 576},\n\ttitle = {Vection In Depth During Treadmill Walking},\n\turl = {http://percept.eecs.yorku.ca/papers/ash-treadmill.pdf},\n\tvolume = {42},\n\tyear = {2013},\n\turl-1 = {http://percept.eecs.yorku.ca/papers/ash-treadmill.pdf},\n\turl-2 = {https://doi.org/10.1068/p7449}}\n\n
\n
\n\n\n
\n Vection has typically been induced in stationary observers (ie conditions providing visual-only information about self-motion). Two recent studies have examined vection during active treadmill walking—one reported that treadmill walking in the same direction as the visually simulated self-motion impaired vection (Onimaru et al, 2010 Journal of Vision 10(7):860), the other reported that it enhanced vection (Seno et al, 2011 Perception 40 747–750; Seno et al, 2011 Attention, Perception, & Psychophysics 73 1467–1476). Our study expands on these earlier investigations of vection during observer active movement. In experiment 1 we presented radially expanding optic flow and compared the vection produced in stationary observers with that produced during walking forward on a treadmill at a `matched' speed. Experiment 2 compared the vection induced by forward treadmill walking while viewing expanding or contracting optic flow with that induced by viewing playbacks of these same displays while stationary. In both experiments subjects' tracked head movements were either incorporated into the self-motion displays (as simulated viewpoint jitter) or simply ignored. We found that treadmill walking always reduced vection (compared with stationary viewing conditions) and that simulated viewpoint jitter always increased vection (compared with constant velocity displays). These findings suggest that while consistent visual–vestibular information about self-acceleration increases vection, biomechanical self-motion information reduces this experience (irrespective of whether it is consistent or not with the visual input). \n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Biologically-inspired heuristics for human-like walking trajectories toward targets and around obstacles.\n \n \n \n \n\n\n \n Rushton, S., & Allison, R.\n\n\n \n\n\n\n Displays, 34(2): 105-113. 2013.\n \n\n\n\n
\n\n\n\n \n \n \"Biologically-inspiredPaper\n  \n \n \n \"Biologically-inspired-1\n  \n \n \n \"Biologically-inspired-2\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@article{Rushton:2020fj,\n\tabstract = {We describe simple heuristics, based on perceptual variables, that produce human-like trajectories towards moving and stationary targets, and around moving and stationary obstacles. Interception of moving and stationary objects can be achieved through regulation of self-movement to maintain a target at a constant eccentricity, or by cancelling the change (drift) in the eccentricity of the target. We first show how a constant eccentricity strategy can be extended to home in on optimal paths and avoid obstacles. We then identify a simple visual speed ratio that signals a future collision, and the change in path needed for avoidance. The combination of heuristics based on eccentricity and the speed-ratio produces human-like behaviour. The heuristics can be used to animate avatars in virtual environments or to guide mobile robots. Combined with higher-level goal setting and way-finding behaviours, such navigation heuristics could provide the foundation for generative models of natural human locomotion.},\n\tauthor = {Rushton, S.K. and Allison, R.S.},\n\tdate-added = {2012-10-13 00:14:48 +0000},\n\tdate-modified = {2014-09-26 01:20:57 +0000},\n\tdoi = {10.1016/j.displa.2012.10.006},\n\tjournal = {Displays},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tnumber = {2},\n\tpages = {105-113},\n\ttitle = {Biologically-inspired heuristics for human-like walking trajectories toward targets and around obstacles},\n\turl = {http://percept.eecs.yorku.ca/papers/displays.pdf},\n\tvolume = {34},\n\tyear = {2013},\n\turl-1 = {http://percept.eecs.yorku.ca/papers/displays.pdf},\n\turl-2 = {https://doi.org/10.1016/j.displa.2012.10.006}}\n\n
\n
\n\n\n
\n We describe simple heuristics, based on perceptual variables, that produce human-like trajectories towards moving and stationary targets, and around moving and stationary obstacles. Interception of moving and stationary objects can be achieved through regulation of self-movement to maintain a target at a constant eccentricity, or by cancelling the change (drift) in the eccentricity of the target. We first show how a constant eccentricity strategy can be extended to home in on optimal paths and avoid obstacles. We then identify a simple visual speed ratio that signals a future collision, and the change in path needed for avoidance. The combination of heuristics based on eccentricity and the speed-ratio produces human-like behaviour. The heuristics can be used to animate avatars in virtual environments or to guide mobile robots. Combined with higher-level goal setting and way-finding behaviours, such navigation heuristics could provide the foundation for generative models of natural human locomotion.\n
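To make the constant-eccentricity heuristic concrete, here is a toy sketch (ours, not the authors' implementation): the walker turns at a rate proportional to the deviation of the target from a held eccentricity, which with a held eccentricity of zero reduces to pursuit of the target. Speed, gain, and time step are assumed values:

```python
# Toy sketch (our illustration, not the authors' code) of the constant-
# eccentricity steering heuristic: turn so as to null any deviation of the
# target from a chosen eccentricity. Gains and speeds are assumptions.
import math

def steer_to_target(walker, heading, target, speed=1.4, gain=4.0,
                    held_ecc=0.0, dt=0.05, steps=400):
    path = []
    for _ in range(steps):
        bearing = math.atan2(target[1] - walker[1], target[0] - walker[0])
        ecc = (bearing - heading + math.pi) % (2 * math.pi) - math.pi  # signed eccentricity
        heading += gain * (ecc - held_ecc) * dt    # turn to hold the target eccentricity
        walker = (walker[0] + speed * math.cos(heading) * dt,
                  walker[1] + speed * math.sin(heading) * dt)
        path.append(walker)
        if math.hypot(target[0] - walker[0], target[1] - walker[1]) < 0.2:
            break
    return path

path = steer_to_target(walker=(0.0, 0.0), heading=0.0, target=(5.0, 5.0))
print(f"reached target region after {len(path)} steps, end point {path[-1]}")
```

Setting held_ecc to a nonzero value holds the target off to one side, producing the curved, interception-like paths the abstract describes.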
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n incollection\n \n \n (8)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n Effects of head orientation on the perceived tilt of a line and global motion.\n \n \n \n\n\n \n Guterman, P., Allison, R. S., & Zacher, J. E.\n\n\n \n\n\n\n In CVR 2013: Interactions in Vision. 2013.\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Guterman:2013uq,\n\tauthor = {Guterman, P.S. and Allison, R. S. and Zacher, J. E.},\n\tbooktitle = {CVR 2013: Interactions in Vision},\n\tdate-added = {2013-10-04 11:00:04 +0000},\n\tdate-modified = {2013-10-04 11:00:04 +0000},\n\thowpublished = {CVR Wed June 26 -- Fri 28 June, 2013},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\ttitle = {Effects of head orientation on the perceived tilt of a line and global motion.},\n\tyear = {2013}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Steering with Simulated Symptoms of Age-related Macular Degeneration.\n \n \n \n\n\n \n Vinnikov, M., Palmisano, S., & Allison, R. S.\n\n\n \n\n\n\n In The Eye, the Brain and the Auto, Detroit Michigan Sept 2013. 2013.\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Vinnikov:2013fk,\n\tabstract = {Purpose: Age-related macular degeneration (AMD) is a leading cause of blindness in ageing populations across the world. The symptoms associated with later stages of the disease are sizeable blind spots (scotomas) in the central visual field, which significantly impact all aspects of everyday life (including driving and navigation). In contrast, the early symptoms of the disease include gradual reduction in acuity and visual distortion in the affected areas, also known as metamorphopsia. There is limited research on the functional consequences of symptoms in the early stages of the disease.\n\nMethods: We examined the effects of the following macular degeneration symptoms on gaze behavior and steering performance: (i) horizontal distortions, (ii) Gaussian (both horizontal and vertical) distortions, (iii) central scotomas, and (iv) an unimpaired vision condition.  To ensure repeatability, we studied healthy participants and used a gaze-contingent display paradigm to simulate these visual deficiencies in real time.  Driving was simulated at different speeds on two-lane curving rural roads with various layouts. \n\nResults: We predicted that gaze and driving performance would be more similar for the visual distortion and scotoma conditions than for conditions with no simulated visual deficiency.  As expected, several deficits in driver performance were observed during simulated macular degeneration conditions.  While gaze was reliably directed to nearer scene features during the Gaussian distortion and scotoma trials (compared to unimpaired trials), variability in lateral gaze did not differ (suggesting that information from the peripheral visual field was used to compensate for information that would have normally been available from the central visual field). Based on past findings, we also expected people to direct their gaze more towards the inner side of the curve. However, on a significant number of turns, we observed that people often preferred to look at the outer curve instead (e.g. on average, this occurred about 5\% more often in macular degeneration trials than in the unimpaired trials).\n\nConclusions: Simulated symptoms of early-stage macular degeneration impacted steering and gaze behaviour. We are currently looking at gaze pattern signatures for each condition and correlating these with driving performance. In our future research we would like to examine collision avoidance strategies associated with different stages of the disease.  \n\nFunding Sources: Endeavour Fellowship (Australia); Province of Ontario ORF/RE (CIV/DDD)\n\nBiography: Margarita Vinnikov is currently a PhD candidate in the Department of Electrical Engineering and Computer Science, York University, Toronto. She works in the Virtual Reality and Perception Laboratory under supervision of Dr. Robert S. Allison. In 2006, she completed an Honours B.Sc. Specialized in Computer Science and in 2009, she completed M.Sc. Computer Science. Her research interest is in gaze-contingent real-time simulations of impaired vision.\n},\n\tannote = {Dearborn MI\n Sept 16-18, 2013},\n\tauthor = {Vinnikov, M. and Palmisano, S.A. and Allison, R. S.},\n\tbooktitle = {The Eye, the Brain and the Auto, Detroit Michigan Sept 2013},\n\tdate-added = {2013-10-04 10:59:33 +0000},\n\tdate-modified = {2013-10-09 14:06:56 +0000},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\ttitle = {Steering with Simulated Symptoms of Age-related Macular Degeneration},\n\tyear = {2013}}\n\n
\n
\n\n\n
\n Purpose: Age-related macular degeneration (AMD) is a leading cause of blindness in ageing populations across the world. The symptoms associated with later stages of the disease are sizeable blind spots (scotomas) in the central visual field, which significantly impact all aspects of everyday life (including driving and navigation). In contrast, the early symptoms of the disease include gradual reduction in acuity and visual distortion in the affected areas, also known as metamorphopsia. There is limited research on the functional consequences of symptoms in the early stages of the disease. Methods: We examined the effects of the following macular degeneration symptoms on gaze behavior and steering performance: (i) horizontal distortions, (ii) Gaussian (both horizontal and vertical) distortions, (iii) central scotomas, and (iv) an unimpaired vision condition. To ensure repeatability, we studied healthy participants and used a gaze-contingent display paradigm to simulate these visual deficiencies in real time. Driving was simulated at different speeds on two-lane curving rural roads with various layouts. Results: We predicted that gaze and driving performance would be more similar for the visual distortion and scotoma conditions than for conditions with no simulated visual deficiency. As expected, several deficits in driver performance were observed during simulated macular degeneration conditions. While gaze was reliably directed to nearer scene features during the Gaussian distortion and scotoma trials (compared to unimpaired trials), variability in lateral gaze did not differ (suggesting that information from the peripheral visual field was used to compensate for information that would have normally been available from the central visual field). Based on past findings, we also expected people to direct their gaze more towards the inner side of the curve. However, on a significant number of turns, we observed that people often preferred to look at the outer curve instead (e.g. on average, this occurred about 5% more often in macular degeneration trials than in the unimpaired trials). Conclusions: Simulated symptoms of early-stage macular degeneration impacted steering and gaze behaviour. We are currently looking at gaze pattern signatures for each condition and correlating these with driving performance. In our future research we would like to examine collision avoidance strategies associated with different stages of the disease. Funding Sources: Endeavour Fellowship (Australia); Province of Ontario ORF/RE (CIV/DDD) Biography: Margarita Vinnikov is currently a PhD candidate in the Department of Electrical Engineering and Computer Science, York University, Toronto. She works in the Virtual Reality and Perception Laboratory under supervision of Dr. Robert S. Allison. In 2006, she completed an Honours B.Sc. Specialized in Computer Science and in 2009, she completed M.Sc. Computer Science. Her research interest is in gaze-contingent real-time simulations of impaired vision. \n
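The core of such a gaze-contingent impairment simulation is re-rendering each frame around the tracked gaze point. The sketch below is purely illustrative (the radius and fill values are ours, not the study's); it masks a central scotoma at the current gaze position:

```python
# Purely illustrative gaze-contingent scotoma mask; parameters are assumed,
# not taken from the study.
import numpy as np

def apply_scotoma(frame, gaze_xy, radius_px=40, fill=0.5):
    """Replace a disk of pixels around the gaze point with uniform gray."""
    h, w = frame.shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = (xx - gaze_xy[0]) ** 2 + (yy - gaze_xy[1]) ** 2 <= radius_px ** 2
    out = frame.copy()
    out[mask] = fill
    return out

frame = np.random.default_rng(1).random((240, 320))
impaired = apply_scotoma(frame, gaze_xy=(160, 120))   # gaze at image center
print("pixels occluded:", int((impaired == 0.5).sum()))
```

The distortion conditions would replace the uniform fill with a local warp of the same gaze-centered region, updated on every frame from the eye tracker.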
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Effects of head orientation on the perceived tilt of a static line and 3D global motion.\n \n \n \n \n\n\n \n Guterman, P. S., Allison, R. S., & Zacher, J. E.\n\n\n \n\n\n\n In Vision Sciences Society Annual Meeting, Journal of Vision, volume 13, pages 874–874. 07 2013.\n \n\n\n\n
\n\n\n\n \n \n \"EffectsPaper\n  \n \n \n \"Effects-1\n  \n \n \n \"Effects-2\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{guterman_effects_2013,\n\tabstract = {When the head is tilted an objectively vertical (or horizontal) line is typically perceived as tilted. We explored whether this shift occurs when viewing {3D} global motion displays. Global motion is processed, in part, in cortical area {MST}, which is believed to be involved in multisensory integration and may facilitate the mapping of spatial reference frames. Thus, we hypothesized that observers may be less susceptible to these biases for global motion compared to line displays. Observers stood, and lay left and right side down, while viewing a static line or random-dot {3D} global motion display. The line and motion direction were tilted $0^{\circ}$, $\pm 5^{\circ}$, $\pm 10^{\circ}$, $\pm 15^{\circ}$, $\pm 20^{\circ}$, and $\pm 25^{\circ}$ from the gravitational vertical, and in a separate block tilted from the horizontal. After each trial, observers indicated whether the tilt was clockwise or counterclockwise from the perceived vertical or horizontal with a button press. Psychometric functions were fit to the data and shifts in the point of subjective equality ({PSE}) were measured. These shifts were greater when lying on the side than standing. These shifts were biased in the direction of the head tilt, consistent with the so-called A-effect. However, contrary to an earlier study by De Vrijer, Medendorp, and Van Gisbergen (2008, J Neurophysiol, 99: 915--930) that found similar {PSE} shifts for lines and {2D} planar motion, we found significantly larger shifts for the static line than {3D} global motion. There was no appreciable difference between the shift magnitude in the tilt-from-vertical and horizontal conditions. Furthermore, the direction of motion (up/down, left/right) had no significant influence on the {PSE}. The results will be discussed in terms of the sensory integration of motion information in cortical areas.\nMeeting abstract presented at {VSS} 2013},\n\tauthor = {Guterman, Pearl S. and Allison, Robert S. and Zacher, James E.},\n\tbooktitle = {Vision Sciences Society Annual Meeting, Journal of Vision},\n\tdate-added = {2013-07-26 12:34:31 +0000},\n\tdate-modified = {2019-02-03 09:34:05 -0500},\n\tdoi = {10.1167/13.9.874},\n\tissn = {1534-7362},\n\tjournal = {Journal of Vision},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tlanguage = {en},\n\tmonth = {07},\n\tnumber = {9},\n\tpages = {874--874},\n\ttitle = {Effects of head orientation on the perceived tilt of a static line and {3D} global motion.},\n\turl = {http://www.journalofvision.org/content/13/9/874},\n\turldate = {2013-07-26},\n\tvolume = {13},\n\tyear = {2013},\n\turl-1 = {http://www.journalofvision.org/content/13/9/874},\n\turl-2 = {https://doi.org/10.1167/13.9.874}}\n\n
\n
\n\n\n
\n When the head is tilted an objectively vertical (or horizontal) line is typically perceived as tilted. We explored whether this shift occurs when viewing 3D global motion displays. Global motion is processed, in part, in cortical area MST, which is believed to be involved in multisensory integration and may facilitate the mapping of spatial reference frames. Thus, we hypothesized that observers may be less susceptible to these biases for global motion compared to line displays. Observers stood, and lay left and right side down, while viewing a static line or random-dot 3D global motion display. The line and motion direction were tilted 0°, ±5°, ±10°, ±15°, ±20°, and ±25° from the gravitational vertical, and in a separate block tilted from the horizontal. After each trial, observers indicated whether the tilt was clockwise or counterclockwise from the perceived vertical or horizontal with a button press. Psychometric functions were fit to the data and shifts in the point of subjective equality (PSE) were measured. These shifts were greater when lying on the side than standing. These shifts were biased in the direction of the head tilt, consistent with the so-called A-effect. However, contrary to an earlier study by De Vrijer, Medendorp, and Van Gisbergen (2008, J Neurophysiol, 99: 915–930) that found similar PSE shifts for lines and 2D planar motion, we found significantly larger shifts for the static line than 3D global motion. There was no appreciable difference between the shift magnitude in the tilt-from-vertical and horizontal conditions. Furthermore, the direction of motion (up/down, left/right) had no significant influence on the PSE. The results will be discussed in terms of the sensory integration of motion information in cortical areas. Meeting abstract presented at VSS 2013\n
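The PSE estimation step described above, fitting a psychometric function and reading off its 50% point, can be sketched compactly. This is our illustration with fabricated demo counts, not data from the study; a cumulative Gaussian is fit by a maximum-likelihood grid search:

```python
# Compact sketch (ours, not the authors') of PSE estimation: fit a
# cumulative-Gaussian psychometric function to "clockwise" response counts
# by maximum-likelihood grid search. The demo counts below are fabricated.
import math
import numpy as np

def cum_gauss(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def fit_pse(tilts, n_cw, n_total):
    best, best_ll = None, -np.inf
    for mu in np.linspace(-30, 10, 161):          # candidate PSEs (deg)
        for sigma in np.linspace(1, 15, 57):      # candidate slopes (deg)
            ll = 0.0
            for x, k, n in zip(tilts, n_cw, n_total):
                p = min(max(cum_gauss(x, mu, sigma), 1e-6), 1 - 1e-6)
                ll += k * math.log(p) + (n - k) * math.log(1.0 - p)
            if ll > best_ll:
                best, best_ll = (mu, sigma), ll
    return best

tilts   = [-25, -20, -15, -10, -5, 0, 5]   # stimulus tilts (deg)
n_cw    = [  2,   5,  10,  16, 19, 20, 20]  # hypothetical "clockwise" counts
n_total = [20] * len(tilts)
mu, sigma = fit_pse(tilts, n_cw, n_total)
print(f"PSE = {mu:.1f} deg (slope sigma = {sigma:.1f} deg)")
```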
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Combining occlusion and disparity information: a computational model of stereoscopic depth perception.\n \n \n \n \n\n\n \n Tsirlin, I., Wilcox, L. M., & Allison, R. S.\n\n\n \n\n\n\n In Vision Sciences Society Annual Meeting, Journal of Vision, volume 13, pages 1177–1177. 07 2013.\n \n\n\n\n
\n\n\n\n \n \n \"CombiningPaper\n  \n \n \n \"Combining-1\n  \n \n \n \"Combining-2\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{tsirlin_combining_2013,\n\tabstract = {The three-dimensional structure of the world can be reconstructed using the differences, or binocular disparities, between the positions and appearance of the images of the same objects on the two retinae. Occlusion of one object by another gives rise to areas visible only in one eye, called monocular occlusions, for which binocular disparities cannot be computed. Nevertheless, monocular occlusions can be perceived at precise locations in depth and can even induce the perception of illusory occluding surfaces. Since a growing body of literature has shown that monocular occlusions are an integral part of stereoscopic depth perception, it is important that we understand the mechanisms of depth extraction from monocular occlusions. Psychophysical experiments suggest that the visual system is able to assign depth from monocularly occluded areas based on the constraints imposed by the viewing geometry. However, none of the existing models of stereopsis use viewing geometry as the primary mechanism for quantitative and qualitative depth extraction in occluded areas. Moreover, no model has been shown to recover depth and structure of illusory occluding surfaces induced by the presence of monocular regions. We propose a new model of depth perception from disparity and monocular occlusions in which monocularly occluded areas are detected explicitly and quantitative depth from occlusions is calculated based on occlusion geometry. The model represents several levels of processing in the visual cortex and includes complex interactions between disparity and monocular occlusion detectors. It successfully reconstructs depth in a large range of stimuli including random-dot stereograms, illusory occluder stimuli, da Vinci arrangements and natural images. Thus we demonstrate that a dedicated set of mechanisms for processing of monocular occlusions combined with classical disparity detectors can underpin a wide range of stereoscopic percepts.\nMeeting abstract presented at {VSS} 2013},\n\tauthor = {Tsirlin, Inna and Wilcox, Laurie M. and Allison, Robert S.},\n\tbooktitle = {Vision Sciences Society Annual Meeting, Journal of Vision},\n\tdate-added = {2013-07-26 12:10:04 +0000},\n\tdate-modified = {2013-07-26 12:10:45 +0000},\n\tdoi = {10.1167/13.9.1177},\n\tissn = {1534-7362},\n\tjournal = {Journal of Vision},\n\tkeywords = {Stereopsis},\n\tlanguage = {en},\n\tmonth = {07},\n\tnumber = {9},\n\tpages = {1177--1177},\n\tshorttitle = {Combining occlusion and disparity information},\n\ttitle = {Combining occlusion and disparity information: a computational model of stereoscopic depth perception.},\n\turl = {http://www.journalofvision.org/content/13/9/1177},\n\turldate = {2013-07-26},\n\tvolume = {13},\n\tyear = {2013},\n\turl-1 = {http://www.journalofvision.org/content/13/9/1177},\n\turl-2 = {https://doi.org/10.1167/13.9.1177}}\n\n
\n
\n\n\n
\n The three-dimensional structure of the world can be reconstructed using the differences, or binocular disparities, between the positions and appearance of the images of the same objects on the two retinae. Occlusion of one object by another gives rise to areas visible only in one eye, called monocular occlusions, for which binocular disparities cannot be computed. Nevertheless, monocular occlusions can be perceived at precise locations in depth and can even induce the perception of illusory occluding surfaces. Since a growing body of literature has shown that monocular occlusions are an integral part of stereoscopic depth perception, it is important that we understand the mechanisms of depth extraction from monocular occlusions. Psychophysical experiments suggest that the visual system is able to assign depth from monocularly occluded areas based on the constraints imposed by the viewing geometry. However, none of the existing models of stereopsis use viewing geometry as the primary mechanism for quantitative and qualitative depth extraction in occluded areas. Moreover, no model has been shown to recover depth and structure of illusory occluding surfaces induced by the presence of monocular regions. We propose a new model of depth perception from disparity and monocular occlusions in which monocularly occluded areas are detected explicitly and quantitative depth from occlusions is calculated based on occlusion geometry. The model represents several levels of processing in the visual cortex and includes complex interactions between disparity and monocular occlusion detectors. It successfully reconstructs depth in a large range of stimuli including random-dot stereograms, illusory occluder stimuli, da Vinci arrangements and natural images. Thus we demonstrate that a dedicated set of mechanisms for processing of monocular occlusions combined with classical disparity detectors can underpin a wide range of stereoscopic percepts. Meeting abstract presented at VSS 2013\n
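The geometric constraint at the heart of depth-from-occlusion can be illustrated in a few lines. In this sketch (our construction with arbitrary viewing parameters, not code from the paper), a feature seen only by the right eye, at a given angle from the occluding edge, must lie at or beyond the point where its sight line crosses the other eye's ray through that edge:

```python
# Our geometric sketch of the occlusion constraint that yields quantitative
# depth for monocular regions. Interocular distance and occluder depth are
# assumed values; the occluder lies to the left of the edge at x = 0.
import numpy as np

IPD, D = 0.064, 1.0   # assumed interocular distance and occluder depth (m)

def min_depth_behind(theta_deg):
    """Minimum depth behind the occluder for a right-eye-only feature seen
    theta_deg from the occluding edge in the right eye's image."""
    e_r = np.array([+IPD / 2, 0.0])
    e_l = np.array([-IPD / 2, 0.0])
    edge = np.array([0.0, D])
    phi = np.arctan2(edge[0] - e_r[0], edge[1] - e_r[1])  # edge direction, right eye
    ang = phi + np.radians(theta_deg)                     # feature abuts the edge
    d_r = np.array([np.sin(ang), np.cos(ang)])            # right eye's sight ray
    d_l = edge - e_l                                      # left eye's edge ray
    # Solve e_r + t*d_r = e_l + s*d_l for the intersection point.
    t, s = np.linalg.solve(np.column_stack([d_r, -d_l]), e_l - e_r)
    return (e_r + t * d_r)[1] - D

for theta in (0.25, 0.5, 1.0, 2.0):
    print(f"{theta:4.2f} deg from edge -> at least {min_depth_behind(theta) * 100:5.1f} cm behind")
```

Wider monocular zones thus imply larger minimum depth separations, which is how occlusion geometry can supply quantitative depth where disparity is undefined.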
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Are we blind to three-dimensional acceleration?.\n \n \n \n \n\n\n \n Lugtigheid, A. J., Allison, R. S., & Wilcox, L. M.\n\n\n \n\n\n\n In Vision Sciences Society Annual Meeting, Journal of Vision, volume 13, pages 970–970. 07 2013.\n \n\n\n\n
\n\n\n\n \n \n \"ArePaper\n  \n \n \n \"Are-1\n  \n \n \n \"Are-2\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n\n\n\n
\n
@incollection{lugtigheid_are_2013,\n\tabstract = {Accurate information about three-dimensional ({3D}) motion is essential for interception. Being able to detect changes in the speed of motion is potentially important, as approaching objects are unlikely to maintain constant velocity either by intent, or because of the force of gravity or friction. However, evidence from the interception literature shows that acceleration is not taken into account when judging time-to-contact from looming (i.e. retinal expansion). These data may reflect a curious insensitivity to {3D} acceleration, a possibility that has received little empirical attention. As a first step towards a better understanding of this apparent lack of sensitivity, we assessed discrimination thresholds for {3D} velocity changes. Observers viewed animations of an approaching object undergoing an increase (acceleration) or decrease (deceleration) in its simulated approach speed over the trial. The stimulus was a thin outline disk that was viewed monocularly, such that looming was the only available cue to motion in depth. On each trial, observers discriminated acceleration sign. We measured psychometric functions for three interleaved average speeds. To discourage observers from using non-relevant cues (e.g. due to regularities in the stimulus and correlations between variables) we randomized the simulated starting and ending distance. Our results show that observers were able to detect acceleration in depth, but their thresholds were very high (about a 25-33\% velocity change). While precision did not depend on average velocity, there was a velocity-dependent bias: observers were more likely to report that the object accelerated for higher average approach speeds and vice versa. Thus, observers were sensitive to the acceleration of an approaching object under minimal cue conditions, but they could not completely dissociate speed and acceleration. We will discuss which signals could support monocular discrimination of {3D} acceleration and produce the bias we found. Furthermore, we will extend these experiments to consider stereoscopic {3D} acceleration.\nMeeting abstract presented at {VSS} 2013},\n\tauthor = {Lugtigheid, Arthur J. and Allison, Robert S. and Wilcox, Laurie M.},\n\tbooktitle = {Vision Sciences Society Annual Meeting, Journal of Vision},\n\tdate-added = {2013-07-26 12:10:04 +0000},\n\tdate-modified = {2013-12-13 01:29:27 +0000},\n\tdoi = {10.1167/13.9.970},\n\tissn = {1534-7362},\n\tjournal = {Journal of Vision},\n\tkeywords = {Stereopsis, Motion in depth},\n\tlanguage = {en},\n\tmonth = {07},\n\tnumber = {9},\n\tpages = {970--970},\n\ttitle = {Are we blind to three-dimensional acceleration?},\n\turl = {http://www.journalofvision.org/content/13/9/970},\n\turldate = {2013-07-26},\n\tvolume = {13},\n\tyear = {2013},\n\turl-1 = {http://www.journalofvision.org/content/13/9/970},\n\turl-2 = {https://doi.org/10.1167/13.9.970}}\n\n
Accurate information about three-dimensional (3D) motion is essential for interception. Being able to detect changes in the speed of motion is potentially important, as approaching objects are unlikely to maintain constant velocity either by intent, or because of the force of gravity or friction. However, evidence from the interception literature shows that acceleration is not taken into account when judging time-to-contact from looming (i.e. retinal expansion). These data may reflect a curious insensitivity to 3D acceleration, a possibility that has received little empirical attention. As a first step towards a better understanding of this apparent lack of sensitivity, we assessed discrimination thresholds for 3D velocity changes. Observers viewed animations of an approaching object undergoing an increase (acceleration) or decrease (deceleration) in its simulated approach speed over the trial. The stimulus was a thin outline disk that was viewed monocularly, such that looming was the only available cue to motion in depth. On each trial, observers discriminated acceleration sign. We measured psychometric functions for three interleaved average speeds. To discourage observers from using non-relevant cues (e.g. due to regularities in the stimulus and correlations between variables) we randomized the simulated starting and ending distance. Our results show that observers were able to detect acceleration in depth, but their thresholds were very high (about a 25-33% velocity change). While precision did not depend on average velocity, there was a velocity-dependent bias: observers were more likely to report that the object accelerated for higher average approach speeds and vice versa. Thus, observers were sensitive to the acceleration of an approaching object under minimal cue conditions, but they could not completely dissociate speed and acceleration. We will discuss which signals could support monocular discrimination of 3D acceleration and produce the bias we found. Furthermore, we will extend these experiments to consider stereoscopic 3D acceleration. Meeting abstract presented at VSS 2013.
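Since looming was deliberately the only cue available in this study, it may help to make the signal concrete: a disk of radius r at distance Z subtends a retinal angle of 2·arctan(r/Z), so acceleration in depth shows up as an expansion rate that itself grows over time. A minimal sketch (all parameter values are illustrative, not taken from the study):

```python
import numpy as np

def retinal_angle(r, Z):
    """Retinal angle (radians) subtended by a disk of radius r at distance Z."""
    return 2.0 * np.arctan(r / Z)

r = 0.05                       # disk radius in metres (illustrative)
t = np.linspace(0.0, 1.0, 6)   # time samples in seconds
v0, a = 1.0, 0.5               # initial approach speed and acceleration (illustrative)

Z = 2.0 - (v0 * t + 0.5 * a * t**2)   # distance of the approaching disk over time
theta = retinal_angle(r, Z)

# For a > 0 the expansion rate d(theta)/dt itself increases over time;
# that change in looming rate is the only signal a monocular observer has.
print(np.degrees(theta))
```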
Perception of smooth and perturbed vection in short-duration microgravity. Kirollos, R., Allison, R., Zacher, J., Guterman, P. S., & Palmisano, S. In Vision Sciences Society Annual Meeting, Journal of Vision, volume 13, pages 702–702. 2013.
@incollection{kirollos_perception_2013,\n\tabstract = {Adaptation to the microgravity environment and readaptation to the 1-g environment requires recalibration of the visual and vestibular signals. Previous research on the perception of visually stimulated self-motion (vection) in 1-g environments has shown that adding simulated view-point oscillation enhances the illusion of self-motion. However, the role that simulated oscillation plays in vection in relation to adaptation to gravity remains unclear. The goal of this experiment was to understand how simulated viewpoint oscillation can change the subjective feeling of vection in microgravity compared to 1-g. This was done by measuring participant sensation of vection before, during, and after parabolic flight. Eight participants viewed twenty-second clips displayed on a thirteen-inch laptop equipped with a hood and shroud aboard the aircraft. The clips simulated vection in the radial, oscillation or jitter motion conditions and were presented during microgravity periods of the six parabolas of a flight. Participants were asked to rate their feeling of self-motion after each clip presentation. Onset of vection and vection duration were also measured by pressing a button on a gamepad during vection. Results showed that this oscillation enhancement was reduced in microgravity, and a small overall reduction in vection sensitivity was observed post-flight. A supplementary ground experiment demonstrated that vection did not vary significantly over multiple testing sessions and that the oscillation effect persisted as previously reported in the literature. These findings: (i) demonstrate that the oscillation advantage for vection is very stable and repeatable during 1-g conditions and (ii) imply that adaptation or conditioned responses played a role in the post-flight vection reductions. The effects observed in microgravity are discussed in terms of the ecology of terrestrial locomotion and the nature of movement in microgravity.},\n\tauthor = {Kirollos, Ramy and Allison, Robert and Zacher, James and Guterman, Pearl S. and Palmisano, Stephen},\n\tbooktitle = {Vision Sciences Society Annual Meeting, Journal of Vision},\n\tdate-added = {2013-07-26 12:02:05 +0000},\n\tdate-modified = {2013-07-26 12:08:27 +0000},\n\tdoi = {10.1167/13.9.702},\n\tissn = {1534-7362},\n\tjournal = {Journal of Vision},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tlanguage = {en},\n\tnumber = {9},\n\tpages = {702--702},\n\ttitle = {Perception of smooth and perturbed vection in short-duration microgravity},\n\turl = {http://www.journalofvision.org/content/13/9/702},\n\turldate = {2013-07-26},\n\tvolume = {13},\n\tyear = {2013},\n\turl-1 = {http://www.journalofvision.org/content/13/9/702},\n\turl-2 = {https://doi.org/10.1167/13.9.702}}
Adaptation to the microgravity environment and readaptation to the 1-g environment requires recalibration of the visual and vestibular signals. Previous research on the perception of visually stimulated self-motion (vection) in 1-g environments has shown that adding simulated view-point oscillation enhances the illusion of self-motion. However, the role that simulated oscillation plays in vection in relation to adaptation to gravity remains unclear. The goal of this experiment was to understand how simulated viewpoint oscillation can change the subjective feeling of vection in microgravity compared to 1-g. This was done by measuring participant sensation of vection before, during, and after parabolic flight. Eight participants viewed twenty-second clips displayed on a thirteen-inch laptop equipped with a hood and shroud aboard the aircraft. The clips simulated vection in the radial, oscillation or jitter motion conditions and were presented during microgravity periods of the six parabolas of a flight. Participants were asked to rate their feeling of self-motion after each clip presentation. Onset of vection and vection duration were also measured by pressing a button on a gamepad during vection. Results showed that this oscillation enhancement was reduced in microgravity, and a small overall reduction in vection sensitivity was observed post-flight. A supplementary ground experiment demonstrated that vection did not vary significantly over multiple testing sessions and that the oscillation effect persisted as previously reported in the literature. These findings: (i) demonstrate that the oscillation advantage for vection is very stable and repeatable during 1-g conditions and (ii) imply that adaptation or conditioned responses played a role in the post-flight vection reductions. The effects observed in microgravity are discussed in terms of the ecology of terrestrial locomotion and the nature of movement in microgravity.
Binocular contributions to linear vection. Allison, R. S., Ash, A., & Palmisano, S. A. In The 9th Asia-Pacific Conference on Vision (APCV 2013). PsyCh, volume 2 (Suppl 1), pages 48. 06 2013.
@incollection{Allison:2013fk,\n\tabstract = {Compelling illusions of self motion, known as vection, can be produced in a stationary observer by visual stimulation alone. The role of binocular vision and stereopsis in these illusions was explored in a series of three experiments. Linear vertical vection was produced by upward or downward translation of stereoscopic surfaces. The surfaces were horizontally-oriented depth corrugations produced by disparity modulation of patterns of persistent or short lifetime dot elements. The experiments demonstrate an increase in vection magnitude and decrease in vection latency with binocular viewing. Experiments utilising short lifetime dot stereograms demonstrated that this binocular enhancement was due to the motion of stereoscopically defined features.},\n\tannote = {July 5th-8th, 2013. Suzhou, China},\n\tauthor = {Allison, R. S. and Ash, A. and Palmisano, S. A.},\n\tbooktitle = {The 9th Asia-Pacific Conference on Vision (APCV 2013). PsyCh},\n\tdate-added = {2013-06-12 23:23:43 +0000},\n\tdate-modified = {2014-02-09 01:36:20 +0000},\n\tdoi = {10.1002/pchj.33},\n\tjournal = {PsyCh Journal},\n\tkeywords = {Stereopsis, Optic flow & Self Motion (also Locomotion & Aviation)},\n\tmonth = {06},\n\tpages = {48},\n\ttitle = {Binocular contributions to linear vection},\n\turl = {http://www.apcv2013.org/images/APCV-program-full-version.pdf},\n\tvolume = {2 (Suppl 1)},\n\tyear = {2013},\n\turl-1 = {http://www.apcv2013.org/images/APCV-program-full-version.pdf},\n\turl-2 = {https://doi.org/10.1002/pchj.33}}
Compelling illusions of self motion, known as vection, can be produced in a stationary observer by visual stimulation alone. The role of binocular vision and stereopsis in these illusions was explored in a series of three experiments. Linear vertical vection was produced by upward or downward translation of stereoscopic surfaces. The surfaces were horizontally-oriented depth corrugations produced by disparity modulation of patterns of persistent or short lifetime dot elements. The experiments demonstrate an increase in vection magnitude and decrease in vection latency with binocular viewing. Experiments utilising short lifetime dot stereograms demonstrated that this binocular enhancement was due to the motion of stereoscopically defined features.
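As a sketch of the stimulus construction described above (illustrative only, not the authors' code), a horizontally oriented depth corrugation can be produced by giving each dot a horizontal disparity that varies sinusoidally with its vertical position; regenerating the dots every few frames yields the short-lifetime version, in which only stereoscopically defined structure persists over time:

```python
import numpy as np

rng = np.random.default_rng(0)

def corrugation_frame(n_dots=500, amp=0.1, freq=2.0):
    """One frame of a random-dot stereogram with a horizontal depth
    corrugation: horizontal disparity varies sinusoidally with y.
    Units and parameter values are illustrative."""
    x = rng.uniform(-1, 1, n_dots)
    y = rng.uniform(-1, 1, n_dots)
    disparity = amp * np.sin(2 * np.pi * freq * y)
    left = np.column_stack([x - disparity / 2, y])
    right = np.column_stack([x + disparity / 2, y])
    return left, right

# Short-lifetime version: regenerate all dot positions every few frames,
# so no individual dot persists but the disparity-defined surface does.
frames = [corrugation_frame() for _ in range(10)]
```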
Vection during treadmill walking, walking on the spot and standing still. Palmisano, S. A., Ash, A., Govan, D. G., & Allison, R. S. In 40th Australasian Experimental Psychology Conference, April 3-6, 2013, Adelaide, Australia, pages 61. 2013.
@incollection{Palmisano:2013fk,\n\tabstract = {Traditionally vection studies have induced visual illusions of self-motion in physically stationary observers. Recently, two studies examined vection during treadmill walking. While one study found that treadmill walking in the same direction as the simulated self-motion impaired vection (Onimaru et al. 2010), the other found that this same situation enhanced vection (Seno et al. 2011). This study expands on these earlier investigations of active vection. Our subjects viewed radial optic flow (simulating forwards/backwards self-motion) while (a) walking forward on a treadmill at a matched speed, (b) walking on the spot or (c) standing still. On half the trials, the subject's tracked physical head movements were fed directly into the self-motion display, producing simulated viewpoint jitter. On the remainder, subjects viewed non-jittering optic flow (as in the two earlier studies). We found an overall reduction in the vection induced for all three walking conditions (consistent and inconsistent treadmill walking, as well as walking on the spot) compared to the stationary viewing condition. However, the addition of consistent simulated viewpoint oscillation to the self-motion display always improved vection (in both walking and stationary conditions alike). These findings suggest that complex multisensory interactions are involved in the perception of self-motion.},\n\tauthor = {Palmisano, S. A. and Ash, A. and Govan, D. G. and Allison, R. S.},\n\tbooktitle = {40th Australasian Experimental Psychology Conference, April 3-6, 2013, Adelaide, Australia},\n\tdate-added = {2013-04-02 23:40:16 +0000},\n\tdate-modified = {2013-04-02 23:45:07 +0000},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tpages = {61},\n\ttitle = {Vection during treadmill walking, walking on the spot and standing still},\n\turl-1 = {https://www.adelaide.edu.au/epc2013/abstracts/120.html},\n\tyear = {2013}}
Traditionally vection studies have induced visual illusions of self-motion in physically stationary observers. Recently, two studies examined vection during treadmill walking. While one study found that treadmill walking in the same direction as the simulated self-motion impaired vection (Onimaru et al. 2010), the other found that this same situation enhanced vection (Seno et al. 2011). This study expands on these earlier investigations of active vection. Our subjects viewed radial optic flow (simulating forwards/backwards self-motion) while (a) walking forward on a treadmill at a matched speed, (b) walking on the spot or (c) standing still. On half the trials, the subject's tracked physical head movements were fed directly into the self-motion display, producing simulated viewpoint jitter. On the remainder, subjects viewed non-jittering optic flow (as in the two earlier studies). We found an overall reduction in the vection induced for all three walking conditions (consistent and inconsistent treadmill walking, as well as walking on the spot) compared to the stationary viewing condition. However, the addition of consistent simulated viewpoint oscillation to the self-motion display always improved vection (in both walking and stationary conditions alike). These findings suggest that complex multisensory interactions are involved in the perception of self-motion.
inproceedings (5)
Cyber (Motion) Sickness in Active Stereoscopic 3D Gaming. Benzeroual, K., & Allison, R. S. In IEEE International Conference on 3D Imaging (IC3D), pages 1-7, December 3-5 2013. IEEE.
@inproceedings{Benzeroual:2013fk,\n\tabstract = {Mass-market stereoscopic 3D gaming has recently become a reality on both gaming consoles and PCs. At the same time the success of devices such as the Nintendo Wii, Nintendo Wii Balance Board, Sony Move and Microsoft Kinect have made active movement of the head, limbs and body a key means of interaction in many games. We hypothesized that players may be more prone to cybersickness symptoms in stereoscopic 3D games based on active movement compared to similar games played with controllers or other devices, which do not require physical movement of the body with the exception of the hands and fingers. Two experimental games were developed to test this hypothesis while keeping other parameters as constant as possible. For the disorientation and oculomotor cybersickness subscales and the overall score of the Simulator Sickness Questionnaire, a significant interaction between display mode (S3D versus non-stereoscopic) and motion sickness susceptibility was found. However, contrary to our hypothesis, there was no indication that participants were particularly susceptible to cybersickness in S3D motion controller games.},\n\tannote = {Dec 2013, Liege Belgium},\n\tauthor = {Benzeroual, K. and Allison, R. S.},\n\tbooktitle = {{IEEE} International Conference on 3D Imaging (IC3D)},\n\tdate-added = {2013-11-11 19:30:47 +0000},\n\tdate-modified = {2014-09-26 02:00:48 +0000},\n\tdoi = {10.1109/IC3D.2013.6732090},\n\tkeywords = {Stereopsis, Optic flow & Self Motion (also Locomotion & Aviation)},\n\tmonth = {December 3-5},\n\tpages = {1-7},\n\tpublisher = {{IEEE}},\n\ttitle = {Cyber (Motion) Sickness in Active Stereoscopic 3D Gaming},\n\turl = {http://percept.eecs.yorku.ca/papers/cybersickness.pdf},\n\tyear = {2013},\n\turl-1 = {http://percept.eecs.yorku.ca/papers/cybersickness.pdf},\n\turl-2 = {https://doi.org/10.1109/IC3D.2013.6732090}}
Mass-market stereoscopic 3D gaming has recently become a reality on both gaming consoles and PCs. At the same time the success of devices such as the Nintendo Wii, Nintendo Wii Balance Board, Sony Move and Microsoft Kinect have made active movement of the head, limbs and body a key means of interaction in many games. We hypothesized that players may be more prone to cybersickness symptoms in stereoscopic 3D games based on active movement compared to similar games played with controllers or other devices, which do not require physical movement of the body with the exception of the hands and fingers. Two experimental games were developed to test this hypothesis while keeping other parameters as constant as possible. For the disorientation and oculomotor cybersickness subscales and the overall score of the Simulator Sickness Questionnaire, a significant interaction between display mode (S3D versus non-stereoscopic) and motion sickness susceptibility was found. However, contrary to our hypothesis, there was no indication that participants were particularly susceptible to cybersickness in S3D motion controller games.
Robust Homography for Real-time Image Un-distortion. Chen, J., Benzeroual, K., & Allison, R. S. In IEEE International Conference on 3D Imaging (IC3D), pages 1-8, 12 2013. IEEE.
@inproceedings{Chen:2013fk,\n\tabstract = {Stereoscopic 3D film production has increased the need for efficient and robust camera calibration and tracking. Many of these tasks involve making planar correspondences and thus accurate, fast homography estimation is essential. However, homography estimation may fail with distorted images since the planar projected corners may be distorted far away from the ``perfect'' locations. On the other hand, precisely estimating lens distortion from a single image is still a challenge, especially in real-time applications. In this paper, we drop the assumption that the image distortion is negligible in homography estimation. We propose robust homography as a simple and efficient approach which combines homography mapping and image distortion estimation in a least-squares constraint. Our method can simultaneously estimate homography and image distortion from a single image in real-time. Compared with previous methods, it has two advantages: first, un-distortion can be achieved with little overhead due to the need for only a single calibration image and the real-time homography mapping of easy to track corners; second, due to the use of precise calibration targets the accuracy of our method is comparable to the multiple image calibration methods. In an experimental evaluation, we show that our method can accurately estimate image distortion parameters in both synthetic and real images. We also present its applications in close range un-distortion and robust corner detection.},\n\tannote = {Liege Belgium Dec 3-5, 2013},\n\tauthor = {Chen, J. and Benzeroual, K. and Allison, R. S.},\n\tbooktitle = {{IEEE} International Conference on 3D Imaging (IC3D)},\n\tdate-added = {2013-11-11 19:30:47 +0000},\n\tdate-modified = {2014-09-26 02:21:37 +0000},\n\tdoi = {10.1109/IC3D.2013.6732075},\n\tkeywords = {Misc.},\n\tmonth = {12},\n\tpages = {1-8},\n\tpublisher = {{IEEE}},\n\ttitle = {Robust Homography for Real-time Image Un-distortion},\n\tyear = {2013},\n\turl-1 = {http://percept.eecs.yorku.ca/papers/robust%20homography.pdf},\n\turl-2 = {https://doi.org/10.1109/IC3D.2013.6732075}}
Stereoscopic 3D film production has increased the need for efficient and robust camera calibration and tracking. Many of these tasks involve making planar correspondences and thus accurate, fast homography estimation is essential. However, homography estimation may fail with distorted images since the planar projected corners may be distorted far away from the ``perfect'' locations. On the other hand, precisely estimating lens distortion from a single image is still a challenge, especially in real-time applications. In this paper, we drop the assumption that the image distortion is negligible in homography estimation. We propose robust homography as a simple and efficient approach which combines homography mapping and image distortion estimation in a least-squares constraint. Our method can simultaneously estimate homography and image distortion from a single image in real-time. Compared with previous methods, it has two advantages: first, un-distortion can be achieved with little overhead due to the need for only a single calibration image and the real-time homography mapping of easy to track corners; second, due to the use of precise calibration targets the accuracy of our method is comparable to the multiple image calibration methods. In an experimental evaluation, we show that our method can accurately estimate image distortion parameters in both synthetic and real images. We also present its applications in close range un-distortion and robust corner detection.
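To make the idea of combining homography mapping and distortion estimation in a single least-squares problem concrete, here is a hedged sketch (not the authors' implementation; their parameterization may differ): given matched planar points and their distorted image positions, one can jointly solve for the eight homography parameters and a single radial distortion coefficient with a generic nonlinear least-squares solver.

```python
import numpy as np
from scipy.optimize import least_squares

def undistort(pts, k1, center):
    """One-parameter radial model: p_undist = c + (p - c) * (1 + k1 * r^2)."""
    d = pts - center
    r2 = np.sum(d**2, axis=1, keepdims=True)
    return center + d * (1.0 + k1 * r2)

def residuals(params, img_pts, plane_pts, center):
    k1 = params[0]
    H = np.append(params[1:], 1.0).reshape(3, 3)   # h33 fixed to 1
    und = undistort(img_pts, k1, center)
    ph = np.column_stack([plane_pts, np.ones(len(plane_pts))]) @ H.T
    proj = ph[:, :2] / ph[:, 2:3]                  # homography-mapped plane points
    return (proj - und).ravel()                    # joint reprojection error

def estimate(img_pts, plane_pts, center):
    """Jointly estimate (k1, H) from one view of a planar target."""
    x0 = np.zeros(9)
    x0[1] = x0[5] = 1.0                            # k1 = 0, H = identity
    sol = least_squares(residuals, x0, args=(img_pts, plane_pts, center))
    return sol.x[0], np.append(sol.x[1:], 1.0).reshape(3, 3)
```

Minimizing reprojection error over the distortion coefficient and homography together is what lets a single calibration image constrain both mappings at once.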
[Invited Talk] The perceptual consequences of vergence eye movements: A brief review. Allison, R. S. In Proceedings of the IEICE-Human Information Processing technical committee conference, Kyoto, Japan, Sept 12-13, 2013, volume HIP2013-55, pages 29-34, 2013. Technical Report of IEICE.
@inproceedings{Allison:2013uq,\n\tabstract = {Vergence eye movements are a key factor in stereoscopic depth perception. In this brief review I will outline work at York University on select aspects of the relation between vergence and stereopsis, sensory fusion and depth constancy.\n\nKeywords: vergence, depth perception, distance perception, eye movements, depth constancy, fusion},\n\tannote = {Kyoto, Japan, Sept 12-13, 2013},\n\tauthor = {Allison, R. S.},\n\tbooktitle = {Proceedings of the IEICE-Human Information Processing technical committee conference, Kyoto, Japan, Sept 12-13, 2013},\n\tdate-added = {2013-08-22 07:58:10 +0000},\n\tdate-modified = {2013-10-09 14:21:28 +0000},\n\tkeywords = {Eye Movements & Tracking},\n\torganization = {Technical Report of IEICE},\n\tpages = {29-34},\n\ttitle = {[Invited Talk] The perceptual consequences of vergence eye movements: A brief review},\n\turl = {http://www.eecs.yorku.ca/percept/papers/IEICE%20Kyoto%20paper.pdf},\n\tvolume = {HIP2013-55},\n\tyear = {2013},\n\turl-1 = {http://www.eecs.yorku.ca/percept/papers/IEICE%20Kyoto%20paper.pdf}}
Vergence eye movements are a key factor in stereoscopic depth perception. In this brief review I will outline work at York University on select aspects of the relation between vergence and stereopsis, sensory fusion and depth constancy. Keywords: vergence, depth perception, distance perception, eye movements, depth constancy, fusion.
Audio-Visual Integration in Stereoscopic 3D. Deas, L., Wilcox, L. M., Kazimi, A., & Allison, R. S. In Proceedings of the ACM Symposium on Applied Perception, Dublin, Ireland, pages 83-89, 09 2013.
@inproceedings{Deas:2013kx,\n\tabstract = {The perception of synchronous, intelligible speech is fundamental to a high-quality modern cinema experience. Surprisingly, this issue has remained relatively unexplored in stereoscopic 3D (S3D) media, despite its increasing popularity. Instead, visual parameters have been the primary focus of concern for those who create, and those who study the impact of, S3D content. In the work presented here we ask if the ability to integrate audio and visual information is influenced by adding the third dimension to film. We also investigate the effects of known visual parameters (horizontal and vertical parallax) on audio-visual integration. To this end, we use an illusion of speech processing known as the McGurk effect as an objective measure of multi-modal integration. In the classic (2D) version of this phenomenon, discrepant auditory (/ba/) and visual (/ga/) information typically results in the perception of a unique `fusion' syllable (e.g. /da/). We extended this paradigm to measure the McGurk effect in a small theatre. We varied the horizontal (IA: 0, 6, 12, 18, 24 mm) and vertical (0, 0.5, 0.75, 1 deg) parallax from trial-to-trial and asked observers to report their percept of the phoneme. Our results show a consistently high proportion of the expected fusion responses, with no effect of horizontal or vertical offsets. These data are the first to show that the McGurk effect extends to stereoscopic stimuli and is not a phenomenon isolated to 2D media perception. Furthermore, the results show that audiences can tolerate a high level of both horizontal and vertical disparity and maintain veridical speech perception. We consider these results in terms of current stereoscopic filmmaking recommendations and practices.},\n\tannote = {Dublin, Sept 2013},\n\tauthor = {Deas, L. and Wilcox, L. M. and Kazimi, A. and Allison, R. S.},\n\tbooktitle = {Proceedings of the ACM Symposium on Applied Perception, Dublin, Ireland},\n\tdate-added = {2013-06-12 23:20:52 +0000},\n\tdate-modified = {2019-02-03 09:36:53 -0500},\n\tdoi = {10.1145/2492494.2492506},\n\tkeywords = {Stereopsis},\n\tmonth = {09},\n\tpages = {83-89},\n\ttitle = {Audio-Visual Integration in Stereoscopic 3D},\n\turl = {http://percept.eecs.yorku.ca/papers/p83-deas.pdf},\n\tyear = {2013},\n\turl-1 = {http://percept.eecs.yorku.ca/papers/p83-deas.pdf},\n\turl-2 = {https://doi.org/10.1145/2492494.2492506}}
The perception of synchronous, intelligible speech is fundamental to a high-quality modern cinema experience. Surprisingly, this issue has remained relatively unexplored in stereoscopic 3D (S3D) media, despite its increasing popularity. Instead, visual parameters have been the primary focus of concern for those who create, and those who study the impact of, S3D content. In the work presented here we ask if the ability to integrate audio and visual information is influenced by adding the third dimension to film. We also investigate the effects of known visual parameters (horizontal and vertical parallax) on audio-visual integration. To this end, we use an illusion of speech processing known as the McGurk effect as an objective measure of multi-modal integration. In the classic (2D) version of this phenomenon, discrepant auditory (/ba/) and visual (/ga/) information typically results in the perception of a unique `fusion' syllable (e.g. /da/). We extended this paradigm to measure the McGurk effect in a small theatre. We varied the horizontal (IA: 0, 6, 12, 18, 24 mm) and vertical (0, 0.5, 0.75, 1 deg) parallax from trial-to-trial and asked observers to report their percept of the phoneme. Our results show a consistently high proportion of the expected fusion responses, with no effect of horizontal or vertical offsets. These data are the first to show that the McGurk effect extends to stereoscopic stimuli and is not a phenomenon isolated to 2D media perception. Furthermore, the results show that audiences can tolerate a high level of both horizontal and vertical disparity and maintain veridical speech perception. We consider these results in terms of current stereoscopic filmmaking recommendations and practices.
Gaze-Contingent Simulations of Visual Defects in Virtual Environment: Challenges and Limitations. Vinnikov, M., & Allison, R. S. In ACM CHI 2013 Workshop Gaze Interaction in the Post-WIMP World, pages VA13, 1-4, Paris, France, 04 2013.
@inproceedings{Vinnikov2013chi,\n\taddress = {Paris, France},\n\tauthor = {Vinnikov, M. and Allison, R. S.},\n\tbooktitle = {{ACM CHI} 2013 Workshop Gaze Interaction in the Post-{WIMP} World},\n\tdate-added = {2013-02-11 00:00:00 -0400},\n\tdate-modified = {2016-01-03 03:24:16 +0000},\n\tkeywords = {Eye Movements & Tracking},\n\tmonth = {04},\n\tpages = {VA13, 1-4},\n\ttitle = {Gaze-Contingent Simulations of Visual Defects in Virtual Environment: Challenges and Limitations},\n\turl = {http://gaze-interaction.net/wp-system/wp-content/uploads/2013/04/VA13.pdf},\n\tyear = {2013},\n\turl-1 = {http://gaze-interaction.net/wp-system/wp-content/uploads/2013/04/VA13.pdf}}
2012 (5)
article (9)
The relative contributions of radial and laminar optic flow to the perception of linear self-motion. Harris, L. R., Herpers, R., Jenkin, M., Allison, R. S., Jenkin, H., Kapralos, B., Scherfgen, D., & Felsner, S. Journal of Vision, 12(10). 09 2012.
@article{harris_relative_2012,\n\tabstract = {When illusory self-motion is induced in a stationary observer by optic flow, the perceived distance traveled is generally overestimated relative to the distance of a remembered target (Redlick, Harris, and Jenkin, 2001): subjects feel they have gone further than the simulated distance and indicate that they have arrived at a target's previously seen location too early. In this article we assess how the radial and laminar components of translational optic flow contribute to the perceived distance traveled. Subjects monocularly viewed a target presented in a virtual hallway wallpapered with stripes that periodically changed color to prevent tracking. The target was then extinguished and the visible area of the hallway shrunk to an oval region 40 deg(h) x 24 deg(v). Subjects either continued to look centrally or shifted their gaze eccentrically, thus varying the relative amounts of radial and laminar flow visible. They were then presented with visual motion compatible with moving down the hallway toward the target and pressed a button when they perceived that they had reached the target's remembered position. Data were modeled by the output of a leaky spatial integrator (Lappe, Jenkin, and Harris, 2007). The sensory gain varied systematically with viewing eccentricity while the leak constant was independent of viewing eccentricity. Results were modeled as the linear sum of separate mechanisms sensitive to radial and laminar optic flow. Results are compatible with independent channels for processing the radial and laminar flow components of optic flow that add linearly to produce large but predictable errors in perceived distance traveled.},\n\tauthor = {Harris, Laurence R. and Herpers, Rainer and Jenkin, Michael and Allison, Robert S. and Jenkin, Heather and Kapralos, Bill and Scherfgen, David and Felsner, Sandra},\n\tdate-added = {2013-02-07 01:40:26 +0000},\n\tdate-modified = {2013-02-07 01:40:26 +0000},\n\tdoi = {10.1167/12.10.7},\n\tissn = {1534-7362},\n\tjournal = {Journal of Vision},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tlanguage = {en},\n\tmonth = 09,\n\tnumber = {10},\n\ttitle = {The relative contributions of radial and laminar optic flow to the perception of linear self-motion},\n\turl = {http://www.journalofvision.org/content/12/10/7},\n\tvolume = {12},\n\tyear = {2012},\n\turl-1 = {http://www.journalofvision.org/content/12/10/7},\n\turl-2 = {https://doi.org/10.1167/12.10.7}}
When illusory self-motion is induced in a stationary observer by optic flow, the perceived distance traveled is generally overestimated relative to the distance of a remembered target (Redlick, Harris, and Jenkin, 2001): subjects feel they have gone further than the simulated distance and indicate that they have arrived at a target's previously seen location too early. In this article we assess how the radial and laminar components of translational optic flow contribute to the perceived distance traveled. Subjects monocularly viewed a target presented in a virtual hallway wallpapered with stripes that periodically changed color to prevent tracking. The target was then extinguished and the visible area of the hallway shrunk to an oval region 40 deg(h) x 24 deg(v). Subjects either continued to look centrally or shifted their gaze eccentrically, thus varying the relative amounts of radial and laminar flow visible. They were then presented with visual motion compatible with moving down the hallway toward the target and pressed a button when they perceived that they had reached the target's remembered position. Data were modeled by the output of a leaky spatial integrator (Lappe, Jenkin, and Harris, 2007). The sensory gain varied systematically with viewing eccentricity while the leak constant was independent of viewing eccentricity. Results were modeled as the linear sum of separate mechanisms sensitive to radial and laminar optic flow. Results are compatible with independent channels for processing the radial and laminar flow components of optic flow that add linearly to produce large but predictable errors in perceived distance traveled.
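For readers unfamiliar with the leaky spatial integrator cited here (Lappe, Jenkin, and Harris, 2007): in one standard formulation, the perceived distance-to-go D decays with distance traveled x as dD/dx = -g - aD, with sensory gain g and leak rate a, and the observer responds when D reaches zero. A minimal sketch under that assumption (parameter values illustrative, not fitted data from the study):

```python
import numpy as np

def distance_to_go(x, D0, gain, leak):
    """Closed-form solution of dD/dx = -gain - leak * D with D(0) = D0,
    one standard formulation of leaky path integration (assumed here)."""
    if leak == 0:
        return D0 - gain * x
    return (D0 + gain / leak) * np.exp(-leak * x) - gain / leak

# The observer reports arrival when D = 0, i.e. at
# x_stop = (1 / leak) * log(1 + leak * D0 / gain).
D0, gain, leak = 10.0, 1.2, 0.05          # illustrative values
x_stop = np.log(1 + leak * D0 / gain) / leak
print(x_stop, distance_to_go(x_stop, D0, gain, leak))   # D is ~0 at x_stop
```

With gain above one, x_stop is smaller than the simulated target distance, which reproduces the "arrived too early" pattern the abstract describes.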
Da Vinci decoded: Does da Vinci stereopsis rely on disparity? Tsirlin, I., Wilcox, L. M., & Allison, R. S. Journal of Vision, 12(12). 2012.
@article{Tsirlin01112012,\n\tabstract = {In conventional stereopsis, the depth between two objects is computed based on the retinal disparity in the position of matching points in the two eyes. When an object is occluded by another object in the scene, so that it is visible only in one eye, its retinal disparity cannot be computed. Nakayama and Shimojo (1990) found that a percept of quantitative depth between the two objects could still be established for such stimuli and proposed that this percept is based on the constraints imposed by occlusion geometry. They named this and other occlusion-based depth phenomena ``da Vinci stereopsis.'' Subsequent research found quantitative depth based on occlusion geometry in several other classes of stimuli grouped under the term da Vinci stereopsis. However, Nakayama and Shimojo's findings were later brought into question by Gillam, Cook, and Blackburn (2003), who suggested that quantitative depth in their stimuli was perceived based on conventional disparity. In order to understand whether da Vinci stereopsis relies on one type of mechanism or whether its function is stimulus dependent we examine the nature and source of depth in the class of stimuli used by Nakayama and Shimojo (1990). We use three different psychophysical and computational methods to show that the most likely source for depth in these stimuli is occlusion geometry. Based on these experiments and previous data we discuss the potential mechanisms responsible for processing depth from monocular features in da Vinci stereopsis.},\n\tauthor = {Tsirlin, Inna and Wilcox, Laurie M. and Allison, Robert S.},\n\tdate-added = {2012-11-04 23:57:04 +0000},\n\tdate-modified = {2012-11-04 23:57:19 +0000},\n\tdoi = {10.1167/12.12.2},\n\teprint = {http://www.journalofvision.org/content/12/12/2.full.pdf+html},\n\tjournal = {Journal of Vision},\n\tkeywords = {Stereopsis},\n\tnumber = {12},\n\ttitle = {Da Vinci decoded: Does da Vinci stereopsis rely on disparity?},\n\turl = {http://www.journalofvision.org/content/12/12/2.abstract},\n\tvolume = {12},\n\tyear = {2012},\n\turl-1 = {http://www.journalofvision.org/content/12/12/2.abstract},\n\turl-2 = {https://doi.org/10.1167/12.12.2}}
In conventional stereopsis, the depth between two objects is computed based on the retinal disparity in the position of matching points in the two eyes. When an object is occluded by another object in the scene, so that it is visible only in one eye, its retinal disparity cannot be computed. Nakayama and Shimojo (1990) found that a percept of quantitative depth between the two objects could still be established for such stimuli and proposed that this percept is based on the constraints imposed by occlusion geometry. They named this and other occlusion-based depth phenomena ``da Vinci stereopsis.'' Subsequent research found quantitative depth based on occlusion geometry in several other classes of stimuli grouped under the term da Vinci stereopsis. However, Nakayama and Shimojo's findings were later brought into question by Gillam, Cook, and Blackburn (2003), who suggested that quantitative depth in their stimuli was perceived based on conventional disparity. In order to understand whether da Vinci stereopsis relies on one type of mechanism or whether its function is stimulus dependent we examine the nature and source of depth in the class of stimuli used by Nakayama and Shimojo (1990). We use three different psychophysical and computational methods to show that the most likely source for depth in these stimuli is occlusion geometry. Based on these experiments and previous data we discuss the potential mechanisms responsible for processing depth from monocular features in da Vinci stereopsis.
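To give quantitative intuition for the occlusion-geometry constraint at issue (standard small-angle disparity geometry, not a formula taken from the paper): a feature seen by one eye at angular separation φ from the occluding edge is consistent with depths at or beyond the point where that separation equals the equivalent binocular disparity.

```latex
% Small-angle geometry: viewing distance D, interocular separation I,
% equivalent disparity \varphi, depth \Delta behind the occluder.
\[
  \varphi \;\approx\; \frac{I\,\Delta}{D^{2}}
  \qquad\Longrightarrow\qquad
  \Delta_{\min} \;\approx\; \frac{\varphi\, D^{2}}{I}.
\]
```

On this account, perceived depth should scale with the angular separation of the monocular feature from the occluder, which is the signature of occlusion geometry rather than conventional matched-point disparity.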
Perception of smooth and perturbed vection in short-duration microgravity. Allison, R., Zacher, J. E., Kirollos, R., Guterman, P., & Palmisano, S. Experimental Brain Research, 223(4): 479-487. 2012.
@article{Allison:2012uq,\n\tabstract = {Successful adaptation to the microgravity environment of space and readaptation to gravity on earth requires recalibration of visual and vestibular signals. Recently, we have shown that adding simulated viewpoint oscillation to visual self-motion displays produces more compelling vection (despite the expected increase in visual-vestibular conflict experienced by stationary observers). Currently, it is unclear what role adaptation to gravity might play in this oscillation-based vection advantage. The vection elicited by optic flow displays simulating either smooth forward motion or forward motion perturbed by viewpoint oscillation was assessed before, during and after microgravity exposure in parabolic flight. During normal 1-g conditions subjects experienced significantly stronger vection for oscillating compared to smooth radial optic flow. The magnitude of this oscillation enhancement was reduced during short-term microgravity exposure, more so for simulated interaural (as opposed to spinal) axis viewpoint oscillation. We also noted a small overall reduction in vection sensitivity post-flight. A supplementary experiment found that 1-g vection responses did not vary significantly across multiple testing sessions. These findings: (i) demonstrate that the oscillation advantage for vection is very stable and repeatable during 1-g conditions and (ii) imply that adaptation or conditioned responses played a role in the post-flight vection reductions. The effects observed in microgravity are discussed in terms of the ecology of terrestrial locomotion and the nature of movement in microgravity.},\n\tauthor = {Allison, R.S. and Zacher, J. E. and Kirollos, R. and Guterman, P.S. and Palmisano, S.A.},\n\tdate-added = {2012-09-13 03:57:33 +0000},\n\tdate-modified = {2014-09-26 01:48:02 +0000},\n\tdoi = {10.1007/s00221-012-3275-5},\n\tjournal = {Experimental Brain Research},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tnumber = {4},\n\tpages = {479-487},\n\ttitle = {Perception of smooth and perturbed vection in short-duration microgravity},\n\turl = {http://percept.eecs.yorku.ca/papers/microg%20preprint%202012.pdf},\n\tvolume = {223},\n\tyear = {2012},\n\turl-1 = {http://percept.eecs.yorku.ca/papers/microg%20preprint%202012.pdf},\n\turl-2 = {https://doi.org/10.1007/s00221-012-3275-5}}
Successful adaptation to the microgravity environment of space and readaptation to gravity on earth requires recalibration of visual and vestibular signals. Recently, we have shown that adding simulated viewpoint oscillation to visual self-motion displays produces more compelling vection (despite the expected increase in visual-vestibular conflict experienced by stationary observers). Currently, it is unclear what role adaptation to gravity might play in this oscillation-based vection advantage. The vection elicited by optic flow displays simulating either smooth forward motion or forward motion perturbed by viewpoint oscillation was assessed before, during and after microgravity exposure in parabolic flight. During normal 1-g conditions subjects experienced significantly stronger vection for oscillating compared to smooth radial optic flow. The magnitude of this oscillation enhancement was reduced during short-term microgravity exposure, more so for simulated interaural (as opposed to spinal) axis viewpoint oscillation. We also noted a small overall reduction in vection sensitivity post-flight. A supplementary experiment found that 1-g vection responses did not vary significantly across multiple testing sessions. These findings: (i) demonstrate that the oscillation advantage for vection is very stable and repeatable during 1-g conditions and (ii) imply that adaptation or conditioned responses played a role in the post-flight vection reductions. The effects observed in microgravity are discussed in terms of the ecology of terrestrial locomotion and the nature of movement in microgravity.
Influence of head orientation and viewpoint oscillation on linear vection. Guterman, P., Allison, R. S., Palmisano, S., & Zacher, J. E. Journal of Vestibular Research, 22(2-3): 105-116. 2012.
@article{Guterman:fk,\n\tabstract = {Sensory conflict theories predict that adding simulated viewpoint oscillation to self-motion displays should generate significant and sustained visual-vestibular conflict and reduce the likelihood of illusory self-motion (vection). However, research shows that viewpoint oscillation enhances vection in upright observers. This study examined whether the oscillation advantage for vection depends on head orientation with respect to gravity. Displays that simulated forward/backward self-motion with/without horizontal and vertical viewpoint oscillation were presented to observers in upright (seated and standing) and lying (supine, prone, and left side down) body postures. Viewpoint oscillation was found to enhance vection for all of the body postures tested. Vection also tended to be stronger in upright postures than in lying postures. Changing the orientation of the head with respect to gravity was expected to alter the degree/saliency of the sensory conflict, which may explain the overall posture-based differences in vection strength. However, this does not explain why the oscillation advantage for vection persisted for all postures. Thus, the current postural and oscillation based vection findings appear to be better explained by ecology: Upright postures and oscillating flow (that are the norm during self-motion) improved vection, whereas lying postures and smooth optic flows (which are less common) impaired vection.},\n\tauthor = {Guterman, P. and Allison, R. S. and Palmisano, S.A. and Zacher, J. E.},\n\tdate-added = {2012-05-25 23:31:32 -0400},\n\tdate-modified = {2014-09-26 01:49:30 +0000},\n\tdoi = {10.3233/VES-2012-0448},\n\tjournal = {Journal of Vestibular Research},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tnumber = {2-3},\n\tpages = {105-116},\n\ttitle = {Influence of head orientation and viewpoint oscillation on linear vection},\n\turl = {http://percept.eecs.yorku.ca/papers/Guterman%202012.pdf},\n\tvolume = {22},\n\tyear = {2012},\n\turl-1 = {http://percept.eecs.yorku.ca/papers/Guterman%202012.pdf},\n\turl-2 = {https://doi.org/10.3233/VES-2012-0448}}
Sensory conflict theories predict that adding simulated viewpoint oscillation to self-motion displays should generate significant and sustained visual-vestibular conflict and reduce the likelihood of illusory self-motion (vection). However, research shows that viewpoint oscillation enhances vection in upright observers. This study examined whether the oscillation advantage for vection depends on head orientation with respect to gravity. Displays that simulated forward/backward self-motion with/without horizontal and vertical viewpoint oscillation were presented to observers in upright (seated and standing) and lying (supine, prone, and left side down) body postures. Viewpoint oscillation was found to enhance vection for all of the body postures tested. Vection also tended to be stronger in upright postures than in lying postures. Changing the orientation of the head with respect to gravity was expected to alter the degree/saliency of the sensory conflict, which may explain the overall posture-based differences in vection strength. However, this does not explain why the oscillation advantage for vection persisted for all postures. Thus, the current postural and oscillation based vection findings appear to be better explained by ecology: Upright postures and oscillating flow (that are the norm during self-motion) improved vection, whereas lying postures and smooth optic flows (which are less common) impaired vection.
Stereoscopy and the Human Visual System. Banks, M. S., Read, J. R., Allison, R. S., & Watt, S. J. SMPTE Motion Imaging (Winner of 2013 SMPTE Journal Certificate of Merit; also appears in the SMPTE International Conference on Stereoscopic 3D for Media and Entertainment proceedings), 121(4): 24-43. 2012.
@article{Banks:2012kx,\n\tabstract = {Stereoscopic displays have become very important for many applications, including operation of remote devices, medical imaging, surgery, scientific visualization, computer-assisted design, and more. But the most significant and exciting development is the incorporation of stereo technology into entertainment: specifically, cinema, television, and video games. It is important in these applications for stereo 3D imagery to create a faithful impression of the 3D structure of the scene being portrayed. It is also important that the viewer is comfortable and does not leave the experience with eye fatigue or a headache, and that the presentation of the stereo images does not create temporal artifacts like flicker or motion judder.\nHere we review current research on stereo human vision and how it informs us about how best to create and present stereo 3D imagery. The paper is divided into four parts: 1) getting the geometry right; 2) depth cue interactions in stereo 3D media; 3) focusing and fixating on stereo images; and 4) temporal presentation protocols: flicker, motion artifacts, and depth distortion.},\n\tauthor = {Martin S. Banks and Jenny R. Read and Robert S. Allison and Simon J. Watt},\n\tdate-added = {2012-04-30 18:57:25 -0400},\n\tdate-modified = {2013-07-18 04:25:20 +0000},\n\tdoi = {10.5594/j18173},\n\tjournal = {SMPTE Motion Imaging (Winner of 2013 SMPTE Journal Certificate of Merit, also appears in SMPTE International Conference on Stereoscopic 3D for Media and Entertainment Conference proceedings)},\n\tkeywords = {Stereopsis},\n\tnumber = {4},\n\tpages = {24-43},\n\ttitle = {Stereoscopy and the Human Visual System},\n\tvolume = {121},\n\tyear = {2012},\n\turl-1 = {https://doi.org/10.5594/j18173}}
Stereoscopic displays have become very important for many applications, including operation of remote devices, medical imaging, surgery, scientific visualization, computer-assisted design, and more. But the most significant and exciting development is the incorporation of stereo technology into entertainment: specifically, cinema, television, and video games. It is important in these applications for stereo 3D imagery to create a faithful impression of the 3D structure of the scene being portrayed. It is also important that the viewer is comfortable and does not leave the experience with eye fatigue or a headache, and that the presentation of the stereo images does not create temporal artifacts like flicker or motion judder. Here we review current research on stereo human vision and how it informs us about how best to create and present stereo 3D imagery. The paper is divided into four parts: 1) getting the geometry right; 2) depth cue interactions in stereo 3D media; 3) focusing and fixating on stereo images; and 4) temporal presentation protocols: flicker, motion artifacts, and depth distortion.
Motion aftereffect in depth based on binocular information. Sakano, Y., Allison, R., & Howard, I. Journal of Vision, 12(1), Article 11: 1–15. 2012.
@article{Sakano:2011kx,\n\tabstract = {We examined whether a negative motion aftereffect occurs in the depth direction following adaptation to motion in depth based on changing disparity and/or interocular velocity differences. To dissociate these cues, we used three types of adapters: random-element stereograms that were correlated (1) temporally and binocularly, (2) temporally but not binocularly, and (3) binocularly but not temporally. Only the temporally correlated adapters contained coherent interocular velocity differences while only the binocularly correlated adapters contained coherent changing disparity. A motion aftereffect in depth occurred after adaptation to the temporally correlated stereograms while little or no aftereffect occurred following adaptation to the temporally uncorrelated stereograms. Interestingly, a monocular test pattern also showed a comparable motion aftereffect in a diagonal direction in depth after adaptation to the temporally correlated stereograms. The lack of the aftereffect following adaptation to pure changing disparity was also confirmed using spatially separated random-dot patterns. These results are consistent with the existence of a mechanism sensitive to interocular velocity differences, which is adaptable (at least in part) at binocular stages of motion-in-depth processing. We did not find any evidence for the existence of an ``adaptable'' mechanism specialized to see motion in depth based on changing disparity.},\n\tauthor = {Sakano, Y. and Allison, R.S. and Howard, I.P.},\n\tdate-added = {2011-12-16 18:52:35 +0000},\n\tdate-modified = {2019-02-03 09:07:15 -0500},\n\tdoi = {10.1167/12.1.11},\n\tjournal = {Journal of Vision},\n\tkeywords = {Motion in depth, Stereopsis},\n\tnumber = {1, Article 11},\n\tpages = {1--15},\n\ttitle = {Motion aftereffect in depth based on binocular information},\n\tvolume = {12},\n\tyear = {2012},\n\turl-1 = {https://doi.org/10.1167/12.1.11}}
We examined whether a negative motion aftereffect occurs in the depth direction following adaptation to motion in depth based on changing disparity and/or interocular velocity differences. To dissociate these cues, we used three types of adapters: random-element stereograms that were correlated (1) temporally and binocularly, (2) temporally but not binocularly, and (3) binocularly but not temporally. Only the temporally correlated adapters contained coherent interocular velocity differences while only the binocularly correlated adapters contained coherent changing disparity. A motion aftereffect in depth occurred after adaptation to the temporally correlated stereograms while little or no aftereffect occurred following adaptation to the temporally uncorrelated stereograms. Interestingly, a monocular test pattern also showed a comparable motion aftereffect in a diagonal direction in depth after adaptation to the temporally correlated stereograms. The lack of the aftereffect following adaptation to pure changing disparity was also confirmed using spatially separated random-dot patterns. These results are consistent with the existence of a mechanism sensitive to interocular velocity differences, which is adaptable (at least in part) at binocular stages of motion-in-depth processing. We did not find any evidence for the existence of an ``adaptable'' mechanism specialized to see motion in depth based on changing disparity.
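The three adapter types hinge on which correlations survive: persistent dots carry coherent monocular motion in each eye (and hence interocular velocity differences), while binocularly matched dots carry coherent changing disparity. An illustrative sketch of the construction (not the authors' stimulus code; positions and speeds are arbitrary units):

```python
import numpy as np

rng = np.random.default_rng(1)

def adapter(n_dots, n_frames, temporally_corr, binocularly_corr, speed=0.01):
    """Left/right-eye dot x-positions over time for the three adapter types."""
    xL = rng.uniform(-1, 1, n_dots)
    xR = xL.copy() if binocularly_corr else rng.uniform(-1, 1, n_dots)
    frames = []
    for f in range(n_frames):
        if not temporally_corr:                  # fresh matched dots each frame:
            xL = rng.uniform(-1, 1, n_dots)      # disparity still ramps (CD),
            xR = xL.copy()                       # but no coherent motion (no IOVD)
        frames.append((xL - speed * f / 2, xR + speed * f / 2))
    return frames

cd_plus_iovd = adapter(300, 60, temporally_corr=True,  binocularly_corr=True)
iovd_only    = adapter(300, 60, temporally_corr=True,  binocularly_corr=False)
cd_only      = adapter(300, 60, temporally_corr=False, binocularly_corr=True)
```

In the `iovd_only` case the two eyes see different dot sets drifting in opposite directions, so there is coherent interocular velocity difference but no matchable disparity signal; `cd_only` is the reverse.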
Perceptual asymmetry reveals neural substrates underlying stereoscopic transparency. Tsirlin, I., Allison, R., & Wilcox, L. Vision Research, 54(1): 1-11. 02 2012.
@article{Tsirlin:2011pa,\n\tabstract = {We describe a perceptual asymmetry found in stereoscopic perception of overlaid random-dot surfaces. Specifically, the minimum separation in depth needed to perceptually segregate two overlaid surfaces depended on the distribution of dots across the surfaces. With the total dot density fixed, significantly larger inter-plane disparities were required for perceptual segregation of the surfaces when the front surface had fewer dots than the back surface compared to when the back surface was the one with fewer dots. We propose that our results reflect an asymmetry in the signal strength of the front and back surfaces due to the assignment of the spaces between the dots to the back surface by disparity interpolation. This hypothesis was supported by the results of two experiments designed to reduce the imbalance in the neuronal response to the two surfaces. We modeled the psychophysical data with a network of inter-neural connections: excitatory within-disparity and inhibitory across disparity, where the spread of disparity was modulated according to figure-ground assignment. These psychophysical and computational findings suggest that stereoscopic transparency depends on both inter-neural interactions of disparity-tuned cells and higher-level processes governing figure-ground segregation.},\n\tauthor = {Tsirlin, I. and Allison, R.S. and Wilcox, L.M.},\n\tdate-added = {2011-11-17 19:32:31 -0500},\n\tdate-modified = {2014-09-26 01:22:44 +0000},\n\tdoi = {10.1016/j.visres.2011.11.013},\n\tjournal = {Vision Research},\n\tkeywords = {Stereopsis},\n\tmonth = {02},\n\tnumber = {1},\n\tpages = {1-11},\n\ttitle = {Perceptual asymmetry reveals neural substrates underlying stereoscopic transparency},\n\turl = {http://percept.eecs.yorku.ca/papers/asymmetry.pdf},\n\tvolume = {54},\n\tyear = {2012},\n\turl-1 = {http://percept.eecs.yorku.ca/papers/asymmetry.pdf},\n\turl-2 = {https://doi.org/10.1016/j.visres.2011.11.013}}

We describe a perceptual asymmetry found in stereoscopic perception of overlaid random-dot surfaces. Specifically, the minimum separation in depth needed to perceptually segregate two overlaid surfaces depended on the distribution of dots across the surfaces. With the total dot density fixed, significantly larger inter-plane disparities were required for perceptual segregation of the surfaces when the front surface had fewer dots than the back surface compared to when the back surface was the one with fewer dots. We propose that our results reflect an asymmetry in the signal strength of the front and back surfaces due to the assignment of the spaces between the dots to the back surface by disparity interpolation. This hypothesis was supported by the results of two experiments designed to reduce the imbalance in the neuronal response to the two surfaces. We modeled the psychophysical data with a network of inter-neural connections: excitatory within-disparity and inhibitory across-disparity, where the spread of disparity was modulated according to figure-ground assignment. These psychophysical and computational findings suggest that stereoscopic transparency depends on both inter-neural interactions of disparity-tuned cells and higher-level processes governing figure-ground segregation.
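
For intuition, here is a minimal numerical sketch (in Python) of the kind of cooperative network the abstract describes: units tuned to (position, disparity) excite neighbouring units at the same disparity and inhibit units at other disparities. All sizes, gains, kernels and the update rule are illustrative assumptions, not the published model.

import numpy as np

N_POS, N_DISP, ITERS = 64, 9, 20     # illustrative network dimensions
W_EXC, W_INH = 0.15, 0.05            # excitatory/inhibitory gains (assumed)

rng = np.random.default_rng(0)
act = rng.random((N_DISP, N_POS))    # initial disparity-detector responses

kernel = np.array([0.25, 0.5, 1.0, 0.5, 0.25])   # spatial spread of excitation

for _ in range(ITERS):
    # Excitation: activity spreads along position within a disparity plane.
    exc = np.stack([np.convolve(row, kernel, mode="same") for row in act])
    # Inhibition: at each position, units at other disparities suppress a unit.
    inh = act.sum(axis=0, keepdims=True) - act
    act = np.clip(act + W_EXC * exc - W_INH * inh, 0.0, 1.0)

surface_at = act.argmax(axis=0)      # winning disparity at each position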

Visibility of Color Breakup Phenomena in Displays based on Narrowband Spectral Sources.
Allison, R., Irving, E., Babu, R., Lillakas, L., Guthrie, S., & Wilcox, L.
IEEE Journal of Display Technology, 8(4): 186-193. 2012.

@article{Allison:2012fk,
  abstract = {The hypothesis that artefacts related to chromatic aberration from eyeglasses may be more objectionable in laser projectors compared to conventional digital projectors was investigated. Untrained observers viewed movie clips in a theater and made image quality ratings. The same four clips were presented on both a standard Xenon display and a prototype laser projector in separate blocks. There was no evidence that observers noticed color break-up artefacts using either mode of presentation.},
  author = {Allison, R.S. and Irving, E.L. and Babu, R. and Lillakas, L. and Guthrie, S. and Wilcox, L.M.},
  date-added = {2011-09-04 03:06:31 -0400},
  date-modified = {2014-09-26 01:52:45 +0000},
  doi = {10.1109/JDT.2011.2170957},
  journal = {IEEE Journal of Display Technology},
  keywords = {Misc.},
  number = {4},
  pages = {186-193},
  title = {Visibility of Color Breakup Phenomena in Displays based on Narrowband Spectral Sources},
  url = {http://percept.eecs.yorku.ca/papers/JDT2170957.pdf},
  url-1 = {https://doi.org/10.1109/JDT.2011.2170957},
  volume = {8},
  year = {2012}}

The hypothesis that artefacts related to chromatic aberration from eyeglasses may be more objectionable in laser projectors compared to conventional digital projectors was investigated. Untrained observers viewed movie clips in a theater and made image quality ratings. The same four clips were presented on both a standard Xenon display and a prototype laser projector in separate blocks. There was no evidence that observers noticed color break-up artefacts using either mode of presentation.

The effect of crosstalk on depth magnitude in thin structures.
Tsirlin, I., Allison, R., & Wilcox, L.
Journal of Electronic Imaging (an earlier version also published in Electronic Imaging 2012: Stereoscopic Displays and Applications), 21: 011003.1-8. 2012.

@article{Tsirlin:2012ys,
  abstract = {Stereoscopic displays must present separate images to the viewer's left and right eyes. Crosstalk is the unwanted contamination of one eye's image from the image of the other eye. It has been shown to cause distortions, reduce image quality and visual comfort and increase perceived workload when performing visual tasks. Crosstalk also affects one's ability to perceive stereoscopic depth although little consideration has been given to the perception of depth magnitude in the presence of crosstalk. In this paper we extend a previous study (Tsirlin, Allison \& Wilcox, 2010, submitted) on the perception of depth magnitude in stereoscopic occluding and non-occluding surfaces to the special case of crosstalk in thin structures. Crosstalk in thin structures differs qualitatively from that in larger objects due to the separation of the ghost and real images and thus theoretically could have distinct perceptual consequences. To address this question we used a psychophysical paradigm, where observers estimated the perceived depth difference between two thin vertical bars using a measurement scale. Our data show that crosstalk degrades perceived depth. As crosstalk levels increased the magnitude of perceived depth decreased, especially for stimuli with larger relative disparities. In contrast to the effect of crosstalk on depth magnitude in larger objects, in thin structures, a significant detrimental effect was found at all disparities. Our findings, when considered with the other perceptual consequences of crosstalk, suggest that its presence in S3D media even in modest amounts will reduce observers' satisfaction.},
  author = {Tsirlin, I. and Allison, R.S. and Wilcox, L.M.},
  date-added = {2011-08-10 13:41:50 -0400},
  date-modified = {2018-11-25 14:40:06 -0500},
  doi = {10.1117/1.JEI.21.1.011003},
  journal = {Journal of Electronic Imaging (an earlier version also published in Electronic Imaging 2012: Stereoscopic Displays and Applications)},
  keywords = {Stereopsis},
  pages = {011003.1-8},
  title = {The effect of crosstalk on depth magnitude in thin structures},
  url = {http://percept.eecs.yorku.ca/inna_jei.pdf},
  url-1 = {https://doi.org/10.1117/1.JEI.21.1.011003},
  volume = {21},
  year = {2012}}

Stereoscopic displays must present separate images to the viewer's left and right eyes. Crosstalk is the unwanted contamination of one eye's image from the image of the other eye. It has been shown to cause distortions, reduce image quality and visual comfort and increase perceived workload when performing visual tasks. Crosstalk also affects one's ability to perceive stereoscopic depth although little consideration has been given to the perception of depth magnitude in the presence of crosstalk. In this paper we extend a previous study (Tsirlin, Allison & Wilcox, 2010, submitted) on the perception of depth magnitude in stereoscopic occluding and non-occluding surfaces to the special case of crosstalk in thin structures. Crosstalk in thin structures differs qualitatively from that in larger objects due to the separation of the ghost and real images and thus theoretically could have distinct perceptual consequences. To address this question we used a psychophysical paradigm, where observers estimated the perceived depth difference between two thin vertical bars using a measurement scale. Our data show that crosstalk degrades perceived depth. As crosstalk levels increased the magnitude of perceived depth decreased, especially for stimuli with larger relative disparities. In contrast to the effect of crosstalk on depth magnitude in larger objects, in thin structures, a significant detrimental effect was found at all disparities. Our findings, when considered with the other perceptual consequences of crosstalk, suggest that its presence in S3D media even in modest amounts will reduce observers' satisfaction.

inbook (1)

Models of Disparity Detectors.
Allison, R., & Howard, I.
In Howard, I., & Rogers, B., editors, Perceiving in Depth, Volume 2: Stereoscopic Vision, pages 40-50. Oxford University Press, New York, 2012.

@inbook{Allison:2012fj,
  address = {New York},
  author = {Allison, R.S. and Howard, I.P.},
  booktitle = {Perceiving in Depth, Volume 2: {S}tereoscopic Vision},
  date-added = {2012-07-02 19:43:53 -0400},
  date-modified = {2012-07-02 20:46:45 -0400},
  doi = {10.1093/acprof:oso/9780199764150.001.0001},
  editor = {I. Howard and B.J. Rogers},
  keywords = {Depth perception},
  pages = {40-50},
  publisher = {Oxford University Press},
  title = {Models of Disparity Detectors},
  url-1 = {https://doi.org/10.1093/acprof:oso/9780199764150.001.0001},
  year = {2012}}

incollection (7)

Optic flow and self-motion perception: The contribution of different parts of the field.
Harris, L. R., Herpers, R., Jenkin, M., Allison, R. S., Jenkin, H., Kaprolos, B., Scherfgen, D., & Felsner, S.
In Society for Neuroscience Abstracts, pages 672.14. Society for Neuroscience, 2012.

@incollection{Harris:2012uq,
  annote = {New Orleans, October 13--17, 2012},
  author = {Harris, L. R. and Herpers, R. and Jenkin, M. and Allison, R. S. and Jenkin, H. and Kaprolos, B. and Scherfgen, D. and Felsner, S.},
  booktitle = {Society for Neuroscience Abstracts},
  date-added = {2013-01-22 01:45:23 +0000},
  date-modified = {2013-01-22 01:46:14 +0000},
  keywords = {Optic flow & Self Motion (also Locomotion & Aviation)},
  pages = {672.14},
  publisher = {Society for Neuroscience},
  title = {Optic flow and self-motion perception: The contribution of different parts of the field},
  url-1 = {http://www.yorku.ca/harris/pubs/sfn_2012_bonn_rhein_sieg.pdf},
  year = {2012}}

The effect of crosstalk on perceived depth in 3D displays.
Tsirlin, I., Wilcox, L. M., & Allison, R. S.
In OSA Fall Vision 2012, Journal of Vision, volume 12, pages 4-4. 12 2012. Sept 2012, Rochester, NY.

@incollection{tsirlin_effect_2012,
  abstract = {Crosstalk in stereoscopic displays is defined as the leakage of one eye's image into the image of the other eye. All popular commercial stereoscopic viewing systems, including the ones used in movie theaters, suffer from crosstalk to some extent. It has been shown that crosstalk causes image distortions and reduces image quality. Moreover, it decreases visual comfort and affects one's ability to discriminate object shape and judge the relative depth of two objects. These results have potentially important implications for the quality and the accuracy of depth percepts in 3D display systems. To assess this hypothesis directly, we have explored the effect of crosstalk on the perceived magnitude of depth in a variety of stereoscopic stimuli. We found that with simple synthetic images increasing crosstalk beyond four percent resulted in a significant decrease in the magnitude of perceived depth, especially for larger disparities. This degradation was largely independent of the spatial separation of the ghost image. Further, we found qualitatively and quantitatively similar detrimental effects of crosstalk on perceived depth in complex images of natural scenes. The consistency of the negative impact of crosstalk, regardless of image complexity, suggests that it is not ameliorated by the presence of pictorial depth cues. We have recommended that display manufacturers keep crosstalk levels below the critical value of four percent to achieve optimal depth quality. Meeting abstract presented at OSA Fall Vision 2012.},
  author = {Tsirlin, Inna and Wilcox, Laurie M. and Allison, Robert S.},
  booktitle = {OSA Fall Vision 2012, Journal of Vision},
  date-added = {2012-12-28 12:53:19 +0000},
  date-modified = {2012-12-28 13:03:08 +0000},
  doi = {10.1167/12.14.4},
  issn = {1534-7362},
  journal = {Journal of Vision},
  keywords = {Stereopsis},
  language = {en},
  month = {12},
  note = {Sept 2012 Rochester, NY},
  number = {14},
  pages = {4--4},
  title = {The effect of crosstalk on perceived depth in {3D} displays},
  url = {http://www.journalofvision.org/content/12/14/4},
  url-1 = {https://doi.org/10.1167/12.14.4},
  urldate = {2012-12-28},
  volume = {12},
  year = {2012}}

Crosstalk in stereoscopic displays is defined as the leakage of one eye's image into the image of the other eye. All popular commercial stereoscopic viewing systems, including the ones used in movie theaters, suffer from crosstalk to some extent. It has been shown that crosstalk causes image distortions and reduces image quality. Moreover, it decreases visual comfort and affects one's ability to discriminate object shape and judge the relative depth of two objects. These results have potentially important implications for the quality and the accuracy of depth percepts in 3D display systems. To assess this hypothesis directly, we have explored the effect of crosstalk on the perceived magnitude of depth in a variety of stereoscopic stimuli. We found that with simple synthetic images increasing crosstalk beyond four percent resulted in a significant decrease in the magnitude of perceived depth, especially for larger disparities. This degradation was largely independent of the spatial separation of the ghost image. Further, we found qualitatively and quantitatively similar detrimental effects of crosstalk on perceived depth in complex images of natural scenes. The consistency of the negative impact of crosstalk, regardless of image complexity, suggests that it is not ameliorated by the presence of pictorial depth cues. We have recommended that display manufacturers keep crosstalk levels below the critical value of four percent to achieve optimal depth quality. Meeting abstract presented at OSA Fall Vision 2012.

Postural and viewpoint oscillation effects on the perception of self-motion.
Guterman, P. S., Allison, R. S., Palmisano, S., & Zacher, J. E.
In Journal of Vision (VSS Abstract), volume 12, pages 576-576. 08 2012.

@incollection{guterman_postural_2012,
  abstract = {Adding viewpoint oscillation to displays increases the likelihood of visually induced self-motion (vection), even though sensory conflict theories predict that it should generate significant and sustained visual-vestibular conflict. This effect has been shown in upright observers, for which the simulated self-motion and oscillation were congruent with or orthogonal to gravity. Here we examined whether this oscillation advantage for vection depends on the orientation of the body with respect to gravity. Observers in upright (seated and standing) and lying (supine, prone, and left side down) postures viewed displays of radial optic flow simulating forward/backward self-motion, with or without horizontal or vertical viewpoint oscillation. Vection magnitude (compared to a reference stimulus), onset, duration, and vection dropouts were compared among postures. Viewpoint oscillation enhanced vection for all of the body postures tested. Vection also tended to be stronger in upright than in lying postures. Changing body orientation with respect to gravity was expected to alter the degree/saliency of the sensory conflict, and may explain the posture-based differences in vection magnitude. However, this does not explain why the oscillation advantage for vection persisted for all postures. Given that the upright posture and oscillating flow (the norm during real self-motion) improved vection, and lying postures and smooth flow (which are atypical in our experience of self-motion) impaired vection, we conclude that postural and oscillation-based vection findings are better explained by ecology. Meeting abstract presented at VSS 2012.},
  author = {Guterman, Pearl S. and Allison, Robert S. and Palmisano, Stephen and Zacher, James E.},
  booktitle = {Journal of Vision (VSS Abstract)},
  date-added = {2012-08-11 12:48:36 +0000},
  date-modified = {2012-08-11 12:48:36 +0000},
  doi = {10.1167/12.9.576},
  issn = {1534-7362},
  journal = {Journal of Vision},
  keywords = {Optic flow & Self Motion (also Locomotion & Aviation)},
  month = {08},
  number = {9},
  pages = {576--576},
  title = {Postural and viewpoint oscillation effects on the perception of self-motion},
  url = {http://www.journalofvision.org/content/12/9/576},
  url-1 = {https://doi.org/10.1167/12.9.576},
  volume = {12},
  year = {2012}}

Adding viewpoint oscillation to displays increases the likelihood of visually induced self-motion (vection), even though sensory conflict theories predict that it should generate significant and sustained visual-vestibular conflict. This effect has been shown in upright observers, for which the simulated self-motion and oscillation were congruent with or orthogonal to gravity. Here we examined whether this oscillation advantage for vection depends on the orientation of the body with respect to gravity. Observers in upright (seated and standing) and lying (supine, prone, and left side down) postures viewed displays of radial optic flow simulating forward/backward self-motion, with or without horizontal or vertical viewpoint oscillation. Vection magnitude (compared to a reference stimulus), onset, duration, and vection dropouts were compared among postures. Viewpoint oscillation enhanced vection for all of the body postures tested. Vection also tended to be stronger in upright than in lying postures. Changing body orientation with respect to gravity was expected to alter the degree/saliency of the sensory conflict, and may explain the posture-based differences in vection magnitude. However, this does not explain why the oscillation advantage for vection persisted for all postures. Given that the upright posture and oscillating flow (the norm during real self-motion) improved vection, and lying postures and smooth flow (which are atypical in our experience of self-motion) impaired vection, we conclude that postural and oscillation-based vection findings are better explained by ecology. Meeting abstract presented at VSS 2012.

Depth from diplopic stimuli without vergence eye movements.
Lugtigheid, A., Wilcox, L., Allison, R. S., & Howard, I.
In Journal of Vision (VSS Abstract), volume 12, pages 451-451. 08 2012.

@incollection{lugtigheid_depth_2012,
  abstract = {It is well-established that stereoscopic depth is obtained over a large range of retinal disparities, including those that produce diplopia (double images). Under normal viewing conditions, observers make vergence eye movements to minimize large disparities, and it has been suggested that observers judge depth sign for diplopic stimuli by monitoring the vergence signal. Here we ask if vergence eye movements are required to judge depth order (disparity sign) of diplopic stimuli. We created an open-loop stimulus by presenting stereoscopic afterimages, for which eye movements cannot provide feedback about depth sign or magnitude. We produced afterimages of line stereograms consisting of precision-milled slits in aluminum plates that were back-illuminated by a photographic flash. Each half-image consisted of two thin (1x10mm) vertical slits, positioned above and below a small (1mm) fixation {LED}. The half-images were viewed through a modified mirror stereoscope, so that the fused image formed two narrow bars in the mid-sagittal plane. On each trial, the upper and lower bars were displaced in depth by one of five equal and opposite disparities (two in the range of fusion, one zero and two that were diplopic). After each presentation, observers (n=15) judged which bar was closer to them. Observers reliably judged the sign of disparity for both diplopic and fused images. We conclude that judgments of disparity sign for diplopic stimuli do not depend on extraretinal information, but are recovered directly from the retinal disparity signal. Meeting abstract presented at {VSS} 2012.},
  author = {Lugtigheid, Arthur and Wilcox, Laurie and Allison, Robert S. and Howard, Ian},
  booktitle = {Journal of Vision (VSS Abstract)},
  date-added = {2012-08-11 12:48:36 +0000},
  date-modified = {2013-01-22 01:43:55 +0000},
  doi = {10.1167/12.9.451},
  issn = {1534-7362},
  journal = {Journal of Vision},
  keywords = {Stereopsis},
  month = {08},
  number = {9},
  pages = {451--451},
  title = {Depth from diplopic stimuli without vergence eye movements},
  url = {http://www.journalofvision.org/content/12/9/451},
  url-1 = {https://doi.org/10.1167/12.9.451},
  volume = {12},
  year = {2012}}

It is well-established that stereoscopic depth is obtained over a large range of retinal disparities, including those that produce diplopia (double images). Under normal viewing conditions, observers make vergence eye movements to minimize large disparities, and it has been suggested that observers judge depth sign for diplopic stimuli by monitoring the vergence signal. Here we ask if vergence eye movements are required to judge depth order (disparity sign) of diplopic stimuli. We created an open-loop stimulus by presenting stereoscopic afterimages, for which eye movements cannot provide feedback about depth sign or magnitude. We produced afterimages of line stereograms consisting of precision-milled slits in aluminum plates that were back-illuminated by a photographic flash. Each half-image consisted of two thin (1x10mm) vertical slits, positioned above and below a small (1mm) fixation LED. The half-images were viewed through a modified mirror stereoscope, so that the fused image formed two narrow bars in the mid-sagittal plane. On each trial, the upper and lower bars were displaced in depth by one of five equal and opposite disparities (two in the range of fusion, one zero and two that were diplopic). After each presentation, observers (n=15) judged which bar was closer to them. Observers reliably judged the sign of disparity for both diplopic and fused images. We conclude that judgments of disparity sign for diplopic stimuli do not depend on extraretinal information, but are recovered directly from the retinal disparity signal. Meeting abstract presented at VSS 2012.

On the allocation of attention in stereoscopic displays.
Carey, A., Wilcox, L., & Allison, R.
In Journal of Vision (VSS Abstract), volume 12, pages 216-216. 08 2012.

@incollection{carey_allocation_2012,
  abstract = {It has been shown that disparity can be used as a token for visual search (possibly pre-attentively). However, there has been no systematic investigation of the distribution of attentional resources across the disparity dimension. Here we evaluated whether position in depth, relative to the screen plane, influences attentional allocation. We conducted two experiments using the same visual search task but with different stimuli. In the first experiment, stimuli consisted of four simple geometric shapes and in the second experiment the stimuli consisted of four orientated lines enclosed by a circle. In both cases, the stimuli were arranged in an annulus about a central fixation marker. On each trial, observers indicated whether the target was present or not within the annular array. The distractor number was varied randomly on each trial (2, 4, 6, or 8) and the target was present on half of the trials. On all trials, one element had a disparity offset by 10 arcmin relative to the others. On half of the target-present trials the target was in the disparate location; on the remainder it was presented at the distractor disparity. Trials were further subdivided such that on equal numbers of trials the disparate element was on or off the plane of the screen. We measured search time, analysing only trials on which observers responded correctly. Both experiments showed that when the target was the disparate item, reaction time was significantly faster when the target was off the screen plane compared to at the screen plane. This was true for a range of crossed and uncrossed disparities. We conclude that there exists a selective attentional bias for stimuli lying off the screen plane. These data are the first evidence of a disparity-selective attentional bias that is not mediated by relative disparity. Meeting abstract presented at VSS 2012.},
  author = {Carey, Andrea and Wilcox, Laurie and Allison, Robert},
  booktitle = {Journal of Vision (VSS Abstract)},
  date-added = {2012-08-11 12:48:36 +0000},
  date-modified = {2012-08-11 12:48:36 +0000},
  doi = {10.1167/12.9.216},
  issn = {1534-7362},
  journal = {Journal of Vision},
  keywords = {Stereopsis},
  month = {08},
  number = {9},
  pages = {216--216},
  title = {On the allocation of attention in stereoscopic displays},
  url = {http://www.journalofvision.org/content/12/9/216},
  url-1 = {https://doi.org/10.1167/12.9.216},
  volume = {12},
  year = {2012}}

It has been shown that disparity can be used as a token for visual search (possibly pre-attentively). However, there has been no systematic investigation of the distribution of attentional resources across the disparity dimension. Here we evaluated whether position in depth, relative to the screen plane, influences attentional allocation. We conducted two experiments using the same visual search task but with different stimuli. In the first experiment, stimuli consisted of four simple geometric shapes and in the second experiment the stimuli consisted of four orientated lines enclosed by a circle. In both cases, the stimuli were arranged in an annulus about a central fixation marker. On each trial, observers indicated whether the target was present or not within the annular array. The distractor number was varied randomly on each trial (2, 4, 6, or 8) and the target was present on half of the trials. On all trials, one element had a disparity offset by 10 arcmin relative to the others. On half of the target-present trials the target was in the disparate location; on the remainder it was presented at the distractor disparity. Trials were further subdivided such that on equal numbers of trials the disparate element was on or off the plane of the screen. We measured search time, analysing only trials on which observers responded correctly. Both experiments showed that when the target was the disparate item, reaction time was significantly faster when the target was off the screen plane compared to at the screen plane. This was true for a range of crossed and uncrossed disparities. We conclude that there exists a selective attentional bias for stimuli lying off the screen plane. These data are the first evidence of a disparity-selective attentional bias that is not mediated by relative disparity. Meeting abstract presented at VSS 2012.

Is depth in monocular regions processed by disparity detectors? A computational analysis.
Tsirlin, I., Allison, R., & Wilcox, L.
In Journal of Vision (VSS Abstract), volume 12, pages 215-215. 08 2012.

@incollection{tsirlin_is_2012,
  abstract = {Depth from binocular disparity relies on finding matching points in the images of the two eyes. However, not all points have a corresponding match since some regions are visible to one eye only. These regions, known as monocular occlusions, play an important role in stereoscopic depth perception supporting both qualitative and quantitative depth percepts. However, it is debated whether these percepts could be signaled by the activity of disparity detectors or require cells specifically tuned to detect monocular occlusions. The goal of the present work is to assess the degree to which model disparity detectors are able to compute the direction and the amount of depth perceived from monocular occlusions. It has been argued that disparity-selective neurons in V1 essentially perform a cross-correlation on the images of the two eyes. Consequently, we have applied a windowed cross-correlation algorithm to several monocular occlusion stimuli presented in the literature (see also Harris \& Smith, {VSS} 2010). We computed depth maps and correlation profiles and measured the reliability and the strength of the disparity signal generated by cross-correlation. Our results show that although the algorithm is able to predict perceived depth in monocularly occluded regions for some stimuli, it fails to do so for others. Moreover, for virtually all monocularly occluded regions the reliability and the signal strength of depth estimates are low in comparison to estimates made in binocular regions. We also find that depth estimates for monocular areas are highly sensitive to the window size and the range of disparities used to compute the cross-correlation. We conclude that disparity detectors, at least those that perform cross-correlation, cannot account for all instances of depth perceived from monocular occlusions. A more complex mechanism, potentially involving monocular occlusion detectors, is required to account for depth in these stimuli. Meeting abstract presented at VSS 2012.},
  author = {Tsirlin, Inna and Allison, Robert and Wilcox, Laurie},
  booktitle = {Journal of Vision (VSS Abstract)},
  date-added = {2012-08-11 12:48:36 +0000},
  date-modified = {2012-08-11 12:48:36 +0000},
  doi = {10.1167/12.9.215},
  issn = {1534-7362},
  journal = {Journal of Vision},
  keywords = {Stereopsis},
  month = {08},
  number = {9},
  pages = {215--215},
  title = {Is depth in monocular regions processed by disparity detectors? A computational analysis},
  url = {http://www.journalofvision.org/content/12/9/215},
  url-1 = {https://doi.org/10.1167/12.9.215},
  volume = {12},
  year = {2012}}

Depth from binocular disparity relies on finding matching points in the images of the two eyes. However, not all points have a corresponding match since some regions are visible to one eye only. These regions, known as monocular occlusions, play an important role in stereoscopic depth perception supporting both qualitative and quantitative depth percepts. However, it is debated whether these percepts could be signaled by the activity of disparity detectors or require cells specifically tuned to detect monocular occlusions. The goal of the present work is to assess the degree to which model disparity detectors are able to compute the direction and the amount of depth perceived from monocular occlusions. It has been argued that disparity-selective neurons in V1 essentially perform a cross-correlation on the images of the two eyes. Consequently, we have applied a windowed cross-correlation algorithm to several monocular occlusion stimuli presented in the literature (see also Harris & Smith, VSS 2010). We computed depth maps and correlation profiles and measured the reliability and the strength of the disparity signal generated by cross-correlation. Our results show that although the algorithm is able to predict perceived depth in monocularly occluded regions for some stimuli, it fails to do so for others. Moreover, for virtually all monocularly occluded regions the reliability and the signal strength of depth estimates are low in comparison to estimates made in binocular regions. We also find that depth estimates for monocular areas are highly sensitive to the window size and the range of disparities used to compute the cross-correlation. We conclude that disparity detectors, at least those that perform cross-correlation, cannot account for all instances of depth perceived from monocular occlusions. A more complex mechanism, potentially involving monocular occlusion detectors, is required to account for depth in these stimuli. Meeting abstract presented at VSS 2012.
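
For reference, a minimal windowed normalised cross-correlation disparity estimator of the general kind the abstract describes looks like the Python sketch below. This is a generic textbook implementation, not the authors' code; the window size and disparity range are exactly the free parameters the abstract reports the estimates are sensitive to.

import numpy as np

def disparity_map(left, right, window=7, max_disp=16):
    """Disparity by maximising normalised cross-correlation (NCC) between
    a window in the left image and horizontally shifted windows in the
    right image (both float 2-D arrays of the same shape)."""
    h, w = left.shape
    half = window // 2
    disp = np.zeros((h, w))
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1].ravel()
            patch = patch - patch.mean()
            best, best_d = -np.inf, 0
            for d in range(max_disp + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].ravel()
                cand = cand - cand.mean()
                denom = np.linalg.norm(patch) * np.linalg.norm(cand)
                ncc = patch @ cand / denom if denom > 0 else 0.0
                if ncc > best:
                    best, best_d = ncc, d
            # In half-occluded (monocular) regions no true match exists, so
            # the winning NCC peak is weak and the estimate unreliable,
            # which is the behaviour the abstract reports.
            disp[y, x] = best_d
    return disp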

Vection in depth during treadmill locomotion.
Ash, A., Palmisano, S., & Allison, R.
In Journal of Vision (VSS Abstract), volume 12, pages 181-181. 08 2012.

@incollection{ash_vection_2012,
  abstract = {The research on vection during treadmill locomotion appears contradictory. For example, Onimaru (2010) reported that walking forwards on a treadmill impaired the vection induced by expanding flow, whereas Seno et al. (2011) appeared to find a vection enhancement in these conditions. These previous studies both examined smooth self-motion displays, despite the fact that jittering displays have consistently been shown to improve vection in seated observers. We simulated constant velocity expanding and contracting optic flow displays, in which subjects' physical movements were either updated as additional display jitter (synchronised head-display motion) or not updated into the self-motion display. We also varied the display/treadmill forward speed -- these could be simulated at either 4 km/hr or 5 km/hr. Subjects viewed displays in real-time while walking on a treadmill or on the spot and as passive playbacks while stationary. Subjects rated their perceived strength of vection in depth using a joystick (compared to a standard reference stimulus). We found vection impairments for both expanding and contracting optic flow displays and similar impairments when subjects actively walked on the spot. Despite finding a general vection impairment for active walking, faster display/treadmill forward speeds and synchronised head-display jitter improved vection. It was concluded that vection impairments while walking appear to be independent of the display's simulated direction and the nature of one's walking activity. Meeting abstract presented at VSS 2012.},
  author = {Ash, April and Palmisano, Stephen and Allison, Robert},
  booktitle = {Journal of Vision (VSS Abstract)},
  date-added = {2012-08-11 12:48:36 +0000},
  date-modified = {2012-08-11 12:48:36 +0000},
  doi = {10.1167/12.9.181},
  issn = {1534-7362},
  journal = {Journal of Vision},
  keywords = {Optic flow & Self Motion (also Locomotion & Aviation)},
  month = {08},
  number = {9},
  pages = {181--181},
  title = {Vection in depth during treadmill locomotion},
  url = {http://www.journalofvision.org/content/12/9/181},
  url-1 = {https://doi.org/10.1167/12.9.181},
  volume = {12},
  year = {2012}}

The research on vection during treadmill locomotion appears contradictory. For example, Onimaru (2010) reported that walking forwards on a treadmill impaired the vection induced by expanding flow, whereas Seno et al. (2011) appeared to find a vection enhancement in these conditions. These previous studies both examined smooth self-motion displays, despite the fact that jittering displays have consistently been shown to improve vection in seated observers. We simulated constant velocity expanding and contracting optic flow displays, in which subjects' physical movements were either updated as additional display jitter (synchronised head-display motion) or not updated into the self-motion display. We also varied the display/treadmill forward speed: these could be simulated at either 4 km/hr or 5 km/hr. Subjects viewed displays in real-time while walking on a treadmill or on the spot and as passive playbacks while stationary. Subjects rated their perceived strength of vection in depth using a joystick (compared to a standard reference stimulus). We found vection impairments for both expanding and contracting optic flow displays and similar impairments when subjects actively walked on the spot. Despite finding a general vection impairment for active walking, faster display/treadmill forward speeds and synchronised head-display jitter improved vection. It was concluded that vection impairments while walking appear to be independent of the display's simulated direction and the nature of one's walking activity. Meeting abstract presented at VSS 2012.

inproceedings (4)

Calibration for High-Definition Camera Rigs with Marker Chessboard.
Chen, J., Benzeroual, K., & Allison, R. S.
In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 3DCine Workshop (CVPRW), pages 29-36, Providence, Rhode Island, 06 2012.

@inproceedings{Chen:2012uq,
  abstract = {The geometrical calibration of a high-definition camera rig is an important step for 3D film making and computer vision applications. Due to the large amount of image data in high-definition, maintaining execution speeds appropriate for on-set, on-line adjustment procedures is one of the biggest challenges for machine vision based calibration methods. Our aims are to provide a low-cost, fast and accurate system to calibrate both the intrinsic and extrinsic parameters of a stereo camera rig. We first propose a novel calibration target that we call marker chessboard to speed up the corner detection. Then we develop an automatic key frame selection algorithm to optimize frames used in calibration. We also propose a bundle adjustment method to overcome the geometrical inaccuracy of the chessboard. Finally we introduce an online stereo camera calibration system based on the above improvements.},
  address = {Providence, Rhode Island},
  author = {Chen, J. and Benzeroual, K. and Allison, R. S.},
  booktitle = {IEEE Computer Society Conference on Computer Vision and Pattern Recognition, {3DC}ine Workshop ({CVPRW})},
  date-added = {2012-04-30 18:54:41 -0400},
  date-modified = {2014-09-26 02:24:14 +0000},
  doi = {10.1109/CVPRW.2012.6238905},
  keywords = {Stereopsis},
  month = {06},
  pages = {29-36},
  title = {Calibration for High-Definition Camera Rigs with Marker Chessboard},
  url = {http://percept.eecs.yorku.ca/papers/marker%20chessboard.pdf},
  url-1 = {https://doi.org/10.1109/CVPRW.2012.6238905},
  year = {2012}}

The geometrical calibration of a high-definition camera rig is an important step for 3D film making and computer vision applications. Due to the large amount of image data in high-definition, maintaining execution speeds appropriate for on-set, on-line adjustment procedures is one of the biggest challenges for machine vision based calibration methods. Our aims are to provide a low-cost, fast and accurate system to calibrate both the intrinsic and extrinsic parameters of a stereo camera rig. We first propose a novel calibration target that we call marker chessboard to speed up the corner detection. Then we develop an automatic key frame selection algorithm to optimize frames used in calibration. We also propose a bundle adjustment method to overcome the geometrical inaccuracy of the chessboard. Finally we introduce an online stereo camera calibration system based on the above improvements.
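
For context, a conventional OpenCV chessboard calibration pipeline for a stereo rig looks roughly like the Python sketch below. This is standard OpenCV usage for illustration only; it does not implement the paper's marker chessboard, key-frame selection, or bundle-adjustment refinement, and the pattern and square sizes are assumed values.

import cv2
import numpy as np

PATTERN = (9, 6)      # inner corners per row/column (assumed)
SQUARE = 0.025        # chessboard square size in metres (assumed)

def calibrate_stereo(frame_pairs, image_size):
    """Estimate per-camera intrinsics, then the rig extrinsics (R, T).
    frame_pairs: list of (left, right) grayscale images; image_size: (w, h)."""
    objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

    obj_pts, left_pts, right_pts = [], [], []
    for left, right in frame_pairs:
        ok_l, c_l = cv2.findChessboardCorners(left, PATTERN)
        ok_r, c_r = cv2.findChessboardCorners(right, PATTERN)
        if ok_l and ok_r:                 # keep frames seen by both cameras
            obj_pts.append(objp)
            left_pts.append(c_l)
            right_pts.append(c_r)

    _, k1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, image_size, None, None)
    _, k2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, image_size, None, None)
    _, k1, d1, k2, d2, r, t, _, _ = cv2.stereoCalibrate(
        obj_pts, left_pts, right_pts, k1, d1, k2, d2, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return k1, d1, k2, d2, r, t

The paper's contribution targets the slow steps in this pipeline (corner detection on high-definition frames and the choice of which frames to use), which the generic version above handles naively.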

3D Display size matters: Compensating for the perceptual effects of S3D display scaling.
Benzeroual, K., Allison, R. S., & Wilcox, L.
In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 3DCine Workshop (CVPRW), pages 45-52, Providence, Rhode Island, 06 2012.

@inproceedings{Benzeroual:2012kx,
  abstract = {Over the last decade, advances in technology have made stereoscopic 3D (S3D) displays widely available with an ever-expanding variety of technologies, dimensions, resolution, optimal viewing angle and image quality. Of these, one of the most variable and unpredictable factors influencing the observer's S3D experience is the display size, which ranges from S3D mobile devices to large-format 3D movie theatres. This variety poses a challenge to 3D content makers who wish to preserve the three dimensional artistic context and avoid distortions and artefacts related to scaling. This paper will review the primary human factors issues related to S3D image scaling and the techniques and algorithms used to scale content.},
  address = {Providence, Rhode Island},
  author = {Benzeroual, K. and Allison, R. S. and Wilcox, L.M.},
  booktitle = {IEEE Computer Society Conference on Computer Vision and Pattern Recognition, {3DC}ine Workshop ({CVPRW})},
  date-added = {2012-04-30 18:54:41 -0400},
  date-modified = {2014-09-26 02:24:30 +0000},
  doi = {10.1109/CVPRW.2012.6238907},
  keywords = {Stereopsis},
  month = {06},
  pages = {45-52},
  title = {3D Display size matters: Compensating for the perceptual effects of {S3D} display scaling},
  url-1 = {https://doi.org/10.1109/CVPRW.2012.6238907},
  year = {2012}}

Over the last decade, advances in technology have made stereoscopic 3D (S3D) displays widely available with an ever-expanding variety of technologies, dimensions, resolution, optimal viewing angle and image quality. Of these, one of the most variable and unpredictable factors influencing the observer's S3D experience is the display size, which ranges from S3D mobile devices to large-format 3D movie theatres. This variety poses a challenge to 3D content makers who wish to preserve the three dimensional artistic context and avoid distortions and artefacts related to scaling. This paper will review the primary human factors issues related to S3D image scaling and the techniques and algorithms used to scale content.
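
The core geometric issue behind this review can be sketched in a few lines of Python: screen parallax scales linearly with display size, but the depth a viewer reconstructs from that parallax does not. The interocular distance, screen widths and viewing distances below are illustrative assumptions, not values from the paper.

IPD = 0.065    # viewer interocular distance in metres (assumed)

def perceived_distance(parallax, view_dist):
    """Distance of the fused point from the viewer given screen parallax
    (positive = uncrossed, behind the screen), from similar triangles."""
    return view_dist * IPD / (IPD - parallax)

# Content graded for a 10 m cinema screen, linearly rescaled to a 0.5 m TV.
scale = 0.5 / 10.0
for p_cinema in (0.01, 0.03, 0.06):          # screen parallaxes in metres
    d_cinema = perceived_distance(p_cinema, view_dist=15.0)
    d_tv = perceived_distance(p_cinema * scale, view_dist=2.0)
    print(f"parallax {p_cinema:.2f} m: cinema {d_cinema:.1f} m, TV {d_tv:.2f} m")

Running this shows that depth that spans many metres in the cinema collapses to centimetres on the small screen, which is the kind of scaling distortion the paper's compensation techniques address.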

Motion in depth constancy in stereoscopic displays.
Laldin, S., Wilcox, L., Hylton, C., & Allison, R.
In Electronic Imaging: Stereoscopic Displays and Applications, volume 8288, pages 82880N1-82880N11, 01 2012. SPIE-Int Soc Optical Engineering.

@inproceedings{Laldin:2012vn,
  abstract = {In stereoscopic vision, there is non-linear mapping between real space and disparity. In a stereoscopic 3D scene, this non-linear mapping could produce distortions of space when camera geometry differs from natural stereoscopic geometry. When the viewing distance and zero screen parallax setting are held constant and interaxial separation (IA) is varied, there is an asymmetric distortion in the mapping of stereoscopic to real space. If an object traverses this space at constant velocity, one might anticipate distortion of the perceived trajectory. This prediction is based on the premise that when the object traverses compressed space, it should appear to move slower than when it passes through expanded space. In addition, this effect should depend on the saliency of the depth information in the scene. To determine if the predicted distortions are in fact perceived, we assessed observers' percepts of acceleration and deceleration using an animation of a ball moving in depth through a simulated environment, viewed stereoscopically.

The method of limits was used to measure transition points between perceived acceleration and deceleration as a function of IA and context (textured vs. non-textured background). Eleven observers with normal binocular vision were tested using four IAs (35, 57.4, 65.7, and 68.21mm). The range of acceleration / deceleration rates presented was selected to bracket the predicted values based on the IA and the viewing geometry. Two environments were used to provide different levels of monocular depth cues, specifically an untextured and a tiled ground plane. For each environment and IA combination, four measures were made of the transition points between perceived acceleration and deceleration. For two of these measures, the series of clips began with an obviously accelerating object and progressed to an obviously decelerating object. The participants' task was to identify the point at which the percept changed from accelerating to decelerating. In the other two measures, the converse procedure was used to identify the deceleration to acceleration transition.

Based on binocular geometry, we predicted that the transition points would shift toward deceleration for small IA and towards acceleration for large IA. This effect should be modulated by monocular depth cues. However, the average transition values were not influenced by IA or the simulated environment. These data suggest that observers are able to discount distortions of stereoscopic space in interpreting the trajectory of objects moving through simple environments. It remains to be seen if velocity constancy will be similarly maintained in more complex scenes or scenes containing multiple moving objects. These results have important implications for the rendering or capture of effective stereoscopic 3D content.},
  author = {Laldin, S. and Wilcox, L. and Hylton, C. and Allison, R.S.},
  booktitle = {Electronic Imaging: Stereoscopic Displays and Applications},
  date-added = {2011-08-10 13:35:10 -0400},
  date-modified = {2014-09-26 02:09:04 +0000},
  doi = {10.1117/12.910577},
  keywords = {Stereopsis},
  month = {01},
  pages = {82880N1-82880N11},
  publisher = {SPIE-Int Soc Optical Engineering},
  title = {Motion in depth constancy in stereoscopic displays},
  url = {http://percept.eecs.yorku.ca/papers/82880N_1.pdf},
  url-1 = {https://doi.org/10.1117/12.910577},
  volume = {8288},
  year = {2012}}

In stereoscopic vision, there is non-linear mapping between real space and disparity. In a stereoscopic 3D scene, this non-linear mapping could produce distortions of space when camera geometry differs from natural stereoscopic geometry. When the viewing distance and zero screen parallax setting are held constant and interaxial separation (IA) is varied, there is an asymmetric distortion in the mapping of stereoscopic to real space. If an object traverses this space at constant velocity, one might anticipate distortion of the perceived trajectory. This prediction is based on the premise that when the object traverses compressed space, it should appear to move slower than when it passes through expanded space. In addition, this effect should depend on the saliency of the depth information in the scene. To determine if the predicted distortions are in fact perceived, we assessed observers' percepts of acceleration and deceleration using an animation of a ball moving in depth through a simulated environment, viewed stereoscopically.

The method of limits was used to measure transition points between perceived acceleration and deceleration as a function of IA and context (textured vs. non-textured background). Eleven observers with normal binocular vision were tested using four IAs (35, 57.4, 65.7, and 68.21mm). The range of acceleration / deceleration rates presented was selected to bracket the predicted values based on the IA and the viewing geometry. Two environments were used to provide different levels of monocular depth cues, specifically an untextured and a tiled ground plane. For each environment and IA combination, four measures were made of the transition points between perceived acceleration and deceleration. For two of these measures, the series of clips began with an obviously accelerating object and progressed to an obviously decelerating object. The participants' task was to identify the point at which the percept changed from accelerating to decelerating. In the other two measures, the converse procedure was used to identify the deceleration to acceleration transition.

Based on binocular geometry, we predicted that the transition points would shift toward deceleration for small IA and towards acceleration for large IA. This effect should be modulated by monocular depth cues. However, the average transition values were not influenced by IA or the simulated environment. These data suggest that observers are able to discount distortions of stereoscopic space in interpreting the trajectory of objects moving through simple environments. It remains to be seen if velocity constancy will be similarly maintained in more complex scenes or scenes containing multiple moving objects. These results have important implications for the rendering or capture of effective stereoscopic 3D content.
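
The geometric prediction being tested can be illustrated with a short Python script: a point moving at constant speed in scene depth produces non-constant steps in the depth a viewer reconstructs when the interaxial separation differs from the viewer's interocular distance. All camera and display parameters here are assumed values, and the shifted-parallel-camera parallax model is a common approximation rather than the paper's exact geometry.

import numpy as np

E = 0.065    # viewer interocular distance, m
V = 0.8      # viewing distance to the screen, m (assumed)
Z0 = 2.0     # camera convergence (zero-parallax) distance, m (assumed)
M = 1.0      # scene-to-screen magnification (assumed)

def perceived_depth(z, ia):
    """Viewer's reconstructed distance for a point at scene depth z,
    captured with interaxial separation ia."""
    p = M * ia * (1.0 - Z0 / z)     # screen parallax of the point
    return V * E / (E - p)          # similar-triangles reconstruction

z = np.linspace(1.2, 3.0, 10)       # constant-velocity samples in depth
for ia in (0.035, 0.065):
    steps = np.diff(perceived_depth(z, ia))   # perceived step per time step
    print(f"IA = {ia * 1000:.0f} mm: perceived steps {np.round(steps, 3)}")

With IA below the interocular distance the perceived steps shrink or grow as the object moves, which is the apparent acceleration/deceleration the experiment asked observers to report.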

Crosstalk reduces the amount of depth seen in 3D images of natural scenes.
Tsirlin, I., Allison, R., & Wilcox, L.
In Electronic Imaging: Stereoscopic Displays and Applications, volume 8288, pages 82880W, 1-9, 2012. SPIE-Int Soc Optical Engineering.

@inproceedings{Tsirlin:2012kx,
  abstract = {Crosstalk remains an important determinant of S3D image quality. Defined as the leakage of one eye's image into the image of the other eye, crosstalk affects all commercially available stereoscopic viewing systems. It is well established that crosstalk decreases perceived image quality and causes image distortion (Seuntiens et al. 2005, Wilcox \& Stewart, 2002). Moreover, visual comfort decreases and perceived workload increases with increasing crosstalk (Kooi and Toet, 2004; Lambooij, 2010; Pala et al. 2007). In a series of experiments we have shown that crosstalk also affects perceived depth magnitude (Tsirlin et al. 2011a; Tsirlin et al. 2011b). In our previous experiments we used two white bars on a black background, and measured the perceived depth between the bars as a function of disparity and degree of crosstalk. The data showed that as crosstalk increased perceived depth decreased. This effect was intensified for larger disparities. We found the effect was present regardless of whether the ghost image was spatially separated from, or overlapped with, the original image. The experiments described here extend our previous work to complex images of natural scenes. We controlled crosstalk levels by simulating them in images presented on a zero-crosstalk mirror stereoscope display. The stimulus was a color image of our laboratory that showed a cluttered scene composed of furniture and objects. The observers were asked to estimate the amount of stereoscopic depth between pairs of objects in the scene. We used two different estimation methods: a virtual measurement scale and a disparity probe. Data show that, as was the case with simple line stimuli, depth in this natural scene was dramatically affected by crosstalk. As crosstalk increased perceived depth decreased; an effect that grew with increasing disparity. Interestingly, observers overestimated the depth in displays that contained no crosstalk. We propose that this overestimation is the result of the presence of pictorial cues to depth (perspective, texture gradients etc.) and familiarity with the real size of the objects depicted in the image. This hypothesis was confirmed by a control experiment where observers estimated depth in the same natural scene presented in 2D instead of S3D. Although there was no stereoscopic depth in this case, observers still perceived some depth between object pairs. Some observers spontaneously reported nausea and headaches after performing the task in S3D, which confirms previous findings that crosstalk causes discomfort in viewers (Kooi and Toet, 2004). Taken together these results show that our previous findings generalize to natural scenes showing that crosstalk affects perceived depth magnitude even in the presence of pictorial depth cues. Our data underscore the fact that crosstalk is a serious challenge to the quality of S3D media and has to be carefully addressed by display manufacturers.},
  author = {Tsirlin, I. and Allison, R.S. and Wilcox, L.M.},
  booktitle = {Electronic Imaging: Stereoscopic Displays and Applications},
  date-added = {2011-08-10 13:32:19 -0400},
  date-modified = {2016-01-03 03:23:15 +0000},
  doi = {10.1117/12.906751},
  keywords = {Stereopsis},
  pages = {82880W, 1-9},
  publisher = {SPIE-Int Soc Optical Engineering},
  title = {Crosstalk reduces the amount of depth seen in 3D images of natural scenes},
  url = {http://percept.eecs.yorku.ca/papers/82880W_1.pdf},
  url-1 = {https://doi.org/10.1117/12.906751},
  volume = {8288},
  year = {2012}}
\n
\n\n\n
\n Crosstalk remains an important determinant of S3D image quality. Defined as the leakage of one eye's image into the image of the other eye, crosstalk affects all commercially available stereoscopic viewing systems. It is well established that crosstalk decreases perceived image quality and causes image distortion (Seuntiens et al. 2005, Wilcox & Stewart, 2002). Moreover, visual comfort decreases and perceived workload increases with increasing crosstalk (Kooi and Toet, 2004; Lambooij, 2010; Pala et al. 2007). In a series of experiments we have shown that crosstalk also affects perceived depth magnitude (Tsirlin et al. 2011a; Tsirlin et al. 2011b). In our previous experiments we used two white bars on a black background, and measured the perceived depth between the bars as a function of disparity and degree of crosstalk. The data showed that as crosstalk increased, perceived depth decreased. This effect was intensified for larger disparities. We found the effect was present regardless of whether the ghost image was spatially separated from, or overlapped with, the original image. The experiments described here extend our previous work to complex images of natural scenes. We controlled crosstalk levels by simulating them in images presented on a zero-crosstalk mirror stereoscope display. The stimulus was a color image of our laboratory that showed a cluttered scene composed of furniture and objects. The observers were asked to estimate the amount of stereoscopic depth between pairs of objects in the scene. We used two different estimation methods: a virtual measurement scale and a disparity probe. Data show that, as was the case with simple line stimuli, depth in this natural scene was dramatically affected by crosstalk. As crosstalk increased, perceived depth decreased, an effect that grew with increasing disparity. Interestingly, observers overestimated the depth in displays that contained no crosstalk. We propose that this overestimation is the result of the presence of pictorial cues to depth (perspective, texture gradients, etc.) and familiarity with the real size of the objects depicted in the image. This hypothesis was confirmed by a control experiment where observers estimated depth in the same natural scene presented in 2D instead of S3D. Although there was no stereoscopic depth in this case, observers still perceived some depth between object pairs. Some observers spontaneously reported nausea and headaches after performing the task in S3D, which confirms previous findings that crosstalk causes discomfort in viewers (Kooi and Toet, 2004). Taken together, these results show that our previous findings generalize to natural scenes, demonstrating that crosstalk affects perceived depth magnitude even in the presence of pictorial depth cues. Our data underscore the fact that crosstalk is a serious challenge to the quality of S3D media and must be carefully addressed by display manufacturers. \n
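The simulation approach used here is easy to reproduce. Below is a minimal Python sketch of crosstalk simulation by symmetric linear leakage; the blend model, the function name and the alpha value are illustrative assumptions rather than the paper's exact procedure (real displays often show additive, luminance-dependent leakage).

    import numpy as np

    def simulate_crosstalk(left, right, alpha=0.04):
        """Blend a fraction alpha of each eye's image into the other eye.

        left, right: float arrays in [0, 1] holding the intended eye images.
        alpha: leakage fraction (0.04 = 4% crosstalk; illustrative value).
        Returns the ghosted images each eye would actually receive.
        """
        seen_left = (1.0 - alpha) * left + alpha * right
        seen_right = (1.0 - alpha) * right + alpha * left
        return np.clip(seen_left, 0.0, 1.0), np.clip(seen_right, 0.0, 1.0)

Presenting such pre-ghosted image pairs on a display with negligible intrinsic crosstalk (for example, a mirror stereoscope) gives precise control over the leakage level.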
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n techreport\n \n \n (2)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n Visual Perception of Smooth and Perturbed Self-Motion in Microgravity, Final Report.\n \n \n \n\n\n \n Allison, R.\n\n\n \n\n\n\n Technical Report Contract no. 9F007-091472, Canadian Space Agency Space Exploration Projects, 2012.\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@techreport{Allison:2012CSA,\n\tauthor = {Allison, R.S.},\n\tdate-added = {2013-04-05 16:53:19 +0000},\n\tdate-modified = {2013-10-06 21:10:52 +0000},\n\tinstitution = {Canadian Space Agency Space Exploration Projects},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tnumber = {Contract no. 9F007-091472},\n\ttitle = {Visual Perception of Smooth and Perturbed Self-Motion in Microgravity, Final Report},\n\tyear = {2012}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n SeeJitter Experiment Requirements Document (ERD).\n \n \n \n\n\n \n Ruel, S., & Allison, R.\n\n\n \n\n\n\n Technical Report CSA-SEEJITTER-RD-0001, Canadian Space Agency Space Exploration Projects, 2012.\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@techreport{Stefanie:2012uq,\n\tauthor = {Ruel, S. and Allison, R.S.},\n\tdate-added = {2013-01-01 16:43:51 +0000},\n\tdate-modified = {2013-01-01 16:46:24 +0000},\n\tinstitution = {Canadian Space Agency Space Exploration Projects},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tnumber = {CSA-SEEJITTER-RD-0001},\n\ttitle = {SeeJitter Experiment Requirements Document (ERD)},\n\tyear = {2012}}\n\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n\n\n\n
\n
\n\n
\n
\n  \n 2011\n \n \n (5)\n \n \n
\n
\n \n \n
\n
\n  \n article\n \n \n (4)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Disparity biasing in depth from monocular occlusions.\n \n \n \n \n\n\n \n Tsirlin, I., Wilcox, L. M., & Allison, R. S.\n\n\n \n\n\n\n Vision Research, 51(14): 1699–1711. 07 2011.\n \n\n\n\n
\n\n\n\n \n \n \"DisparityPaper\n  \n \n \n \"Disparity-1\n  \n \n \n \"Disparity-2\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@article{tsirlin_disparity_2011,\n\tabstract = {Monocular occlusions have been shown to play an important role in stereopsis. Among other contributions to binocular depth perception, monocular occlusions can create percepts of illusory occluding surfaces. It has been argued that the precise location in depth of these illusory occluders is based on the constraints imposed by occlusion geometry. Tsirlin et al. (2010) proposed that when these constraints are weak, the depth of the illusory occluder can be biased by a neighboring disparity-defined feature. In the present work we test this hypothesis using a variety of stimuli. We show that when monocular occlusions provide only partial constraints on the magnitude of depth of the illusory occluders, the perceived depth of the occluders can be biased by disparity-defined features in the direction unrestricted by the occlusion geometry. Using this disparity bias phenomenon we also show that in illusory occluder stimuli where disparity information is present, but weak, most observers rely on disparity while some use occlusion information instead to specify the depth of the illusory occluder. Taken together our experiments demonstrate that in binocular depth perception disparity and monocular occlusion cues interact in complex ways to resolve perceptual ambiguity.},\n\tauthor = {Tsirlin, Inna and Wilcox, Laurie M. and Allison, Robert S.},\n\tdate-added = {2011-08-01 19:29:50 -0400},\n\tdate-modified = {2014-09-26 01:54:13 +0000},\n\tdoi = {10.1016/j.visres.2011.05.012},\n\tissn = {0042-6989},\n\tjournal = {Vision Research},\n\tkeywords = {Stereopsis},\n\tmonth = 07,\n\tnumber = {14},\n\tpages = {1699--1711},\n\ttitle = {Disparity biasing in depth from monocular occlusions},\n\turl = {http://percept.eecs.yorku.ca/papers/disparity bias.pdf},\n\tvolume = {51},\n\tyear = {2011},\n\turl-1 = {http://percept.eecs.yorku.ca/papers/disparity%20bias.pdf},\n\turl-2 = {https://doi.org/10.1016/j.visres.2011.05.012}}\n\n
\n
\n\n\n
\n Monocular occlusions have been shown to play an important role in stereopsis. Among other contributions to binocular depth perception, monocular occlusions can create percepts of illusory occluding surfaces. It has been argued that the precise location in depth of these illusory occluders is based on the constraints imposed by occlusion geometry. Tsirlin et al. (2010) proposed that when these constraints are weak, the depth of the illusory occluder can be biased by a neighboring disparity-defined feature. In the present work we test this hypothesis using a variety of stimuli. We show that when monocular occlusions provide only partial constraints on the magnitude of depth of the illusory occluders, the perceived depth of the occluders can be biased by disparity-defined features in the direction unrestricted by the occlusion geometry. Using this disparity bias phenomenon we also show that in illusory occluder stimuli where disparity information is present, but weak, most observers rely on disparity while some use occlusion information instead to specify the depth of the illusory occluder. Taken together, our experiments demonstrate that in binocular depth perception, disparity and monocular occlusion cues interact in complex ways to resolve perceptual ambiguity.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Drawing with divergent perspective, ancient and modern.\n \n \n \n \n\n\n \n Howard, I., & Allison, R.\n\n\n \n\n\n\n Perception, 40(9): 1017-1033. 2011.\n \n\n\n\n
\n\n\n\n \n \n \"DrawingPaper\n  \n \n \n \"Drawing-1\n  \n \n \n \"Drawing-2\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@article{Howard:2011qy,\n\tabstract = {Before methods for drawing accurately in perspective were developed in the 15th century, many artists drew with divergent perspective. But we found that many university students draw with divergent perspective rather than with the correct convergent perspective. These experiments were designed to reveal why people tend to draw with divergent perspective. University students drew a cube and isolated edges and surfaces of a cube. Their drawings were very inaccurate. About half the students drew with divergent perspective like artists before the 15th century. Students selected a cube from a set of tapered boxes with great accuracy and were reasonably accurate in selecting the correct drawing of a cube from a set of tapered drawings. Each subject's drawing was much worse than the drawing selected as accurate. An analysis of errors in drawings of a cube and of isolated edges and surfaces of a cube revealed several factors that predispose people to draw in divergent perspective. The way these factors intrude depends on the order in which the edges of the cube are drawn. },\n\tauthor = {Howard, I.P. and Allison, R.S.},\n\tdate-added = {2011-05-22 22:06:30 -0400},\n\tdate-modified = {2014-09-26 01:56:17 +0000},\n\tdoi = {10.1068/p6876},\n\tjournal = {Perception},\n\tkeywords = {Depth perception},\n\tnumber = {9},\n\tpages = {1017-1033},\n\ttitle = {Drawing with divergent perspective, ancient and modern},\n\turl = {http://percept.eecs.yorku.ca/papers/drawing in perspective.pdf},\n\turl-1 = {http://dx.doi.org/10.1068/p6876},\n\tvolume = {40},\n\tyear = {2011},\n\turl-1 = {http://percept.eecs.yorku.ca/papers/drawing%20in%20perspective.pdf},\n\turl-2 = {https://doi.org/10.1068/p6876}}\n\n
\n
\n\n\n
\n Before methods for drawing accurately in perspective were developed in the 15th century, many artists drew with divergent perspective. But we found that many university students draw with divergent perspective rather than with the correct convergent perspective. These experiments were designed to reveal why people tend to draw with divergent perspective. University students drew a cube and isolated edges and surfaces of a cube. Their drawings were very inaccurate. About half the students drew with divergent perspective like artists before the 15th century. Students selected a cube from a set of tapered boxes with great accuracy and were reasonably accurate in selecting the correct drawing of a cube from a set of tapered drawings. Each subject's drawing was much worse than the drawing selected as accurate. An analysis of errors in drawings of a cube and of isolated edges and surfaces of a cube revealed several factors that predispose people to draw in divergent perspective. The way these factors intrude depends on the order in which the edges of the cube are drawn. \n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Visual Jitter Shakes Conflict Accounts of Self-motion Perception.\n \n \n \n \n\n\n \n Palmisano, S., Allison, R., Kim, J., & Bonato, F.\n\n\n \n\n\n\n Seeing and Perceiving, 24: 173-200. 2011.\n \n\n\n\n
\n\n\n\n \n \n \"VisualPaper\n  \n \n \n \"Visual-1\n  \n \n \n \"Visual-2\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@article{Palmisano:2011lr,\n\tabstract = {Sensory conflict has been used to explain the way we perceive and control our self-motion, as well as the aetiology of motion sickness. However, recent research on simulated viewpoint jitter provides a strong challenge to one core prediction of these theories --- that increasing sensory conflict should always impair visually induced illusions of self-motion (known as vection). These studies show that jittering self-motion displays (thought to generate significant and sustained visual-vestibular conflict) actually induce superior vection to comparable non-jittering displays (thought to generate only minimal/transient sensory conflict). Here we review viewpoint jitter effects on vection, postural sway, eye-movements and motion sickness, and relate them to recent behavioural and neurophysiological findings. It is shown that jitter research provides important insights into the role that sensory interaction plays in self-motion perception. },\n\tauthor = {Palmisano, S.A. and Allison, R.S. and Kim, J. and Bonato, F.},\n\tdate-added = {2011-05-22 21:11:25 -0400},\n\tdate-modified = {2014-09-26 01:58:53 +0000},\n\tdoi = {10.1163/187847511X570817},\n\tjournal = {Seeing and Perceiving},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tpages = {173-200},\n\ttitle = {Visual Jitter Shakes Conflict Accounts of Self-motion Perception},\n\turl = {http://percept.eecs.yorku.ca/papers/jitter review.pdf},\n\turl-1 = {http://dx.doi.org/10.1163/187847511X570817},\n\tvolume = {24},\n\tyear = {2011},\n\turl-1 = {http://percept.eecs.yorku.ca/papers/jitter%20review.pdf},\n\turl-2 = {https://doi.org/10.1163/187847511X570817}}\n\n
\n
\n\n\n
\n Sensory conflict has been used to explain the way we perceive and control our self-motion, as well as the aetiology of motion sickness. However, recent research on simulated viewpoint jitter provides a strong challenge to one core prediction of these theories — that increasing sensory conflict should always impair visually induced illusions of self-motion (known as vection). These studies show that jittering self-motion displays (thought to generate significant and sustained visual-vestibular conflict) actually induce superior vection to comparable non-jittering displays (thought to generate only minimal/transient sensory conflict). Here we review viewpoint jitter effects on vection, postural sway, eye-movements and motion sickness, and relate them to recent behavioural and neurophysiological findings. It is shown that jitter research provides important insights into the role that sensory interaction plays in self-motion perception. \n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n The effect of crosstalk on the perceived depth from disparity and monocular occlusions.\n \n \n \n \n\n\n \n Tsirlin, I., Wilcox, L., & Allison, R.\n\n\n \n\n\n\n IEEE Transactions on Broadcasting, 57(2): 445-453. 2011.\n \n\n\n\n
\n\n\n\n \n \n \"The-1\n  \n \n \n \"The-2\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@article{Tsirlin:2011mf,\n\tabstract = {Crosstalk in stereoscopic displays is defined as the leakage of one eye's image into the image of the other eye. All popular commercial stereoscopic systems suffer from crosstalk to some extent. Studies show that crosstalk causes distortions, reduces image quality and visual comfort, and increases perceived workload. Moreover, there is evidence that crosstalk effects depth perception from disparity. In the present paper we present two experiments. The first addresses the effect of crosstalk on the perceived magnitude of depth from disparity. The second examines the effect of crosstalk on the magnitude of depth perceived from monocular occlusions. Our data show that crosstalk has a detrimental effect on depth perceived from both cues, but it has a stronger effect on depth from monocular occlusions. Our findings taken together with previous results suggest that crosstalk, even in modest amounts, noticeably degrades the quality of stereoscopic images.},\n\tauthor = {Tsirlin, I. and Wilcox, L.M. and Allison, R.S.},\n\tdate-added = {2011-05-09 12:26:50 -0400},\n\tdate-modified = {2012-07-02 13:42:36 -0400},\n\tdoi = {10.1109/TBC.2011.2105630},\n\tissn = {0018-9316},\n\tjournal = {IEEE Transactions on Broadcasting},\n\tkeywords = {Stereopsis},\n\tnumber = {2},\n\tpages = {445-453},\n\ttitle = {The effect of crosstalk on the perceived depth from disparity and monocular occlusions},\n\turl-1 = {http://dx.doi.org/10.1109/TBC.2011.2105630},\n\turl-2 = {http://dx.doi.org/10.1109/TBC.2011.2105630},\n\tvolume = {57},\n\tyear = {2011},\n\turl-1 = {https://doi.org/10.1109/TBC.2011.2105630}}\n\n
\n
\n\n\n
\n Crosstalk in stereoscopic displays is defined as the leakage of one eye's image into the image of the other eye. All popular commercial stereoscopic systems suffer from crosstalk to some extent. Studies show that crosstalk causes distortions, reduces image quality and visual comfort, and increases perceived workload. Moreover, there is evidence that crosstalk affects depth perception from disparity. In the present paper we present two experiments. The first addresses the effect of crosstalk on the perceived magnitude of depth from disparity. The second examines the effect of crosstalk on the magnitude of depth perceived from monocular occlusions. Our data show that crosstalk has a detrimental effect on depth perceived from both cues, but it has a stronger effect on depth from monocular occlusions. Our findings, taken together with previous results, suggest that crosstalk, even in modest amounts, noticeably degrades the quality of stereoscopic images.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n inbook\n \n \n (1)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Stereoscopic Motion in Depth.\n \n \n \n \n\n\n \n Allison, R., & Howard, I.\n\n\n \n\n\n\n In Vision in 3D environments, pages 163-186. Harris, L., & Jenkin, M., editor(s). Cambridge University Press, Cambridge UK, 2011.\n \n\n\n\n
\n\n\n\n \n \n \"StereoscopicPaper\n  \n \n \n \"Stereoscopic-1\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@inbook{Allison:2010gc,\n\tabstract = {This chapter is a review of stereoscopic processes involved in the perception of motion in depth. We will first discuss mechanisms that could be used to process changing disparity signals to motion in depth. We will then review the evidence, some which has not been published previously, concerning which of these mechanisms is used by the visual system},\n\taddress = {Cambridge UK},\n\tauthor = {Allison, R.S. and Howard, I.P.},\n\tbooktitle = {Vision in 3{D} environments},\n\tdate-added = {2011-05-06 11:00:25 -0400},\n\tdate-modified = {2014-09-26 02:39:36 +0000},\n\teditor = {L. Harris and M. Jenkin},\n\tkeywords = {Motion in depth},\n\tpages = {163-186},\n\tpublisher = {Cambridge University Press},\n\ttitle = {Stereoscopic Motion in Depth},\n\turl = {http://percept.eecs.yorku.ca/papers/cvr motion in depth chapter submit.pdf},\n\turl-1 = {http://percept.eecs.yorku.ca/papers/cvr%20motion%20in%20depth%20chapter%20submit.pdf},\n\tyear = {2011},\n\turl-1 = {http://percept.eecs.yorku.ca/papers/cvr%20motion%20in%20depth%20chapter%20submit.pdf}}\n\n
\n
\n\n\n
\n This chapter is a review of stereoscopic processes involved in the perception of motion in depth. We will first discuss mechanisms that could be used to process changing disparity signals into motion in depth. We will then review the evidence, some of which has not been published previously, concerning which of these mechanisms is used by the visual system.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n incollection\n \n \n (13)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n Effects of Realistic Simulated Linear and Rotary Viewpoint Jitter on Vection.\n \n \n \n\n\n \n Govan, D., Palmisano, S. A., Allison, R. S., & Field, M.\n\n\n \n\n\n\n In 38th Australasian Experimental Psychology Conference. 2011.\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Govan:2011zr,\n\tauthor = {Govan, D. and Palmisano, S. A. and Allison, R. S. and Field, M.},\n\tbooktitle = {38th Australasian Experimental Psychology Conference},\n\tdate-added = {2012-08-13 20:07:23 +0000},\n\tdate-modified = {2012-08-13 20:07:23 +0000},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\ttitle = {Effects of Realistic Simulated Linear and Rotary Viewpoint Jitter on Vection},\n\tyear = {2011}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Perceptual Artifacts in Stereoscopic 3D Film.\n \n \n \n \n\n\n \n Allison, R. S.\n\n\n \n\n\n\n In Toronto International Stereoscopic 3D Conference. 06 2011.\n \n\n\n\n
\n\n\n\n \n \n \"Perceptual-1\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Allison:2011fk,\n\tabstract = {The renaissance of stereoscopic three-dimensional (S3D) film requires that film-makers revisit assumptions and conventions about factors that influence the visual appreciation and impact of their medium. Capture, post-production and exhibition of stereoscopic content is subject to a number of artefacts and imperfections that impact the viewer experience. This talk will discuss a variety of these issues and their implications for depth perception, visual comfort and sense of scale. },\n\tauthor = {Robert S. Allison},\n\tbooktitle = {Toronto International Stereoscopic 3D Conference},\n\tdate-added = {2012-08-13 19:35:44 +0000},\n\tdate-modified = {2012-08-13 19:39:28 +0000},\n\tkeywords = {Stereopsis},\n\tmonth = {06},\n\ttitle = {Perceptual Artifacts in Stereoscopic 3D Film},\n\turl-1 = {http://www.etcenter.org/2011/04/the-smpte-second-annual-international-conference-on-stereoscopic-3d-for-media-entertainment/},\n\tyear = {2011}}\n\n
\n
\n\n\n
\n The renaissance of stereoscopic three-dimensional (S3D) film requires that film-makers revisit assumptions and conventions about factors that influence the visual appreciation and impact of their medium. Capture, post-production and exhibition of stereoscopic content is subject to a number of artefacts and imperfections that impact the viewer experience. This talk will discuss a variety of these issues and their implications for depth perception, visual comfort and sense of scale. \n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Depth cue interactions in stereoscopic 3D media.\n \n \n \n\n\n \n Allison, R.\n\n\n \n\n\n\n In SMPTE International Conference on Stereoscopic 3D for Media and Entertainment. 06 2011.\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Allison:2011uq,\n\tabstract = {Stereoscopic display adds a compelling tool to the arsenal of techniques that artists can use to create the sense of three-dimensional space in film and other media. In stereo media, as in the real world, people combine the cues to depth to form a coherent perception of the 3D environment. In S3D media, depth cues do not typically correspond to what the viewer would experience in a given scene and are also not in agreement with each other. I will review what vision science tells us about how depth cues are integrated and what happens when they conflict. I will also discuss the role of cue interaction in choosing the configuration of rigs and displays, and how cue interactions create common distortions experienced in S3D media.},\n\tannote = {June 21-22 in New York},\n\tauthor = {Allison, R.S.},\n\tbooktitle = {{SMPTE} International Conference on Stereoscopic 3D for Media and Entertainment},\n\tdate-added = {2011-08-10 13:25:07 -0400},\n\tdate-modified = {2012-07-03 00:00:33 -0400},\n\tkeywords = {Stereopsis},\n\tmonth = {06},\n\ttitle = {Depth cue interactions in stereoscopic 3D media},\n\tyear = {2011}}\n\n
\n
\n\n\n
\n Stereoscopic display adds a compelling tool to the arsenal of techniques that artists can use to create the sense of three-dimensional space in film and other media. In stereo media, as in the real world, people combine the cues to depth to form a coherent perception of the 3D environment. In S3D media, depth cues do not typically correspond to what the viewer would experience in a given scene and are also not in agreement with each other. I will review what vision science tells us about how depth cues are integrated and what happens when they conflict. I will also discuss the role of cue interaction in choosing the configuration of rigs and displays, and how cue interactions create common distortions experienced in S3D media.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n The effect of interocular separation on perceived depth from disparity in complex scenes.\n \n \n \n \n\n\n \n Benzeroual, K., Laldin, S., Allison, R., & Wilcox, L.\n\n\n \n\n\n\n In The 34th European Conference on Visual Perception, Perception, volume 40, pages 104. Toulouse, France, August 28- Sept. 1 2011.\n \n\n\n\n
\n\n\n\n \n \n \"ThePaper\n  \n \n \n \"The-1\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Benzeroual:2011fk,\n\tabstract = {The geometry of stereopsis makes straightforward predictions regarding the effect of increasing an observer's simulated interocular distance (IO) on perceived depth. Our aim is to characterize the effect of IO on perceived depth, and its dependence on scene complexity and screen size. In Experiment 1 we used S3D movies of an indoor scene, shot with three camera separations (0.25'', 1'' and 1.7''). We displayed this footage on two screens (54'' and 22'') maintaining a constant visual angle. A reference scene with an IO of 1'' was displayed for 5s followed by the test scene. Participants (n=10) were asked to estimate the distances between four pairs of objects in the scene relative to the reference. Contrary to expectations, there was no consistent effect of IO, and all participants perceived more depth on the smaller screen. In Experiment 2 we used static line stimuli, with no real-world context. The same set of conditions was evaluated; all observers now perceived more depth in the larger display and there was a clear dependence on IO. The presence of multiple realistic depth cues has significant and complex effects on perceived depth from binocular disparity; effects that are not obvious from binocular geometry.},\n\taddress = {Toulouse, France},\n\tauthor = {Benzeroual, K. and Laldin, S. and Allison, R.S. and Wilcox, L.M.},\n\tbooktitle = {The 34th European Conference on Visual Perception, Perception},\n\tdate-added = {2011-08-10 11:44:04 -0400},\n\tdate-modified = {2014-09-09 19:16:31 +0000},\n\tkeywords = {Stereopsis},\n\tmonth = {August 28- Sept. 1},\n\tnumber = {ECVP Abstract Supplement},\n\tpages = {104},\n\ttitle = {The effect of interocular separation on perceived depth from disparity in complex scenes},\n\turl = {http://www.perceptionweb.com/abstract.cgi?id=v110454},\n\turl-1 = {http://www.perceptionweb.com/abstract.cgi?id=v110454},\n\tvolume = {40},\n\tyear = {2011},\n\turl-1 = {http://www.perceptionweb.com/abstract.cgi?id=v110454}}\n\n
\n
\n\n\n
\n The geometry of stereopsis makes straightforward predictions regarding the effect of increasing an observer's simulated interocular distance (IO) on perceived depth. Our aim is to characterize the effect of IO on perceived depth, and its dependence on scene complexity and screen size. In Experiment 1 we used S3D movies of an indoor scene, shot with three camera separations (0.25'', 1'' and 1.7''). We displayed this footage on two screens (54'' and 22'') maintaining a constant visual angle. A reference scene with an IO of 1'' was displayed for 5s followed by the test scene. Participants (n=10) were asked to estimate the distances between four pairs of objects in the scene relative to the reference. Contrary to expectations, there was no consistent effect of IO, and all participants perceived more depth on the smaller screen. In Experiment 2 we used static line stimuli, with no real-world context. The same set of conditions was evaluated; all observers now perceived more depth in the larger display and there was a clear dependence on IO. The presence of multiple realistic depth cues has significant and complex effects on perceived depth from binocular disparity; effects that are not obvious from binocular geometry.\n
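For reference, the geometric prediction at issue can be computed directly. The Python sketch below assumes a parallel-camera rig converged by horizontal image translation and a viewer seated on the screen's axis; the function name, parameters and example values are my own illustrations, not the authors' stimulus pipeline.

    def predicted_depth(io_cam, f, conv, z, scale, view_dist, eye_sep):
        """Trigonometric depth prediction for a parallel stereo rig.

        io_cam: camera interaxial (m); f: focal length (m);
        conv: convergence distance set by image translation (m);
        z: object distance in the scene (m);
        scale: sensor-to-screen magnification;
        view_dist: viewing distance (m); eye_sep: viewer interocular (m).
        Returns predicted depth relative to the screen plane (m).
        """
        parallax = scale * io_cam * f * (1.0 / conv - 1.0 / z)
        # Positive (uncrossed) parallax places the point behind the screen.
        return eye_sep * view_dist / (eye_sep - parallax) - view_dist

    # Geometry predicts monotonically more depth as interaxial grows,
    # which is the consistent effect Experiment 1 failed to find:
    for io_inches in (0.25, 1.0, 1.7):
        print(io_inches, predicted_depth(io_inches * 0.0254, 0.01, 2.0,
                                         3.0, 40.0, 0.8, 0.065))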
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Early Fire Detection: The FireHawk System.\n \n \n \n\n\n \n Milner, A., & Allison, R. S.\n\n\n \n\n\n\n In CVR 2011: Plastic Vision, pages G10. June 15-18 2011.\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Milner:2011vn,\n\tauthor = {Andrew Milner and Robert S. Allison},\n\tbooktitle = {CVR 2011: Plastic Vision},\n\tdate-added = {2011-06-15 15:12:15 -0400},\n\tdate-modified = {2012-07-02 22:45:51 -0400},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tmonth = {June 15-18},\n\torganization = {Centre for Vision Research},\n\tpages = {G10},\n\ttitle = {Early Fire Detection: The FireHawk System},\n\tyear = {2011}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Perceptual effects of Geometric Parameters while viewing complex S3D Scenes.\n \n \n \n\n\n \n Laldin, S., Benzeroual, K., Allison, R., & Wilcox, L.\n\n\n \n\n\n\n In CVR 2011: Plastic Vision, pages C6. 06 2011.\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Laldin:2011uq,\n\tauthor = {Laldin, S.R. and Benzeroual, K. and Allison, R.S. and Wilcox, L.M.},\n\tbooktitle = {CVR 2011: Plastic Vision},\n\tdate-added = {2011-06-15 15:05:26 -0400},\n\tdate-modified = {2011-06-15 15:09:51 -0400},\n\tkeywords = {Depth perception},\n\tmonth = {06},\n\tpages = {C6},\n\ttitle = {Perceptual effects of Geometric Parameters while viewing complex {S3D} Scenes},\n\tyear = {2011}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n The role of body posture in the perception of self-motion.\n \n \n \n\n\n \n Guterman, P. S., Allison, R. S., Zacher, J. E., & Palmisano, S. A.\n\n\n \n\n\n\n In CVR 2011: Plastic Vision, pages G11. June 15-18 2011.\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Guterman:2011kx,\n\tauthor = {Pearl S. Guterman and Robert S. Allison and James E. Zacher and Stephen A. Palmisano},\n\tbooktitle = {CVR 2011: Plastic Vision},\n\tdate-added = {2011-06-05 21:40:57 -0400},\n\tdate-modified = {2011-06-15 15:12:34 -0400},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tmonth = {June 15-18},\n\torganization = {Centre for Vision Research},\n\tpages = {G11},\n\ttitle = {The role of body posture in the perception of self-motion},\n\tyear = {2011}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Infrared Based Near Triad Tracking System.\n \n \n \n\n\n \n Bogdan, N., Allison, R., & Suryakumar, R.\n\n\n \n\n\n\n In CVR 2011: Plastic Vision, pages G1. June 15-18 2011.\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Bogdan2011,\n\tauthor = {Bogdan, N. and Allison, R.S. and Suryakumar, R.},\n\tbooktitle = {CVR 2011: Plastic Vision},\n\tdate-added = {2011-05-11 11:27:28 -0400},\n\tdate-modified = {2011-06-15 15:11:37 -0400},\n\tkeywords = {Eye Movements & Tracking},\n\tmonth = {June 15-18},\n\torganization = {Centre for Vision Research},\n\tpages = {G1},\n\ttitle = {Infrared Based Near Triad Tracking System},\n\tyear = {2011}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Gaze-Contingent Real-Time Visual Field Simulations.\n \n \n \n\n\n \n Vinnikov, M., & Allison, R.\n\n\n \n\n\n\n In CVR 2011: Plastic Vision, pages E9. June 15-18 2011.\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Vinnikov:2011yi,\n\tauthor = {Vinnikov, M. and Allison, R.S.},\n\tbooktitle = {CVR 2011: Plastic Vision},\n\tdate-added = {2011-05-11 11:25:46 -0400},\n\tdate-modified = {2011-06-15 19:10:24 +0000},\n\tkeywords = {Eye Movements & Tracking},\n\tmonth = {June 15-18},\n\tpages = {E9},\n\ttitle = {Gaze-Contingent Real-Time Visual Field Simulations},\n\tyear = {2011}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Enhancements of Vection in Depth from Viewpoint Oscillation: Effects of Field of View, Amplitude, Focal Distance and Body Posture.\n \n \n \n\n\n \n Zacher, J., Guterman, P., Palmisano, S., & Allison, R.\n\n\n \n\n\n\n In Journal of Vestibular Research (8th Symposium on the Role of the Vestibular Organs in Space Exploration), volume 21, pages 82. Houston, Texas, 04 2011.\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Zacher2011,\n\tabstract = {Incorporating jitter or oscillation of the vantage point in visual displays produces more compelling illusions of selfmotion (vection), despite generating greater sensory conflicts [1]. We are working with the Canadian Space Agency to develop an experiment to study this phenomenon on the International Space Station. Pragmatic issues favour small, near displays rather than typical immersive displays. This paper studies impact of display characteristics on the jitter/oscillation enhancement on vection.\n\nMETHODS\n\nVisual displays simulated constant velocity forward motion at 1.33 m/s through a virtual world, or the same motion with simulated viewpoint oscillation, on a laptop monitor viewed through an aperture. Various experiments examined the effect of oscillation amplitude, direction, field of view (with a different monitor), focal distance and body posture on vection responses.\n\nRESULTS\n\nAdding simulated horizontal or vertical viewpoint oscillation to radial flow increased vection a similar amount. Vection strength was increased more for oscillation peak velocities of 0.28 m/s compared to 0.09 m/s. Increasing focal distance by the use of +2D ophthalmic lenses did not measurably impact reported strength of vection. While field of view had no effect, closer viewing distances reduced vection but had no significant effect on the oscillation enhancement.\n\nDISCUSSION\n\nMotion sickness and spatial disorientation continue to impact the availability and effectiveness of astronauts. The current results will guide the development of ISS studies to improve our understanding of how vestibular and visual signals are recalibrated in altered gravity.\n\nREFERENCES\n[1] Palmisano, S., Allison, R.S. and Pekin. (2008) Perception, 37, 22 -- 33.},\n\taddress = {Houston, Texas},\n\tauthor = {Zacher, J.E. and Guterman, P.S. and Palmisano, S.A. and Allison, R.S.},\n\tbooktitle = {Journal of Vestibular Research (8th Symposium on the Role of the Vestibular Organs in Space Exploration)},\n\tdate-added = {2011-05-11 11:20:56 -0400},\n\tdate-modified = {2011-10-28 21:44:49 -0400},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tmonth = {04},\n\tpages = {82},\n\ttitle = {Enhancements of Vection in Depth from Viewpoint Oscillation: Effects of Field of View, Amplitude, Focal Distance and Body Posture},\n\tvolume = {21},\n\tyear = {2011}}\n\n
\n
\n\n\n
\n Incorporating jitter or oscillation of the vantage point in visual displays produces more compelling illusions of self-motion (vection), despite generating greater sensory conflicts [1]. We are working with the Canadian Space Agency to develop an experiment to study this phenomenon on the International Space Station. Pragmatic issues favour small, near displays rather than typical immersive displays. This paper studies the impact of display characteristics on the jitter/oscillation enhancement of vection. METHODS Visual displays simulated constant velocity forward motion at 1.33 m/s through a virtual world, or the same motion with simulated viewpoint oscillation, on a laptop monitor viewed through an aperture. Various experiments examined the effect of oscillation amplitude, direction, field of view (with a different monitor), focal distance and body posture on vection responses. RESULTS Adding simulated horizontal or vertical viewpoint oscillation to radial flow increased vection by a similar amount. Vection strength was increased more for oscillation peak velocities of 0.28 m/s compared to 0.09 m/s. Increasing focal distance by the use of +2D ophthalmic lenses did not measurably impact reported strength of vection. While field of view had no effect, closer viewing distances reduced vection but had no significant effect on the oscillation enhancement. DISCUSSION Motion sickness and spatial disorientation continue to impact the availability and effectiveness of astronauts. The current results will guide the development of ISS studies to improve our understanding of how vestibular and visual signals are recalibrated in altered gravity. REFERENCES [1] Palmisano, S., Allison, R.S. and Pekin, F. (2008) Perception, 37, 22 – 33.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Gaze Contingent Real-Time Visual Simulations.\n \n \n \n\n\n \n Vinnikov, M., & Allison, R.\n\n\n \n\n\n\n In 1st IEEE Canada Women in Engineering National Conference. 04 2011.\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Vinnikov:2011ec,\n\tauthor = {Vinnikov, M. and Allison, R.S.},\n\tbooktitle = {1st IEEE Canada Women in Engineering National Conference},\n\tdate-added = {2011-05-11 11:18:00 -0400},\n\tdate-modified = {2011-05-18 15:57:25 -0400},\n\tkeywords = {Eye Movements & Tracking},\n\tmonth = {04},\n\ttitle = {Gaze Contingent Real-Time Visual Simulations},\n\tyear = {2011}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n The use of egocentric direction and optic flow in the visual guidance of walking.\n \n \n \n\n\n \n Rushton, S., Herlihey, T., & Allison, R.\n\n\n \n\n\n\n In AVA/BMVA Meeting on Biological and Computer Vision. 05 2011.\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Rushton:2011ht,\n\tabstract = {How do humans visually guide themselves towards a target?  The traditional account (eg Gibson J J, 1966, The Senses Considered as Perceptual Systems (Boston:Houghton-Mifflin, Boston); Warren W H, Hannon D J, 1988, Nature, 336, 162-163.) was based on the use of optic flow: the observer regulates his or her direction of walking so as to align the focus of expansion with the target object.  Work over the past 13 years points to use of a different strategy: the observer keeps the target perceptually straight-ahead (Rushton S K et al, 1998, Current Biology, 8, 1191-1194).  The information required to keep an object straight-ahead is the current direction of an object relative to the body, its ``egocentric direction''.  Although egocentric direction is a very simple source of information, modelling shows that it allows for quite sophisticated locomotor behaviour.   What of the use of optic flow?  Our recent work has shown that it plays an important role in maintaining calibration.  Egocentric direction is derived in part from eye orientation and head orientation signals.  These signals are prone to drift.  It appears the brain keeps perception of egocentric direction calibrated by comparing predictions of the optic flow that will result from any given walking movement against the optic flow that actually results.  Any discrepancy then drives a recalibration process.  Thus optic flow does contribute to the visual guidance of walking, but indirectly through the recalibration of egocentric direction.},\n\tauthor = {Rushton, S.K. and Herlihey, T.A. and Allison, R.S.},\n\tbooktitle = {AVA/BMVA Meeting on Biological and Computer Vision},\n\tdate-added = {2011-05-11 11:15:14 -0400},\n\tdate-modified = {2011-05-24 10:07:39 -0400},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tmonth = {05},\n\torganization = {School of Psychology, Cardiff University},\n\ttitle = {The use of egocentric direction and optic flow in the visual guidance of walking},\n\tyear = {2011}}\n\n
\n
\n\n\n
\n How do humans visually guide themselves towards a target? The traditional account (eg Gibson J J, 1966, The Senses Considered as Perceptual Systems (Boston: Houghton-Mifflin); Warren W H, Hannon D J, 1988, Nature, 336, 162-163) was based on the use of optic flow: the observer regulates his or her direction of walking so as to align the focus of expansion with the target object. Work over the past 13 years points to the use of a different strategy: the observer keeps the target perceptually straight-ahead (Rushton S K et al, 1998, Current Biology, 8, 1191-1194). The information required to keep an object straight-ahead is the current direction of an object relative to the body, its "egocentric direction". Although egocentric direction is a very simple source of information, modelling shows that it allows for quite sophisticated locomotor behaviour. What of the use of optic flow? Our recent work has shown that it plays an important role in maintaining calibration. Egocentric direction is derived in part from eye orientation and head orientation signals. These signals are prone to drift. It appears the brain keeps perception of egocentric direction calibrated by comparing predictions of the optic flow that will result from any given walking movement against the optic flow that actually results. Any discrepancy then drives a recalibration process. Thus optic flow does contribute to the visual guidance of walking, but indirectly through the recalibration of egocentric direction.\n
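As an illustration of how simple the egocentric-direction strategy is, here is a toy proportional-control walker in Python; the gain, stride length and update rule are my own assumptions for the sketch, not the published model.

    import math

    def step_egocentric(x, y, heading, tx, ty, k=0.3, speed=0.02):
        """One step of 'keep the target straight ahead': rotate so as to
        null the target's egocentric direction, then stride forward."""
        bearing = math.atan2(ty - y, tx - x)  # target direction, world frame
        ego = (bearing - heading + math.pi) % (2.0 * math.pi) - math.pi
        heading += k * ego  # proportional correction toward straight ahead
        return x + speed * math.cos(heading), y + speed * math.sin(heading), heading

Adding a constant bias to ego mimics a miscalibrated straight-ahead: the walker still reaches the target, but along a curved path, and the mismatch between predicted and experienced optic flow is exactly the error signal available to drive the recalibration described above.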
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Decoding da Vinci: quantitative depth from monocular occlusions.\n \n \n \n \n\n\n \n Tsirlin, I., Wilcox, L. M., & Allison, R.\n\n\n \n\n\n\n In Vision Sciences Society Annual Meeting, Journal of Vision, volume 11, pages 337. 2011.\n \n\n\n\n
\n\n\n\n \n \n \"Decoding-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Tsirlin:2011tb,\n\tabstract = {Nakayama and Shimojo (1990) demonstrated that quantitative depth percepts could be generated by monocular occlusions, a phenomenon they called da Vinci stereopsis. They used a configuration where a monocular bar was placed to one side of a binocular rectangle. When an occlusion interpretation was possible, the bar appeared behind the rectangle at a distance that increased as the lateral separation between the bar and the rectangle increased. Gillam, Cook and Blackburn (2003) argued that quantitative depth perception in da Vinci stereopsis was due to double-matching of the bar with the edge of the rectangle. They showed that when the monocular bar was replaced with a monocular dot only qualitative depth percepts remained. However, their stimulus differed from the original in ways that promoted double-matching and the range of separations of the monocular feature from the rectangle was different for the bar and the dot. To evaluate the contributions of monocular occlusions and double-matching to quantitative depth percepts in da Vinci arrangements, we have replicated and extended the Nakayama and Shimojo and Gillam et al. experiments. We reproduced the original stimuli precisely and used the same range of separations for the bar stimuli as for the dot stimuli. We also compared perceived depth from disparity in the bar and dot stimuli when they were presented binocularly. Three of six observers were able to see quantitative depth with the dot stimulus though less depth was perceived than when a monocular bar was used. Interestingly, we found a similar difference in perceived depth when the bar and the dot were presented binocularly. Taken together our results provide evidence that quantitative depth in da Vinci arrangements is based, at least in part, on monocular occlusions, and that this phenomenon depends on the properties of the monocular object and is subject to inter-observer differences.},\n\tauthor = {Tsirlin, I. and Wilcox, L. M. and Allison, R.S.},\n\tbooktitle = {Vision Sciences Society Annual Meeting, Journal of Vision},\n\tdate-added = {2011-05-11 11:10:54 -0400},\n\tdate-modified = {2012-07-02 19:00:04 -0400},\n\tdoi = {10.1167/11.11.337},\n\tkeywords = {Stereopsis},\n\tnumber = {11},\n\torganization = {Vision Sciences Society},\n\tpages = {337},\n\ttitle = {Decoding da Vinci: quantitative depth from monocular occlusions.},\n\turl-1 = {http://dx.doi.org/10.1167/11.11.337},\n\tvolume = {11},\n\tyear = {2011},\n\turl-1 = {https://doi.org/10.1167/11.11.337}}\n\n
\n
\n\n\n
\n Nakayama and Shimojo (1990) demonstrated that quantitative depth percepts could be generated by monocular occlusions, a phenomenon they called da Vinci stereopsis. They used a configuration where a monocular bar was placed to one side of a binocular rectangle. When an occlusion interpretation was possible, the bar appeared behind the rectangle at a distance that increased as the lateral separation between the bar and the rectangle increased. Gillam, Cook and Blackburn (2003) argued that quantitative depth perception in da Vinci stereopsis was due to double-matching of the bar with the edge of the rectangle. They showed that when the monocular bar was replaced with a monocular dot, only qualitative depth percepts remained. However, their stimulus differed from the original in ways that promoted double-matching, and the range of separations of the monocular feature from the rectangle was different for the bar and the dot. To evaluate the contributions of monocular occlusions and double-matching to quantitative depth percepts in da Vinci arrangements, we have replicated and extended the Nakayama and Shimojo and Gillam et al. experiments. We reproduced the original stimuli precisely and used the same range of separations for the bar stimuli as for the dot stimuli. We also compared perceived depth from disparity in the bar and dot stimuli when they were presented binocularly. Three of six observers were able to see quantitative depth with the dot stimulus, though less depth was perceived than when a monocular bar was used. Interestingly, we found a similar difference in perceived depth when the bar and the dot were presented binocularly. Taken together, our results provide evidence that quantitative depth in da Vinci arrangements is based, at least in part, on monocular occlusions, and that this phenomenon depends on the properties of the monocular object and is subject to inter-observer differences.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n inproceedings\n \n \n (5)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n On the Distinction between Perceived and Predicted Depth in S3D Films.\n \n \n \n \n\n\n \n Benzeroual, K., Wilcox, L., Kazimi, A., & Allison, R. S.\n\n\n \n\n\n\n In IC3D 2011, pages 59.1-59.8, Liege, Belgium, 12 2011. \n \n\n\n\n
\n\n\n\n \n \n \"OnPaper\n  \n \n \n \"On-1\n  \n \n \n \"On-2\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@inproceedings{Benzeroual:2011uq,\n\tabstract = {A primary concern when making stereoscopic 3D (S3D) movies is to promote an effective and comfortable S3D experience for the audience when displayed on the screen. The amount of depth produced on-screen can be controlled using a variety of parameters. Many of these are lighting related such as lighting architecture and technology. Others are optical or positional and thus have a geometrical effect including camera interaxial distance, camera convergence, lens properties, viewing distance and angle, screen/projector properties and viewer anatomy (interocular distance). The amount of estimated depth from disparity alone can be precisely predicted from simple trigonometry; however, perceived depth from disparity in complex scenes is difficult to evaluate and most likely different from the predicted depth based on geometry. This discrepancy is mediated by perceptual and cognitive factors, including resolution of the combination/conflict of pictorial, motion and binocular depth cues. This paper will review geometric predictions of depth from disparity and present the results of experiments which assess perceived S3D depth and the effect of the complexity of scene content.\n},\n\taddress = {Liege, Belgium},\n\tauthor = {Benzeroual, K. and Wilcox, L.M. and Kazimi, A. and Allison, R. S.},\n\tbooktitle = {IC3D 2011},\n\tdate-added = {2012-04-30 18:56:17 -0400},\n\tdate-modified = {2014-09-26 02:24:54 +0000},\n\tdoi = {10.1109/IC3D.2011.6584389},\n\tkeywords = {Stereopsis},\n\tmonth = {12},\n\tpages = {59.1-59.8},\n\ttitle = {On the Distinction between Perceived and Predicted Depth in {S3D} Films},\n\turl = {http://percept.eecs.yorku.ca/papers/06584389.pdf},\n\turl-1 = {http://dx.doi.org/10.1109/IC3D.2011.6584389},\n\tyear = {2011},\n\turl-1 = {http://percept.eecs.yorku.ca/papers/06584389.pdf},\n\turl-2 = {https://doi.org/10.1109/IC3D.2011.6584389}}\n\n
\n
\n\n\n
\n A primary concern when making stereoscopic 3D (S3D) movies is to promote an effective and comfortable S3D experience for the audience when displayed on the screen. The amount of depth produced on-screen can be controlled using a variety of parameters. Many of these are lighting-related, such as lighting architecture and technology. Others are optical or positional and thus have a geometrical effect, including camera interaxial distance, camera convergence, lens properties, viewing distance and angle, screen/projector properties and viewer anatomy (interocular distance). The amount of estimated depth from disparity alone can be precisely predicted from simple trigonometry; however, perceived depth from disparity in complex scenes is difficult to evaluate and most likely different from the predicted depth based on geometry. This discrepancy is mediated by perceptual and cognitive factors, including resolution of the combination/conflict of pictorial, motion and binocular depth cues. This paper will review geometric predictions of depth from disparity and present the results of experiments which assess perceived S3D depth and the effect of the complexity of scene content. \n
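In my notation (not the paper's), the simple trigonometry referred to is the standard parallax-to-depth relation: for on-screen parallax p, viewing distance V and viewer interocular e,

\[ z_{\mathrm{perceived}} = \frac{e\,V}{e - p}, \qquad \Delta z = z_{\mathrm{perceived}} - V = \frac{p\,V}{e - p}, \]

so geometrically predicted depth grows hyperbolically with parallax and diverges as p approaches e; the experiments reported here measure how far perceived depth in complex scenes departs from this curve.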
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Distortions of Space in Stereoscopic 3D Content.\n \n \n \n \n\n\n \n Benzeroual, K., Allison, R., & Wilcox, L.\n\n\n \n\n\n\n In SMPTE International Conference on Stereoscopic 3D for Media and Entertainment, SMPTE Conf. Proc., volume 2011( no. 6), pages 1-10, June 21-22 2011. SMPTE\n \n\n\n\n
\n\n\n\n \n \n \"Distortions-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@inproceedings{Benzeroual:2011yt,\n\tabstract = {In S3D film, many factors affect the relationship between the depth in the acquired scene and depth eventually produced by the stereoscopic display. Many are geometric including camera interaxial, camera convergence, lens properties, viewing distance and angle, screen\\projector properties and anatomy (interocular). Spatial distortions follow at least in part from geometry (including the cardboard cut-out effect, miniaturization\\gigantism, space-size distortion, and object-speed distortion), and can cause a poor S3D experience. However, it is naive to expect spatial distortion to be specified only by geometry --- visual experience is heavily influenced by perceptual and cognitive factors. This paper will review geometrical predictions and present the results of experiments which assess S3D distortions in the context of content, cognitive and perceptual influences, and individual differences. We will suggest ways to assess the influence of acquisition and display parameters and to mitigate unwanted perceptual phenomena.},\n\tannote = {New York, June 21-22, 2011},\n\tauthor = {Benzeroual, K. and Allison, R.S. and Wilcox, L.M.},\n\tbooktitle = {{SMPTE} International Conference on Stereoscopic 3D for Media and Entertainment, {SMPTE} Conf. Proc.},\n\tdate-added = {2011-05-11 11:05:31 -0400},\n\tdate-modified = {2016-01-03 03:26:01 +0000},\n\tdoi = {10.5594/M001420},\n\tkeywords = {Stereopsis},\n\tmonth = {June 21-22},\n\tpages = {1-10},\n\tpublisher = {{SMPTE}},\n\ttitle = {Distortions of Space in Stereoscopic 3D Content},\n\turl-1 = {http://dx.doi.org/10.5594/M001420},\n\tvolume = {2011( no. 6)},\n\tyear = {2011},\n\turl-1 = {https://doi.org/10.5594/M001420}}\n\n
\n
\n\n\n
\n In S3D film, many factors affect the relationship between the depth in the acquired scene and depth eventually produced by the stereoscopic display. Many are geometric, including camera interaxial, camera convergence, lens properties, viewing distance and angle, screen/projector properties and anatomy (interocular). Spatial distortions follow at least in part from geometry (including the cardboard cut-out effect, miniaturization/gigantism, space-size distortion, and object-speed distortion), and can cause a poor S3D experience. However, it is naive to expect spatial distortion to be specified only by geometry — visual experience is heavily influenced by perceptual and cognitive factors. This paper will review geometrical predictions and present the results of experiments which assess S3D distortions in the context of content, cognitive and perceptual influences, and individual differences. We will suggest ways to assess the influence of acquisition and display parameters and to mitigate unwanted perceptual phenomena.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Stereoscopy and the Human Visual System.\n \n \n \n \n\n\n \n Banks, M. S., Read, J. R., Allison, R. S., & Watt, S. J.\n\n\n \n\n\n\n In SMPTE International Conference on Stereoscopic 3D for Media and Entertainment, SMPTE Conf. Proc., volume 2011 (no. 6), pages 2-31, 2011. \n \n\n\n\n
\n\n\n\n \n \n \"Stereoscopy-1\n  \n \n \n \"Stereoscopy-2\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@inproceedings{Banks:2011fk,\n\tabstract = {Stereoscopic displays have become very important for many applications, including operation of remote devices, medical imaging, surgery, scientific visualization, computer-assisted design, and more. But the most significant and exciting development is the incorporation of stereo technology into entertainment: specifically, cinema, television, and video games. It is important in these applications for stereo 3D imagery to create a faithful impression of the 3D structure of the scene being portrayed. It is also important that the viewer is comfortable and does not leave the experience with eye fatigue or a headache. And that the presentation of the stereo images does not create temporal artifacts like flicker or motion judder. \nHere we review current research on stereo human vision and how it informs us about how best to create and present stereo 3D imagery. The paper is divided into four parts: 1) Getting the geometry right; 2) depth cue interactions in stereo 3D media; 3) focusing and fixating on stereo images; and 4) temporal presentation protocols: Flicker, motion artifacts, and depth distortion. \n},\n\tauthor = {Martin S. Banks and Jenny R. Read and Robert S. Allison and Simon J. Watt},\n\tbooktitle = {{SMPTE} International Conference on Stereoscopic 3D for Media and Entertainment, {SMPTE} Conf. Proc.},\n\tdate-added = {2011-05-11 11:03:41 -0400},\n\tdate-modified = {2015-07-21 12:31:30 +0000},\n\tdoi = {10.5594/M001418},\n\tkeywords = {Stereopsis},\n\tpages = {2-31},\n\ttitle = {Stereoscopy and the Human Visual System},\n\turl-1 = {http://dx.doi.org/10.5594/M001418},\n\turl-2 = {http://www.etcenter.org/2011/04/the-smpte-second-annual-international-conference-on-stereoscopic-3d-for-media-entertainment/},\n\tvolume = {2011 (no. 6)},\n\tyear = {2011},\n\turl-1 = {https://doi.org/10.5594/M001418}}\n\n
\n
\n\n\n
\n Stereoscopic displays have become very important for many applications, including operation of remote devices, medical imaging, surgery, scientific visualization, computer-assisted design, and more. But the most significant and exciting development is the incorporation of stereo technology into entertainment: specifically, cinema, television, and video games. It is important in these applications for stereo 3D imagery to create a faithful impression of the 3D structure of the scene being portrayed. It is also important that the viewer is comfortable and does not leave the experience with eye fatigue or a headache, and that the presentation of the stereo images does not create temporal artifacts like flicker or motion judder. Here we review current research on stereo human vision and how it informs us about how best to create and present stereo 3D imagery. The paper is divided into four parts: 1) getting the geometry right; 2) depth cue interactions in stereo 3D media; 3) focusing and fixating on stereo images; and 4) temporal presentation protocols: flicker, motion artifacts, and depth distortion. \n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Evaluation of Simulated Visual Impairment.\n \n \n \n \n\n\n \n Vinnikov, M., & Allison, R.\n\n\n \n\n\n\n In 2nd Workshop on Eye Gaze in Intelligent Human Machine Interaction, Palo Alto, California, 02 2011. Stanford University\n \n\n\n\n
\n\n\n\n \n \n \"Evaluation-1\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@inproceedings{Vinnikov:2011yj,\n\tabstract = {We have developed two novel evaluation techniques for gaze-contingent systems that simulate visual defects. These two techniques can be used to quantify simulated visual defects in visual distortion and visual blur. Experiments demonstrated that such techniques could be useful for quantification of visual field defects to set simulation parameters. They are also useful for quantitative evaluation of simulation fidelity based on measurement of the functional relation between the intended simulated defect and psychophysical results.\n\nAuthor keywords: Gaze-contingent displays, foveation, visual simulations, evaluation of visual simulations},\n\taddress = {Palo Alto, California},\n\tauthor = {Vinnikov, M. and Allison, R.S.},\n\tbooktitle = {2nd Workshop on Eye Gaze in Intelligent Human Machine Interaction},\n\tdate-added = {2011-05-09 12:33:17 -0400},\n\tdate-modified = {2011-05-18 15:56:28 -0400},\n\tkeywords = {Eye Movements & Tracking},\n\tmonth = {02},\n\torganization = {Stanford University},\n\ttitle = {Evaluation of Simulated Visual Impairment},\n\turl-1 = {http://www.ci.seikei.ac.jp/nakano/GAZEWS_IUI2011/proceedings/[9-6]MargaritaVinnikov_shortpaper.pdf},\n\tyear = {2011}}\n\n
\n
\n\n\n
\n We have developed two novel evaluation techniques for gaze-contingent systems that simulate visual defects. These two techniques can be used to quantify simulated visual defects in visual distortion and visual blur. Experiments demonstrated that such techniques could be useful for quantification of visual field defects to set simulation parameters. They are also useful for quantitative evaluation of simulation fidelity based on measurement of the functional relation between the intended simulated defect and psychophysical results. Author keywords: Gaze-contingent displays, foveation, visual simulations, evaluation of visual simulations\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n The effect of crosstalk on depth magnitude in thin structures.\n \n \n \n \n\n\n \n Tsirlin, I., Allison, R., & Wilcox, L.\n\n\n \n\n\n\n In Electronic Imaging: Stereoscopic Displays and Applications (an updated version appears in Journal of Electronic Imaging), volume 7863, pages 786313, 1-10, 2011. \n \n\n\n\n
\n\n\n\n \n \n \"The-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@inproceedings{Tsirlin:2011qh,\n\tabstract = {Stereoscopic displays must present separate images to the viewer's left and right eyes. Crosstalk is the unwanted contamination of one eye's image from the image of the other eye. It has been shown to cause distortions, reduce image quality and visual comfort and increase perceived workload when performing visual tasks. Crosstalk also affects one's ability to perceive stereoscopic depth although little consideration has been given to the perception of depth magnitude in the presence of crosstalk. In this paper we extend a previous study (Tsirlin, Allison \\& Wilcox, 2010, submitted) on the perception of depth magnitude in stereoscopic occluding and non-occluding surfaces to the special case of crosstalk in thin structures. Crosstalk in thin structures differs qualitatively from that in larger objects due to the separation of the ghost and real images and thus theoretically could have distinct perceptual consequences. To address this question we used a psychophysical paradigm, where observers estimated the perceived depth difference between two thin vertical bars using a measurement scale. Our data show that crosstalk degrades perceived depth. As crosstalk levels increased the magnitude of perceived depth decreased, especially for stimuli with larger relative disparities. In contrast to the effect of crosstalk on depth magnitude in larger objects, in thin structures, a significant detrimental effect was found at all disparities. Our findings, when considered with the other perceptual consequences of crosstalk, suggest that its presence in S3D media even in modest amounts will reduce observers' satisfaction.},\n\tauthor = {Tsirlin, I. and Allison, R.S. and Wilcox, L.M.},\n\tbooktitle = {Electronic Imaging: Stereoscopic Displays and Applications (an updated version appears in Journal of Electronic Imaging)},\n\tdate-added = {2011-05-09 12:31:15 -0400},\n\tdate-modified = {2016-01-03 03:23:02 +0000},\n\tdoi = {10.1117/12.872141},\n\tkeywords = {Stereopsis},\n\tpages = {786313, 1-10},\n\ttitle = {The effect of crosstalk on depth magnitude in thin structures},\n\turl-1 = {https://doi.org/10.1117/12.872141},\n\tvolume = {7863},\n\tyear = {2011}}\n\n
\n
\n\n\n
\n Stereoscopic displays must present separate images to the viewer's left and right eyes. Crosstalk is the unwanted contamination of one eye's image from the image of the other eye. It has been shown to cause distortions, reduce image quality and visual comfort and increase perceived workload when performing visual tasks. Crosstalk also affects one's ability to perceive stereoscopic depth although little consideration has been given to the perception of depth magnitude in the presence of crosstalk. In this paper we extend a previous study (Tsirlin, Allison & Wilcox, 2010, submitted) on the perception of depth magnitude in stereoscopic occluding and non-occluding surfaces to the special case of crosstalk in thin structures. Crosstalk in thin structures differs qualitatively from that in larger objects due to the separation of the ghost and real images and thus theoretically could have distinct perceptual consequences. To address this question we used a psychophysical paradigm, where observers estimated the perceived depth difference between two thin vertical bars using a measurement scale. Our data show that crosstalk degrades perceived depth. As crosstalk levels increased the magnitude of perceived depth decreased, especially for stimuli with larger relative disparities. In contrast to the effect of crosstalk on depth magnitude in larger objects, in thin structures, a significant detrimental effect was found at all disparities. Our findings, when considered with the other perceptual consequences of crosstalk, suggest that its presence in S3D media even in modest amounts will reduce observers' satisfaction.\n
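The standard first-order crosstalk model makes the thin-structure point concrete. The sketch below is our own illustration (the mixing fraction c is an assumed parameter, not the paper's stimulus code): each eye sees its own image plus a fraction of the other eye's image, and when a bar is narrower than its disparity the resulting ghost is fully separated from the real image rather than blending into it.

```python
import numpy as np

# Illustrative linear crosstalk model (an assumption for exposition):
# each eye's image is contaminated by a fraction c of the other eye's image.

def add_crosstalk(left, right, c=0.05):
    """Mix a fraction c (e.g. 5%) of each eye's image into the other eye."""
    ghosted_left = (1 - c) * left + c * right
    ghosted_right = (1 - c) * right + c * left
    return ghosted_left, ghosted_right

# Thin-bar example: a bright 2-px bar shifted 6 px between the eyes.
left = np.zeros(64); left[30:32] = 1.0
right = np.zeros(64); right[36:38] = 1.0
gl, gr = add_crosstalk(left, right)
print(np.nonzero(gl)[0])  # real bar at 30-31 plus a separated ghost at 36-37
```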
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n techreport\n \n \n (4)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n Test Equipment Data Package for the Falcon-20 C-JITTER Experiment.\n \n \n \n\n\n \n Allison, R., & Zacher, J.\n\n\n \n\n\n\n Technical Report CSA-XXX, Canadian Space Agency Space Station Program, 2011.\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@techreport{Allison:2011kx,\n\tauthor = {Allison, R.S. and Zacher, J.E.},\n\tdate-added = {2011-08-30 23:01:26 -0400},\n\tdate-modified = {2012-03-11 20:49:21 -0400},\n\tinstitution = {Canadian Space Agency Space Station Program},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tnumber = {CSA-XXX},\n\ttitle = {Test Equipment Data Package for the Falcon-20 C-JITTER Experiment},\n\tyear = {2011}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n The effects of brightness on viewer preferences in an emerging display technology.\n \n \n \n\n\n \n Wilcox, L., & Allison, R.\n\n\n \n\n\n\n Technical Report report on OCE Project: YO-CR-10059-08, 2011.\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@techreport{Wilcox:2011uq,\n\tauthor = {Wilcox, L.M. and Allison, R.S.},\n\tdate-added = {2011-08-30 22:59:13 -0400},\n\tdate-modified = {2011-08-30 22:59:53 -0400},\n\tinstitution = {report on OCE Project: YO-CR-10059-08},\n\tkeywords = {Misc.},\n\ttitle = {The effects of brightness on viewer preferences in an emerging display technology},\n\tyear = {2011}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Sensitivity to optical distortions in Laser-based projection systems.\n \n \n \n\n\n \n Wilcox, L., Irving, E., & Allison, R.\n\n\n \n\n\n\n Technical Report prepared for Christie Digital Systems, 2011.\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@techreport{Wilcox:2011fk,\n\tauthor = {Wilcox, L.M. and Irving, E.L. and Allison, R.S.},\n\tdate-added = {2011-08-30 22:58:09 -0400},\n\tdate-modified = {2011-08-30 22:58:35 -0400},\n\tinstitution = {prepared for Christie Digital Systems},\n\tkeywords = {Misc.},\n\ttitle = {Sensitivity to optical distortions in Laser-based projection systems},\n\tyear = {2011}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Report on the Sudbury NVD-Aided Aerial Forest Fire Detection Trials held during the Summer of 2010.\n \n \n \n\n\n \n Tomkins, L., Andriychuk, T., Zacher, J. E., Ballagh, M., McAlpine, R., Doig, T., Craig, G., Filliter, D., Milner, A., & Allison, R.\n\n\n \n\n\n\n Technical Report Technical Report CSE-2011-02, York University, Toronto, Canada, 2011.\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@techreport{Tomkins:2011lf,\n\taddress = {Toronto, Canada},\n\tauthor = {Tomkins, L. and Andriychuk, T. and Zacher, J. E. and Ballagh, M. and McAlpine, R. and Doig, T. and Craig, G. and Filliter, D. and Milner, A. and Allison, R.S.},\n\tdate-added = {2011-05-11 11:28:41 -0400},\n\tdate-modified = {2011-05-18 16:13:32 -0400},\n\tinstitution = {York University},\n\tkeywords = {Night Vision},\n\tnumber = {Technical Report CSE-2011-02},\n\ttitle = {Report on the Sudbury NVD-Aided Aerial Forest Fire Detection Trials held during the Summer of 2010},\n\tyear = {2011}}\n\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n\n\n\n
\n
\n\n
\n
\n  \n 2010\n \n \n (5)\n \n \n
\n
\n \n \n
\n
\n  \n article\n \n \n (8)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Pilot Gaze and Glideslope Control During Simulated Aircraft Landings.\n \n \n \n \n\n\n \n Kim, J., Palmisano, S., Ash, A., & Allison, R.\n\n\n \n\n\n\n ACM Transactions on Applied Perception, 7(3): 18 pages. 2010.\n \n\n\n\n
\n\n\n\n \n \n \"Pilot-1\n  \n \n \n \"Pilot-2\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@article{allison201018,\n\tabstract = {We examined the eye movements of pilots as they carried out simulated aircraft landings under day and night lighting conditions. Our five students and five certified pilots were instructed to quickly achieve and then maintain a constant 3-degree glideslope relative to the runway. However, both groups of pilots were found to make significant glideslope control errors, especially during simulated night approaches. We found that pilot gaze was directed most often toward the runway and to the ground region located immediately in front of the runway, compared to other visual scene features. In general, their gaze was skewed toward the near half of the runway and tended to follow the runway threshold as it moved on the screen. Contrary to expectations, pilot gaze was not consistently directed at the aircraft's simulated aimpoint (i.e., its predicted future touchdown point based on scene motion). However, pilots did tend to fly the aircraft so that this point was aligned with the runway threshold. We conclude that the supplementary out-of-cockpit visual cues available during day landing conditions facilitated glideslope control performance. The available evidence suggests that these supplementary visual cues are acquired through peripheral vision, without the need for active fixation.},\n\tauthor = {Kim, J. and Palmisano, S. and Ash, A. and Allison, R.S.},\n\tdate-added = {2011-05-06 11:38:11 -0400},\n\tdate-modified = {2012-07-02 17:28:59 -0400},\n\tdoi = {10.1145/1773965.1773969},\n\tjournal = {{ACM} Transactions on Applied Perception},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tnumber = {3},\n\tpages = {18 pages},\n\ttitle = {Pilot Gaze and Glideslope Control During Simulated Aircraft Landings},\n\turl-1 = {https://doi.org/10.1145/1773965.1773969},\n\tvolume = {7},\n\tyear = {2010}}\n\n
\n
\n\n\n
\n We examined the eye movements of pilots as they carried out simulated aircraft landings under day and night lighting conditions. Our five students and five certified pilots were instructed to quickly achieve and then maintain a constant 3-degree glideslope relative to the runway. However, both groups of pilots were found to make significant glideslope control errors, especially during simulated night approaches. We found that pilot gaze was directed most often toward the runway and to the ground region located immediately in front of the runway, compared to other visual scene features. In general, their gaze was skewed toward the near half of the runway and tended to follow the runway threshold as it moved on the screen. Contrary to expectations, pilot gaze was not consistently directed at the aircraft's simulated aimpoint (i.e., its predicted future touchdown point based on scene motion). However, pilots did tend to fly the aircraft so that this point was aligned with the runway threshold. We conclude that the supplementary out-of-cockpit visual cues available during day landing conditions facilitated glideslope control performance. The available evidence suggests that these supplementary visual cues are acquired through peripheral vision, without the need for active fixation.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Modelling Locomotor Control: the advantages of a Mobile Gaze.\n \n \n \n \n\n\n \n Wilkie, R., Wann, J., & Allison, R.\n\n\n \n\n\n\n ACM Transactions on Applied Perception, 8(2): Article 9, 1-18. 2010.\n \n\n\n\n
\n\n\n\n \n \n \"Modelling-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@article{Wilkie:2010lt,\n\tabstract = {In 1958, JJ Gibson put forward proposals on the visual control of locomotion. Research in the last 50 years has served to clarify the sources of visual and nonvisual information that contribute to successful steering, but has yet to determine how this information is optimally combined under conditions of uncertainty. Here, we test the conditions under which a locomotor robot with a mobile camera can steer effectively using simple visual and extra-retinal parameters to examine how such models cope with the noisy real-world visual and motor estimates that are available to humans. This applied modeling gives us an insight into both the advantages and limitations of using active gaze to sample information when steering.},\n\tauthor = {Wilkie, R.M. and Wann, J.P. and Allison, R.S.},\n\tdate-added = {2011-05-06 11:32:47 -0400},\n\tdate-modified = {2012-07-02 13:42:36 -0400},\n\tdoi = {10.1145/1870076.1870077},\n\tjournal = {{ACM} Transactions on Applied Perception},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tnumber = {2},\n\tpages = {Article 9, 1-18},\n\ttitle = {Modelling Locomotor Control: the advantages of a Mobile Gaze},\n\turl-1 = {https://doi.org/10.1145/1870076.1870077},\n\tvolume = {8},\n\tyear = {2010}}\n\n
\n
\n\n\n
\n In 1958, JJ Gibson put forward proposals on the visual control of locomotion. Research in the last 50 years has served to clarify the sources of visual and nonvisual information that contribute to successful steering, but has yet to determine how this information is optimally combined under conditions of uncertainty. Here, we test the conditions under which a locomotor robot with a mobile camera can steer effectively using simple visual and extra-retinal parameters to examine how such models cope with the noisy real-world visual and motor estimates that are available to humans. This applied modeling gives us an insight into both the advantages and limitations of using active gaze to sample information when steering.\n
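A minimal sketch of the kind of gaze-based steering controller the abstract describes, assuming a simple point-attractor control law driven by a noisy estimate of the fixated target's direction; the gains, noise level and dynamics here are our assumptions for illustration, not the authors' model.

```python
import math, random

# Sketch (illustrative assumptions throughout): steer so that the fixated
# target's egocentric direction is nulled, with second-order yaw dynamics
# and Gaussian noise standing in for real-world gaze-estimate noise.

def steer_step(x, y, heading, tx, ty, yaw_rate,
               k=2.0, damping=1.0, dt=0.05, gaze_noise=0.02, speed=1.0):
    # "Gaze" samples the target direction relative to heading, with noise
    target_angle = math.atan2(ty - y, tx - x) - heading
    target_angle += random.gauss(0.0, gaze_noise)
    # Accelerate yaw toward nulling the gaze angle, with damping
    yaw_rate += (k * target_angle - damping * yaw_rate) * dt
    heading += yaw_rate * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading, yaw_rate

x = y = heading = yaw = 0.0
for _ in range(400):
    x, y, heading, yaw = steer_step(x, y, heading, tx=10.0, ty=5.0, yaw_rate=yaw)
print(round(x, 2), round(y, 2))  # the agent ends up circling near (10, 5)
```

Raising the gaze-noise term is one way to probe, as the paper does with a real robot, how gracefully such a controller degrades with uncertain visual and extra-retinal estimates.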
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Monocular occlusions determine the perceived shape and depth of occluding surfaces.\n \n \n \n \n\n\n \n Tsirlin, I., Wilcox, L. M., & Allison, R.\n\n\n \n\n\n\n Journal of Vision, 10(6:11): 1-12. 2010.\n \n\n\n\n
\n\n\n\n \n \n \"Monocular-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@article{allison201012,\n\tabstract = {Recent experiments have established that monocular areas arising due to occlusion of one object by another contribute to stereoscopic depth perception. It has been suggested that the primary role of monocular occlusions is to define depth discontinuities and object boundaries in depth. Here we use a carefully designed stimulus to demonstrate empirically that monocular occlusions play an important role in localizing depth edges and defining the shape of the occluding surfaces in depth. We show that the depth perceived via occlusion in our stimuli is not due to the presence of binocular disparity at the boundary and discuss the quantitative nature of depth perception in our stimuli. Our data suggest that the visual system can use monocular information to estimate not only the sign of the depth of the occluding surface but also its magnitude. We also provide preliminary evidence that perceived depth of illusory occluders derived from monocular information can be biased by binocular features.},\n\tauthor = {Tsirlin, I. and Wilcox, L. M. and Allison, R.S.},\n\tdate-modified = {2012-07-02 17:29:58 -0400},\n\tdoi = {10.1167/10.6.11},\n\tjournal = {Journal of Vision},\n\tkeywords = {Stereopsis},\n\tnumber = {6:11},\n\tpages = {1-12},\n\ttitle = {Monocular occlusions determine the perceived shape and depth of occluding surfaces},\n\turl-1 = {https://doi.org/10.1167/10.6.11},\n\tvolume = {10},\n\tyear = {2010}}\n\n
\n
\n\n\n
\n Recent experiments have established that monocular areas arising due to occlusion of one object by another contribute to stereoscopic depth perception. It has been suggested that the primary role of monocular occlusions is to define depth discontinuities and object boundaries in depth. Here we use a carefully designed stimulus to demonstrate empirically that monocular occlusions play an important role in localizing depth edges and defining the shape of the occluding surfaces in depth. We show that the depth perceived via occlusion in our stimuli is not due to the presence of binocular disparity at the boundary and discuss the quantitative nature of depth perception in our stimuli. Our data suggest that the visual system can use monocular information to estimate not only the sign of the depth of the occluding surface but also its magnitude. We also provide preliminary evidence that perceived depth of illusory occluders derived from monocular information can be biased by binocular features.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Perceptual artifacts in random-dot stereograms.\n \n \n \n \n\n\n \n Tsirlin, I., Wilcox, L. M., & Allison, R.\n\n\n \n\n\n\n Perception, 39(3): 349-355. 2010.\n \n\n\n\n
\n\n\n\n \n \n \"Perceptual-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@article{allison2010349-355,\n\tabstract = {Unrestricted positioning of elements in random-dot stereograms with steep disparity gradients, such as stereo-transparent stereograms depicting overlaid surfaces, can produce perceptual artifacts similar to disparity noise. It is shown that these artifacts hinder the segregation of overlaid surfaces in transparent random-dot stereograms and thus disrupt the perception of stereo-transparency. This effect is intensified with increases in the overall element density of the stimuli. We outline the origin of this phenomenon and discuss techniques to prevent such artifacts.},\n\tauthor = {Tsirlin, I. and Wilcox, L. M. and Allison, R.S.},\n\tdate-modified = {2011-05-10 15:05:04 -0400},\n\tdoi = {10.1068/p6252},\n\tjournal = {Perception},\n\tkeywords = {Stereopsis},\n\tnumber = {3},\n\tpages = {349-355},\n\ttitle = {Perceptual artifacts in random-dot stereograms},\n\turl-1 = {https://doi.org/10.1068/p6252},\n\tvolume = {39},\n\tyear = {2010}}\n\n
\n
\n\n\n
\n Unrestricted positioning of elements in random-dot stereograms with steep disparity gradients, such as stereo-transparent stereograms depicting overlaid surfaces, can produce perceptual artifacts similar to disparity noise. It is shown that these artifacts hinder the segregation of overlaid surfaces in transparent random-dot stereograms and thus disrupt the perception of stereo-transparency. This effect is intensified with increases in the overall element density of the stimuli. We outline the origin of this phenomenon and discuss techniques to prevent such artifacts.\n
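One way to see, and to prevent, the artifact described above is to place dots subject to a minimum monocular separation in each eye's image, so that a dot from one depth plane cannot land almost on top of a dot from the other plane in one eye only. The sketch below is our own illustration of that idea, not necessarily the exact technique the paper proposes.

```python
import random

# Sketch (illustrative): 1-D dot placement for a two-plane transparent
# random-dot stereogram. Unconstrained placement can leave near-collisions in
# a single eye, which read as disparity noise; rejecting dots that violate a
# minimum separation in either eye's image is one preventative technique.

def make_dots(n, width, disparities, min_sep=2.0, max_tries=10000):
    placed = []                      # accepted (x_left, x_right) positions
    tries = 0
    while len(placed) < n and tries < max_tries:
        tries += 1
        x = random.uniform(0, width)
        d = random.choice(disparities)        # assign dot to a depth plane
        xl, xr = x - d / 2.0, x + d / 2.0     # half-shift into each eye
        # reject dots that nearly collide with an existing dot in either eye
        if all(abs(xl - pl) > min_sep and abs(xr - pr) > min_sep
               for pl, pr in placed):
            placed.append((xl, xr))
    return placed

dots = make_dots(n=100, width=512, disparities=[-8.0, +8.0])
print(len(dots), "dots with no monocular near-collisions")
```

Note how the rejection rate, and hence the difficulty of avoiding artifacts, rises with element density, consistent with the density effect the abstract reports.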
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Stereoscopic perception of real depths at large distances.\n \n \n \n \n\n\n \n Palmisano, S., Gillam, B., Govan, D. G., Allison, R., & Harris, J. M.\n\n\n \n\n\n\n Journal of Vision, 10(6): 16. 2010.\n \n\n\n\n
\n\n\n\n \n \n \"Stereoscopic-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@article{Palmisano:2010qy,\n\tabstract = {There has been no direct examination of stereoscopic depth perception at very large observation distances and depths. We measured perceptions of depth magnitude at distances where it is frequently reported, without evidence, that stereopsis is non-functional. We adapted methods pioneered at distances up to 9 m by R. S. Allison, B. J. Gillam, and E. Vecellio (2009) for use in a 381-m-long railway tunnel. Pairs of Light Emitting Diode (LED) targets were presented either in complete darkness or with the environment lit as far as the nearest LED (the observation distance). We found that binocular, but not monocular, estimates of the depth between pairs of LEDs increased with their physical depths up to the maximum depth separation tested (248 m). Binocular estimates of depth were much larger with a lit foreground than in darkness and increased as the observation distance increased from 20 to 40 m, indicating that binocular disparity can be scaled for much larger distances than previously realized. Since these observation distances were well beyond the range of vertical disparity and oculomotor cues, this scaling must rely on perspective cues. We also ran control experiments at smaller distances, which showed that estimates of depth and distance correlate poorly and that our metric estimation method gives similar results to a comparison method under the same conditions.},\n\tauthor = {Palmisano, S. and Gillam, B. and Govan, D. G. and Allison, R.S. and Harris, J. M.},\n\tdate-modified = {2012-07-02 17:28:11 -0400},\n\tdoi = {10.1167/10.6.19},\n\tjournal = {Journal of Vision},\n\tkeywords = {Stereopsis},\n\tnumber = {6},\n\tpages = {16},\n\ttitle = {Stereoscopic perception of real depths at large distances},\n\turl-1 = {https://doi.org/10.1167/10.6.19},\n\tvolume = {10},\n\tyear = {2010}}\n\n
\n
\n\n\n
\n There has been no direct examination of stereoscopic depth perception at very large observation distances and depths. We measured perceptions of depth magnitude at distances where it is frequently reported, without evidence, that stereopsis is non-functional. We adapted methods pioneered at distances up to 9 m by R. S. Allison, B. J. Gillam, and E. Vecellio (2009) for use in a 381-m-long railway tunnel. Pairs of Light Emitting Diode (LED) targets were presented either in complete darkness or with the environment lit as far as the nearest LED (the observation distance). We found that binocular, but not monocular, estimates of the depth between pairs of LEDs increased with their physical depths up to the maximum depth separation tested (248 m). Binocular estimates of depth were much larger with a lit foreground than in darkness and increased as the observation distance increased from 20 to 40 m, indicating that binocular disparity can be scaled for much larger distances than previously realized. Since these observation distances were well beyond the range of vertical disparity and oculomotor cues, this scaling must rely on perspective cues. We also ran control experiments at smaller distances, which showed that estimates of depth and distance correlate poorly and that our metric estimation method gives similar results to a comparison method under the same conditions.\n
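The disparity geometry behind the question is worth making explicit. Assuming a 65 mm interocular, the exact two-point disparity for the distances quoted in the abstract works out as follows (our back-of-envelope sketch, not the paper's analysis):

```python
import math

# Exact two-point relative disparity: delta = e * (1/D_near - 1/D_far).
# Distances match the abstract's observation distances (20-40 m) and largest
# depth interval (248 m); the 65 mm interocular is an assumed typical value.

def disparity_arcmin(e, d_near, d_far):
    return math.degrees(e * (1.0 / d_near - 1.0 / d_far)) * 60.0

e = 0.065
for d_near in (20.0, 40.0):
    d_far = d_near + 248.0
    print(f"near {d_near:.0f} m, far {d_far:.0f} m: "
          f"{disparity_arcmin(e, d_near, d_far):.1f} arcmin")
# ~10 arcmin at 20 m and ~5 arcmin at 40 m: far above stereoacuity thresholds.
```

So the disparity signal is comfortably available at these distances; the open problem the paper addresses is how the visual system scales it into metric depth when vergence and vertical-disparity cues have run out.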
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n The unassisted visual system on earth and in space.\n \n \n \n \n\n\n \n Harris, L. R., Jenkin, M., Jenkin, H., Dyde, R., Zacher, J., & Allison, R.\n\n\n \n\n\n\n Journal of Vestibular Research-Equilibrium and Orientation, 20(1-2): 25-30. 2010.\n \n\n\n\n
\n\n\n\n \n \n \"The-1\n  \n \n \n \"The-2\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@article{allison201025-30,\n\tabstract = {Chuck Oman has been a guide and mentor for research in human perception and performance during space exploration for over 25 years. His research has provided a solid foundation for our understanding of how humans cope with the challenges and ambiguities of sensation and perception in space. In many of the environments associated with work in space the human visual system must operate with unusual combinations of visual and other perceptual cues. On Earth physical acceleration cues are normally available to assist the visual system in interpreting static and dynamic visual features. Here we consider two cases where the visual system is not assisted by such cues. Our first experiment examines perceptual stability when the normally available physical cues to linear acceleration are absent. Our second experiment examines perceived orientation when there is no assistance from the physically sensed direction of gravity. In both cases the effectiveness of vision is paradoxically reduced in the absence of physical acceleration cues. The reluctance to rely heavily on vision represents an important human factors challenge to efficient performance in the space environment.},\n\tauthor = {Harris, L. R. and Jenkin, M. and Jenkin, H. and Dyde, R. and Zacher, J. and Allison, R.S.},\n\tdate-modified = {2011-05-22 13:21:30 -0400},\n\tdoi = {10.3233/VES-2010-0352},\n\tjournal = {Journal of Vestibular Research-Equilibrium and Orientation},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tnumber = {1-2},\n\tpages = {25-30},\n\ttitle = {The unassisted visual system on earth and in space},\n\turl-1 = {https://doi.org/10.3233/VES-2010-0352},\n\tvolume = {20},\n\tyear = {2010}}\n\n
\n
\n\n\n
\n Chuck Oman has been a guide and mentor for research in human perception and performance during space exploration for over 25 years. His research has provided a solid foundation for our understanding of how humans cope with the challenges and ambiguities of sensation and perception in space. In many of the environments associated with work in space the human visual system must operate with unusual combinations of visual and other perceptual cues. On Earth physical acceleration cues are normally available to assist the visual system in interpreting static and dynamic visual features. Here we consider two cases where the visual system is not assisted by such cues. Our first experiment examines perceptual stability when the normally available physical cues to linear acceleration are absent. Our second experiment examines perceived orientation when there is no assistance from the physically sensed direction of gravity. In both cases the effectiveness of vision is paradoxically reduced in the absence of physical acceleration cues. The reluctance to rely heavily on vision represents an important human factors challenge to efficient performance in the space environment.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Cue conflict between disparity change and looming in the perception of motion in depth.\n \n \n \n \n\n\n \n Gonzalez, E. G., Allison, R., Ono, H., & Vinnikov, M.\n\n\n \n\n\n\n Vision Research, 50(2): 136-143. 2010.\n \n\n\n\n
\n\n\n\n \n \n \"Cue-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@article{allison2010136-143,\n\tabstract = {We hypothesized that it is the conflict between various cues to distance that has produced results purportedly showing that vergence eye movements induced by disparity change are not an effective cue for depth. Single and compound stimuli were used to examine the perceived motion in depth (MID) produced by simulated motion oscillations specified by disparity, relative disparity, and/or looming. Estimations of the extent of MID and binocularly recorded eye movements showed that the vergence induced by disparity change is indeed an effective cue for motion in depth in conditions where looming information does not conflict with it. When looming and disparity are in conflict, looming is the stronger cue.},\n\tauthor = {Gonzalez, E. G. and Allison, R.S. and Ono, H. and Vinnikov, M.},\n\tdate-modified = {2011-05-10 14:44:22 -0400},\n\tdoi = {10.1016/j.visres.2009.11.005},\n\tjournal = {Vision Research},\n\tkeywords = {Motion in depth},\n\tnumber = {2},\n\tpages = {136-143},\n\ttitle = {Cue conflict between disparity change and looming in the perception of motion in depth},\n\turl-1 = {https://doi.org/10.1016/j.visres.2009.11.005},\n\tvolume = {50},\n\tyear = {2010}}\n\n
\n
\n\n\n
\n We hypothesized that it is the conflict between various cues to distance that has produced results purportedly showing that vergence eye movements induced by disparity change are not an effective cue for depth. Single and compound stimuli were used to examine the perceived motion in depth (MID) produced by simulated motion oscillations specified by disparity, relative disparity, and/or looming. Estimations of the extent of MID and binocularly recorded eye movements showed that the vergence induced by disparity change is indeed an effective cue for motion in depth in conditions where looming information does not conflict with it. When looming and disparity are in conflict, looming is the stronger cue.\n
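The two signals being traded off can be written down directly. In this sketch (ours; the sizes and speeds are arbitrary assumptions) an object of physical size s at distance Z approaching at speed v generates a looming rate of about s*v/Z^2 and an absolute-disparity change rate of about e*v/Z^2, so a display can drive either signal while nulling the other to create a cue conflict.

```python
# Our illustration of the two motion-in-depth signals (not the experiment
# code). Small-angle approximations: angular size theta ~ s/Z, so the looming
# rate is s*v/Z**2; absolute disparity ~ e/Z, so its rate is e*v/Z**2.

def looming_rate(s, v, Z):
    return s * v / Z ** 2      # rad/s of angular expansion

def disparity_rate(e, v, Z):
    return e * v / Z ** 2      # rad/s of disparity change

s, e, v, Z = 0.2, 0.065, 1.0, 2.0   # 20 cm object, 2 m away, approaching 1 m/s
print(looming_rate(s, v, Z), disparity_rate(e, v, Z))
# A conflict display holds retinal size constant (looming nulled) while the
# disparity still changes, or looms while disparity is clamped.
```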
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Apparent motion during saccadic suppression periods.\n \n \n \n \n\n\n \n Allison, R., Schumacher, J., Sadr, S., & Herpers, R.\n\n\n \n\n\n\n Experimental Brain Research, 202(1): 155-169. 2010.\n \n\n\n\n
\n\n\n\n \n \n \"Apparent-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@article{allison2010155-169,\n\tabstract = {Sensitivity to many visual stimuli, and, in particular, image displacement, is reduced during a change in fixation (saccade) compared to when the eye is still. In these experiments, we studied the sensitivity of observers to ecologically relevant image translations of large, complex, real world scenes either during horizontal saccades or during fixation. In the first experiment, we found that such displacements were much less detectable during saccades than during fixation. Qualitatively, even when trans-saccadic scene changes were detectable, they were less salient and appeared slower than equivalent changes in the absence of a saccade. Two further experiments followed up on this observation and estimated the perceived magnitude of trans-saccadic apparent motion using a two-interval forced-choice procedure (Experiment 2) and a magnitude estimation procedure (Experiment 3). Both experiments suggest that trans-saccadic displacements were perceived as smaller than equivalent inter-saccadic displacements. We conclude that during saccades, the magnitude of the apparent motion signal is attenuated as well as its detectability.},\n\tauthor = {Allison, R.S. and Schumacher, J. and Sadr, S. and Herpers, R.},\n\tdate-modified = {2011-05-11 13:32:40 -0400},\n\tdoi = {10.1007/s00221-009-2120-y},\n\tjournal = {Experimental Brain Research},\n\tkeywords = {Eye Movements & Tracking},\n\tnumber = {1},\n\tpages = {155-169},\n\ttitle = {Apparent motion during saccadic suppression periods},\n\turl-1 = {https://doi.org/10.1007/s00221-009-2120-y},\n\tvolume = {202},\n\tyear = {2010}}\n\n
\n
\n\n\n
\n Sensitivity to many visual stimuli, and, in particular, image displacement, is reduced during a change in fixation (saccade) compared to when the eye is still. In these experiments, we studied the sensitivity of observers to ecologically relevant image translations of large, complex, real world scenes either during horizontal saccades or during fixation. In the first experiment, we found that such displacements were much less detectable during saccades than during fixation. Qualitatively, even when trans-saccadic scene changes were detectable, they were less salient and appeared slower than equivalent changes in the absence of a saccade. Two further experiments followed up on this observation and estimated the perceived magnitude of trans-saccadic apparent motion using a two-interval forced-choice procedure (Experiment 2) and a magnitude estimation procedure (Experiment 3). Both experiments suggest that trans-saccadic displacements were perceived as smaller than equivalent inter-saccadic displacements. We conclude that during saccades, the magnitude of the apparent motion signal is attenuated as well as its detectability.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n inbook\n \n \n (1)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n .\n \n \n \n \n\n\n \n Allison, R., Brandwood, T., Vinnikov, M., Zacher, J., Jennings, S., Macuda, T., Thomas, P., & Palmisano, S.\n\n\n \n\n\n\n Psychophysics of night vision device halo, pages 123-140. Niall, K., editor(s). Springer-Verlag, New York, NY, 2010.\n \n\n\n\n
\n\n\n\n \n \n \"Psychophysics-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@inbook{Allison:2010yg,\n\tabstract = {In modern Night Vision Devices (NVDs) `halo' around bright light sources remains a salient imaging artifact. Although a common feature of image intensified imagery, little is known of the perceptual and operational effects of this device limitation. This paper describes two related sets of experiments. In the first set of experiments, we provide quantitative measurements of Night Vision Device (NVD) halos formed by light sources as a function of intensity and distance. This characterization allows for analysis of the possible effects of halo on human perception through NVDs. In the second set of experiments, the effects of halation on the perception of depth and environmental layout are investigated psychophysically. The custom simulation environment used and results from psychophysical experiments designed to analyze halo-induced errors in slope estimation are presented. Accurate simulation of image intensifier physics and NVD scene modeling is challenging and computationally demanding, yet needs to be performed in real-time at high frame rates and at high-resolution in advanced military simulators. Given the constraints of the real-time simulation, it is important to understand how NVD artifacts impact task performance in order to make rational engineering decisions about the required level of fidelity of the NVD simulation. A salient artifact of NVD viewing is halo, the phenomenon where the image of a bright light source appears surrounded by a disc-like halo. High-fidelity physical modeling of these halo phenomena would be computationally expensive. To evaluate the level of approximation that would be sufficient for training purposes, human factors data are required.\n\nNVD halos generated by light sources in a scene have a size that is approximately invariant with intensity and distance. Objective and subjective measures of halo geometry indicate that halo size, when halo is present, is relatively invariant of target distance or intensity. This property results in perceptual distortions and strong illusions with isolated stimuli. In complex scenes, systematic distortions of slant are predicted due to an imposed texture gradient created by the halo. We investigated this hypothesis in psychophysical experiments. The results suggest that perception of slant and glideslope in complex scenes is remarkably tolerant of texture gradients imposed by NVG halo. These results are discussed in terms of NVG simulation and of the ability of human operators to compensate for perceptual distortions.\n},\n\taddress = {New York, NY},\n\tauthor = {Allison, R.S. and Brandwood, T. and Vinnikov, M. and Zacher, J.E. and Jennings, S. and Macuda, T. and Thomas, P.J. and Palmisano, S.A.},\n\tbooktitle = {Vision and Displays for Military and Security Applications: the Advanced Deployable Day/Night Simulation Project},\n\tdate-added = {2011-05-06 10:53:34 -0400},\n\tdate-modified = {2014-09-26 02:19:07 +0000},\n\tdoi = {10.1007/978-1-4419-1723-2_10},\n\teditor = {K. Niall},\n\tkeywords = {Night Vision},\n\tpages = {123-140},\n\tpublisher = {Springer-Verlag},\n\trating = {2},\n\ttitle = {Psychophysics of night vision device halo},\n\turl-1 = {https://doi.org/10.1007/978-1-4419-1723-2_10},\n\tyear = {2010}}\n\n
\n
\n\n\n
\n In modern Night Vision Devices (NVDs) `halo' around bright light sources remains a salient imaging artifact. Although a common feature of image intensified imagery, little is known of the perceptual and operational effects of this device limitation. This paper describes two related sets of experiments. In the first set of experiments, we provide quantitative measurements of Night Vision Device (NVD) halos formed by light sources as a function of intensity and distance. This characterization allows for analysis of the possible effects of halo on human perception through NVDs. In the second set of experiments, the effects of halation on the perception of depth and environmental layout are investigated psychophysically. The custom simulation environment used and results from psychophysical experiments designed to analyze halo-induced errors in slope estimation are presented. Accurate simulation of image intensifier physics and NVD scene modeling is challenging and computationally demanding, yet needs to be performed in real-time at high frame rates and at high-resolution in advanced military simulators. Given the constraints of the real-time simulation, it is important to understand how NVD artifacts impact task performance in order to make rational engineering decisions about the required level of fidelity of the NVD simulation. A salient artifact of NVD viewing is halo, the phenomenon where the image of a bright light source appears surrounded by a disc-like halo. High-fidelity physical modeling of these halo phenomena would be computationally expensive. To evaluate the level of approximation that would be sufficient for training purposes, human factors data are required. NVD halos generated by light sources in a scene have a size that is approximately invariant with intensity and distance. Objective and subjective measures of halo geometry indicate that halo size, when halo is present, is relatively invariant of target distance or intensity. This property results in perceptual distortions and strong illusions with isolated stimuli. In complex scenes, systematic distortions of slant are predicted due to an imposed texture gradient created by the halo. We investigated this hypothesis in psychophysical experiments. The results suggest that perception of slant and glideslope in complex scenes is remarkably tolerant of texture gradients imposed by NVG halo. These results are discussed in terms of NVG simulation and of the ability of human operators to compensate for perceptual distortions. \n
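The predicted texture-gradient distortion follows from the size invariance alone. In this sketch (ours, with assumed numbers) a real object's angular size falls off as 1/distance while the halo's angular size does not, so the halo-to-object ratio grows with distance and flattens the gradient a row of lights would normally project:

```python
import math

# Illustration of why a size-invariant halo imposes a texture gradient.
# The constant halo diameter and the 0.5 m source size are assumptions.

HALO_DEG = 1.0   # assumed constant angular halo diameter, in degrees

def source_deg(s, d):
    """Angular size of a real object of size s (m) at distance d (m)."""
    return math.degrees(2 * math.atan(s / (2 * d)))

for d in (10.0, 50.0, 250.0):
    print(f"{d:5.0f} m: source {source_deg(0.5, d):5.2f} deg, "
          f"halo {HALO_DEG:.2f} deg")
# The source shrinks ~25x over this range while the halo does not, so halos
# impose a nearly uniform texture where geometry predicts a strong gradient.
```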
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n incollection\n \n \n (5)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n Comparing depth interval estimates with motion parallax and stereopsis at distances beyond interaction space.\n \n \n \n\n\n \n Govan, D., Gillam, B., Palmisano, S. A., & Allison, R. S.\n\n\n \n\n\n\n In 37th Australasian Experimental Psychology Conference. 2010.\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Govan:2010ys,\n\tauthor = {Govan, D. and Gillam, B. and Palmisano, S. A. and Allison, R. S.},\n\tbooktitle = {37th Australasian Experimental Psychology Conference},\n\tdate-added = {2012-08-13 20:01:55 +0000},\n\tdate-modified = {2012-08-13 20:01:55 +0000},\n\tkeywords = {Stereopsis},\n\ttitle = {Comparing depth interval estimates with motion parallax and stereopsis at distances beyond interaction space},\n\tyear = {2010}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Interactions between monocular occlusions and binocular disparity in the perceived depth of illusory surfaces.\n \n \n \n \n\n\n \n Tsirlin, I., Wilcox, L. M., & Allison, R.\n\n\n \n\n\n\n In Vision Sciences Society Annual Meeting, Journal of Vision, volume 10, pages 373. 2010.\n \n\n\n\n
\n\n\n\n \n \n \"Interactions-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Tsirlin:2010rp,\n\tauthor = {Tsirlin, I. and Wilcox, L. M. and Allison, R.S.},\n\tbooktitle = {Vision Sciences Society Annual Meeting, Journal of Vision},\n\tdate-added = {2011-05-06 14:54:55 -0400},\n\tdate-modified = {2012-07-02 13:42:36 -0400},\n\tdoi = {10.1167/10.7.373},\n\tkeywords = {Depth perception},\n\tnumber = {7},\n\tpages = {373},\n\ttitle = {Interactions between monocular occlusions and binocular disparity in the perceived depth of illusory surfaces},\n\turl-1 = {https://doi.org/10.1167/10.7.373},\n\tvolume = {10},\n\tyear = {2010}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Infrared Tracking of the Near Triad.\n \n \n \n \n\n\n \n Bogdan, N., Allison, R., & Suryakumar, R.\n\n\n \n\n\n\n In Vision Sciences Society Annual Meeting, Journal of Vision, volume 10, pages 507. 2010.\n \n\n\n\n
\n\n\n\n \n \n \"Infrared-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Bogdan:2010mb,\n\tabstract = {The oculomotor response when viewing a near target is characterized by `the near triad': pupil miosis (constriction), binocular convergence and increased accommodation. Most existing eye-tracking systems lack the ability to measure all three of these parameters and are usually specialized to handle only one. Systems that can measure the complete near triad suffer from slow measurement rates, require off-line analysis, or are cumbersome and inconvenient to use. Separate specialized systems are sometimes combined ad hoc, but such systems are often complex in architecture and suffer severe runtime limitations. We describe a video-based eye tracking system based on eccentric photorefraction that allows for remote, high-speed measurement of all three components of the near triad. This provides for precise, simultaneous measurement of oculomotor dynamics as well as having the benefit of being safe and non-intrusive. An extended infrared source illuminated the subject's eye. The corneal reflex and `bright pupil' reflections of this source were imaged by an infrared sensitive camera and used to track gaze direction and pupil diameter. Such eccentric illumination combined with a knife-edge camera aperture allowed the accommodative state of the eye to be estimated from measurements of the gradient of image intensity across the pupil. Real-time measurements are facilitated by detection of Purkinje images to define areas of interest for each pupil followed by pupil edge detection and fitting to an ellipse model. Once the pupils are located, data about the brightness profile, diameter, corneal reflex and pupil center are extracted and processed to calculate the near triad. The system will be used in ongoing experiments assessing the role of oculomotor cues in perception of motion in depth. },\n\tauthor = {Bogdan, N. and Allison, R.S. and Suryakumar, R.},\n\tbooktitle = {Vision Sciences Society Annual Meeting, Journal of Vision},\n\tdate-added = {2011-05-06 14:52:40 -0400},\n\tdate-modified = {2012-07-02 13:42:36 -0400},\n\tdoi = {10.1167/10.7.507},\n\tkeywords = {Eye Movements & Tracking},\n\tnumber = {7},\n\tpages = {507},\n\ttitle = {Infrared Tracking of the Near Triad},\n\turl-1 = {https://doi.org/10.1167/10.7.507},\n\tvolume = {10},\n\tyear = {2010}}\n\n
\n
\n\n\n
\n The oculomotor response when viewing a near target is characterized by `the near triad': pupil miosis (constriction), binocular convergence and increased accommodation. Most existing eye-tracking systems lack the ability to measure all three of these parameters and are usually specialized to handle only one. Systems that can measure the complete near triad suffer from slow measurement rates, require off-line analysis, or are cumbersome and inconvenient to use. Separate specialized systems are sometimes combined ad hoc, but such systems are often complex in architecture and suffer severe runtime limitations. We describe a video-based eye tracking system based on eccentric photorefraction that allows for remote, high-speed measurement of all three components of the near triad. This provides for precise, simultaneous measurement of oculomotor dynamics as well as having the benefit of being safe and non-intrusive. An extended infrared source illuminated the subject's eye. The corneal reflex and `bright pupil' reflections of this source were imaged by an infrared sensitive camera and used to track gaze direction and pupil diameter. Such eccentric illumination combined with a knife-edge camera aperture allowed the accommodative state of the eye to be estimated from measurements of the gradient of image intensity across the pupil. Real-time measurements are facilitated by detection of Purkinje images to define areas of interest for each pupil followed by pupil edge detection and fitting to an ellipse model. Once the pupils are located, data about the brightness profile, diameter, corneal reflex and pupil center are extracted and processed to calculate the near triad. The system will be used in ongoing experiments assessing the role of oculomotor cues in perception of motion in depth. \n
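A minimal reconstruction of the per-eye processing chain described above, assuming OpenCV 4; the thresholds and the slope-to-dioptre calibration constant are invented placeholders, not values from the actual system:

```python
import cv2
import numpy as np

# Sketch under stated assumptions: given an 8-bit grayscale bright-pupil IR
# crop of one eye, threshold to isolate the pupil, fit an ellipse for center
# and diameter, then estimate accommodation from the brightness slope across
# the pupil (the eccentric-photorefraction signal).

def analyse_eye(gray):
    _, mask = cv2.threshold(gray, 120, 255, cv2.THRESH_BINARY)  # bright pupil
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    pupil = max(contours, key=cv2.contourArea)        # needs >= 5 points
    (cx, cy), (w, h), angle = cv2.fitEllipse(pupil)   # center + axis lengths
    # Photorefraction: linear slope of the vertical brightness profile
    ys = np.arange(int(cy - h / 4), int(cy + h / 4))
    profile = gray[ys, int(cx)].astype(float)
    slope = np.polyfit(ys, profile, 1)[0]
    K = 0.05   # hypothetical slope-to-dioptre calibration constant
    return (cx, cy), (w + h) / 2.0, K * slope   # gaze ref, diameter, ~dioptres
```

Running this on both eyes per frame yields the three triad components: gaze direction from the two pupil centers (relative to the corneal reflexes), pupil diameter directly, and an accommodation estimate from the calibrated slope.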
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Depth of Field in Stereoscopic Moving Images.\n \n \n \n \n\n\n \n Allison, R., Wilcox, L., & Elder, J.\n\n\n \n\n\n\n In SMTPE BOOT CAMP IV—The Next Dimension: 3D, Mobility and More. Toronto, Canada, June 8th-9th 2010.\n \n\n\n\n
\n\n\n\n \n \n \"Depth-1\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Allison:2010qt,\n\taddress = {Toronto, Canada},\n\tauthor = {Allison, R.S. and Wilcox, L.M. and Elder, J.},\n\tbooktitle = {SMTPE BOOT CAMP IV---The Next Dimension: 3D, Mobility and More},\n\tdate-added = {2011-05-06 14:39:03 -0400},\n\tdate-modified = {2011-05-18 15:46:28 -0400},\n\tkeywords = {Stereopsis},\n\tmonth = {June 8th-9th},\n\ttitle = {Depth of Field in Stereoscopic Moving Images},\n\turl-1 = {https://wiki.cse.yorku.ca/lab/percept/_media/public:ryerson_workshop.pdf},\n\tyear = {2010}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Night-Vision Device Aided Aerial Forest Fire Detection: Experience in a Controlled Test Grid.\n \n \n \n \n\n\n \n Andriychuk, T., Tomkins, L., Zacher, J., Ballagh, M., McAlpine, R., Doig, T., Jennings, S., Milner, A., & Allison, R.\n\n\n \n\n\n\n In Wildland Fire Canada 2010. Kitchener-Waterloo, Canada, October 5th-7th 2010.\n \n\n\n\n
\n\n\n\n \n \n \"Night-Vision-1\n  \n \n \n \"Night-Vision-2\n  \n \n \n \"Night-Vision-3\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Andriychuk:2010zk,\n\taddress = {Kitchener-Waterloo, Canada},\n\tauthor = {Andriychuk, T. and Tomkins, L. and Zacher, J. and Ballagh, M. and McAlpine, R. and Doig, T. and Jennings, S. and Milner, A. and Allison, R.S.},\n\tbooktitle = {Wildland Fire Canada 2010},\n\tdate-added = {2011-05-06 14:21:43 -0400},\n\tdate-modified = {2011-05-18 16:08:06 -0400},\n\tkeywords = {Night Vision},\n\tmonth = {October 5th-7th},\n\ttitle = {Night-Vision Device Aided Aerial Forest Fire Detection: Experience in a Controlled Test Grid},\n\turl-1 = {https://percept.eecs.yorku.ca/papers/wildland fire 2010 abstract.pdf},\n\turl-2 = {http://www.wildlandfirecanada.ca/Presentations/Tuesday/Afernoon/New Tech/Tetyana/NVG Presentation final draft.pdf},\n\turl-3 = {http://www.wildlandfirecanada.ca/Presentations/Tuesday/Afernoon/New Tech/Tetyana/Tetyana.wmv},\n\tyear = {2010}}\n\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n inproceedings\n \n \n (5)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Is brighter always better? The effects of display and ambient luminance on preferences for digital signage.\n \n \n \n \n\n\n \n Guterman, P., Fukuda, K., Wilcox, L., & Allison, R.\n\n\n \n\n\n\n In Society for Information Display Annual Meeting, Seattle, Washington, 05 2010. Society for Information Display\n \n\n\n\n
\n\n\n\n \n \n \"IsPaper\n  \n \n \n \"Is-1\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@inproceedings{Guterman:2010sv,\n\taddress = {Seattle, Washington},\n\tauthor = {Guterman, P. and Fukuda, K. and Wilcox, L.M. and Allison, R.S.},\n\tbooktitle = {Society for Information Display Annual Meeting},\n\tdate-added = {2011-05-09 13:10:30 -0400},\n\tdate-modified = {2015-01-26 19:39:16 +0000},\n\tkeywords = {Misc.},\n\tmonth = {05},\n\torganization = {Society for Information Display},\n\ttitle = {Is brighter always better? The effects of display and ambient luminance on preferences for digital signage},\n\turl = {http://percept.eecs.yorku.ca/papers/SID_2010_Guterman-Final%20small.pdf},\n\tyear = {2010}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Sensitivity to monocular occlusions in stereoscopic imagery: Implications for S3D content creation, distribution and exhibition.\n \n \n \n \n\n\n \n Wilcox, L., Tsirlin, I., & Allison, R.\n\n\n \n\n\n\n In SMPTE International Conference on Stereoscopic 3D for Media and Entertainment, New York, 07 2010. \n \n\n\n\n
\n\n\n\n \n \n \"SensitivityPaper\n  \n \n \n \"Sensitivity-1\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@inproceedings{Wilcox:2010ja,\n\taddress = {New York},\n\tauthor = {Wilcox, L.M. and Tsirlin, I. and Allison, R.S.},\n\tbooktitle = {{SMPTE} International Conference on Stereoscopic 3D for Media and Entertainment},\n\tdate-added = {2011-05-09 13:08:42 -0400},\n\tdate-modified = {2015-01-26 19:35:03 +0000},\n\tkeywords = {Stereopsis},\n\tmonth = {07},\n\ttitle = {Sensitivity to monocular occlusions in stereoscopic imagery: Implications for {S3D} content creation, distribution and exhibition},\n\turl = {http://percept.eecs.yorku.ca/papers/SMPTE%20manuscript%20LW%202010.pdf},\n\tyear = {2010}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Using VPython for psychophysics.\n \n \n \n \n\n\n \n Guterman, P., Allison, R., & Palmisano, S.\n\n\n \n\n\n\n In Proceedings of the 26th Annual Meeting of the International Society for Psychophysics (ISP), Padua, Italy, 10 2010. \n \n\n\n\n
\n\n\n\n \n \n \"Using-1\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@inproceedings{Guterman:2010vg,\n\taddress = {Padua, Italy},\n\tauthor = {Guterman, P. and Allison, R.S. and Palmisano, S.A.},\n\tbooktitle = {Proceedings of the 26th Annual Meeting of the International Society for Psychophysics (ISP)},\n\tdate-added = {2011-05-09 13:07:10 -0400},\n\tdate-modified = {2011-05-18 16:28:52 -0400},\n\tkeywords = {Misc.},\n\tmonth = {10},\n\ttitle = {Using VPython for psychophysics},\n\turl-1 = {http://yorku.academia.edu/PearlGuterman/Papers/319182/Using_VPython_for_psychophysics},\n\tyear = {2010}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Thermal imaging as a way to classify cognitive workload.\n \n \n \n \n\n\n \n Stemberger, J., Allison, R., & Schnell, T.\n\n\n \n\n\n\n In Seventh Canadian Conference on Computer and Robot Vision (CRV2010), Ottawa, Canada, May 31st- June 2nd, 2010 2010. \n \n\n\n\n
\n\n\n\n \n \n \"Thermal-1\n  \n \n \n \"Thermal-2\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@inproceedings{Stemberger:2010qm,\n\tabstract = {As epitomized in DARPA's 'Augmented Cognition' program, next generation avionics suites are envisioned as sensing, inferring, responding to and ultimately enhancing the cognitive state and capabilities of the pilot. Inferring such complex behavioural states from imagery of the face is a challenging task and multimodal approaches have been favoured for robustness. We have developed and evaluated the feasibility of a system for estimation of cognitive workload levels based on analysis of facial skin temperature. The system is based on thermal infrared imaging of the face, head pose estimation, measurement of the temperature variation across regions of the face and an artificial neural network classifier. The technique was evaluated in a controlled laboratory experiment using subjective measures of workload across tasks as a standard. The system was capable of accurately classifying mental workload into high, medium and low workload levels 81\% of the time. The suitability of facial thermography for integration into a multimodal augmented cognition sensor suite is discussed.},\n\taddress = {Ottawa, Canada},\n\tauthor = {Stemberger, J. and Allison, R.S. and Schnell, T.},\n\tbooktitle = {Seventh Canadian Conference on Computer and Robot Vision (CRV2010)},\n\tdate-added = {2011-05-06 12:12:04 -0400},\n\tdate-modified = {2011-05-18 16:22:26 -0400},\n\tdoi = {10.1109/CRV.2010.37},\n\tkeywords = {Neural Avionics},\n\tmonth = {May 31st- June 2nd, 2010},\n\ttitle = {Thermal imaging as a way to classify cognitive workload},\n\turl-1 = {https://doi.org/10.1109/CRV.2010.37},\n\tyear = {2010}}\n\n
\n
\n\n\n
\n As epitomized in DARPA's 'Augmented Cognition' program, next generation avionics suites are envisioned as sensing, inferring, responding to and ultimately enhancing the cognitive state and capabilities of the pilot. Inferring such complex behavioural states from imagery of the face is a challenging task and multimodal approaches have been favoured for robustness. We have developed and evaluated the feasibility of a system for estimation of cognitive workload levels based on analysis of facial skin temperature. The system is based on thermal infrared imaging of the face, head pose estimation, measurement of the temperature variation across regions of the face and an artificial neural network classifier. The technique was evaluated in a controlled laboratory experiment using subjective measures of workload across tasks as a standard. The system was capable of accurately classifying mental workload into high, medium and low workload levels 81% of the time. The suitability of facial thermography for integration into a multimodal augmented cognition sensor suite is discussed.\n
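The classification stage can be sketched in a few lines. Everything below is an assumption-laden illustration (synthetic data, invented facial regions and network size), not the authors' implementation: region-wise temperature changes from a resting baseline feed a small neural network that labels workload as low, medium, or high.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Sketch of a thermal-feature workload classifier. The ROI choice, feature
# definition, network size and data here are all our own assumptions.

rng = np.random.default_rng(0)
n = 300
labels = rng.integers(0, 3, n)                  # 0=low, 1=medium, 2=high
# Toy effect: nose-tip cooling scales with workload; other ROIs mostly noise
nose = -0.4 * labels + rng.normal(0, 0.2, n)
forehead = 0.1 * labels + rng.normal(0, 0.2, n)
cheeks = rng.normal(0, 0.2, n)
X = np.column_stack([nose, forehead, cheeks])   # degC change from baseline

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X[:200], labels[:200])
print("held-out accuracy:", clf.score(X[200:], labels[200:]))
```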
Contingency evaluation of gaze-contingent displays for real-time visual field simulations. Vinnikov, M., & Allison, R. S. In Proceedings of the 2010 Symposium on Eye-Tracking Research and Applications, pages 263-266, Austin, Texas, 2010. ACM.
@inproceedings{allison2010263-266,\n\taddress = {Austin, Texas},\n\tauthor = {Vinnikov, Margarita and Allison, Robert S.},\n\tbooktitle = {Proceedings of the 2010 Symposium on Eye-Tracking Research and Applications},\n\tdate-modified = {2015-01-26 19:41:10 +0000},\n\tdoi = {10.1145/1743666.1743728},\n\tkeywords = {Eye Movements & Tracking},\n\tpages = {263-266},\n\tpublisher = {ACM},\n\ttitle = {Contingency evaluation of gaze-contingent displays for real-time visual field simulations},\n\turl = {http://percept.eecs.yorku.ca/papers/p263-vinnikov.pdf},\n\turl-1 = {https://doi.org/10.1145/1743666.1743728},\n\tyear = {2010}}
techreport (1)
Report on the Pembroke NVD-Aided Aerial Forest Fire Detection Trials. Andriychuk, T., Tomkins, L., Zacher, J. E., Ballagh, M., McAlpine, R., Milner, A., & Allison, R. Technical Report CSE-2010-09, York University, Toronto, Canada, 2010.
@techreport{Andriychuk:2010tg,\n\tabstract = {Executive Summary\n\nEarly detection of forest fires, while still in their emergent stages, could greatly improve suppression effectiveness and reduce overall costs. When used for aerial detection patrols, night vision devices (NVD) have potential to improve response times to potential starts and to improve sensitivity. The flight trials described in this report were designed to explore the potential for NVD aided detection in a real operational context but with experimental control and `ground truth' knowledge of the fire source.\n\nA series of flight trials were run April 22 to 25, 2010 in the vicinity of the city of Pembroke in the Ottawa Valley region of Eastern Ontario. Small test fires were set at known locations within the Ontario Ministry of Natural Resources (OMNR) infrared (IR) test grid and continuously monitored by remote data loggers. NVD flight detection patrols for an EC130 helicopter were planned in the region of the IR grid. The observers were the only members of the flight crew responsible for detecting fires and had no knowledge of the fire configuration or location. Each observer flew two detection patrols on separate nights with different configurations of sources.\n\nThe average detection distance for a fire across all nights was 3,584m (95\\%CI: 2,697m to 4,471m). The average discrimination distance, where a source could be confidently determined to be a fire or distracter, was 1,193m (95\\%CI: 944m to 1,442m). The hit rate was 68\\% over the course of the flight trials, higher than expectations based on the small fire sources and novice observers. The hit rate showed improvement over time, likely as observers became familiar with the task and terrain. There was only a single false alarm, when an observer falsely identified a non-fire target as a fire. Correct rejections were quite common (30 events), likely due to the relatively large number of environmental lights in the test area.\n\nThe results demonstrate that small fires can be detected and reliably discriminated using NVDs at night from distances compatible with typical daytime aerial detection patrols. The trials provide guidance on altitude and spacing requirements for detection patrols and for cues to discriminate environmental light sources from fires. Analysis of detection performance in ongoing field experiments will help to evaluate the utility of and determine best practices for NVD-aided detection of wildland fires.},\n\taddress = {Toronto, Canada},\n\tauthor = {Andriychuk, T. and Tomkins, L. and Zacher, J. E. and Ballagh, M. and McAlpine, R. and Milner, A. and Allison, R.S.},\n\tdate-added = {2011-05-11 11:30:48 -0400},\n\tdate-modified = {2011-05-18 16:13:00 -0400},\n\tinstitution = {York University},\n\tkeywords = {Night Vision},\n\tnumber = {Technical Report CSE-2010-09},\n\ttitle = {Report on the Pembroke NVD-Aided Aerial Forest Fire Detection Trials},\n\turl-1 = {http://www.cse.yorku.ca/techreports/2010/?abstract=CSE-2010-09},\n\tyear = {2010}}\n\n
Executive Summary: Early detection of forest fires, while still in their emergent stages, could greatly improve suppression effectiveness and reduce overall costs. When used for aerial detection patrols, night vision devices (NVD) have potential to improve response times to potential starts and to improve sensitivity. The flight trials described in this report were designed to explore the potential for NVD aided detection in a real operational context but with experimental control and `ground truth' knowledge of the fire source.

A series of flight trials were run April 22 to 25, 2010 in the vicinity of the city of Pembroke in the Ottawa Valley region of Eastern Ontario. Small test fires were set at known locations within the Ontario Ministry of Natural Resources (OMNR) infrared (IR) test grid and continuously monitored by remote data loggers. NVD flight detection patrols for an EC130 helicopter were planned in the region of the IR grid. The observers were the only members of the flight crew responsible for detecting fires and had no knowledge of the fire configuration or location. Each observer flew two detection patrols on separate nights with different configurations of sources.

The average detection distance for a fire across all nights was 3,584m (95%CI: 2,697m to 4,471m). The average discrimination distance, where a source could be confidently determined to be a fire or distracter, was 1,193m (95%CI: 944m to 1,442m). The hit rate was 68% over the course of the flight trials, higher than expectations based on the small fire sources and novice observers. The hit rate showed improvement over time, likely as observers became familiar with the task and terrain. There was only a single false alarm, when an observer falsely identified a non-fire target as a fire. Correct rejections were quite common (30 events), likely due to the relatively large number of environmental lights in the test area.

The results demonstrate that small fires can be detected and reliably discriminated using NVDs at night from distances compatible with typical daytime aerial detection patrols. The trials provide guidance on altitude and spacing requirements for detection patrols and for cues to discriminate environmental light sources from fires. Analysis of detection performance in ongoing field experiments will help to evaluate the utility of and determine best practices for NVD-aided detection of wildland fires.
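The confidence intervals quoted above are ordinary t-based intervals on a sample mean; as a sketch only (the distance values below are invented, not the report's trial data):

```python
# Sketch: a 95% t-based confidence interval for a mean detection distance.
# The sample values are invented; the report's raw data are not reproduced.
import numpy as np
from scipy import stats

distances_m = np.array([2100.0, 4800.0, 3300.0, 2900.0, 5100.0, 3600.0])
mean = distances_m.mean()
sem = stats.sem(distances_m)  # standard error of the mean
lo, hi = stats.t.interval(0.95, len(distances_m) - 1, loc=mean, scale=sem)
print(f'mean = {mean:.0f} m, 95% CI: {lo:.0f} m to {hi:.0f} m')
```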
2009 (5)
article (4)
Coarse-fine dichotomies in human stereopsis. Wilcox, L. M., & Allison, R. Vision Res, 49(22): 2653-65. 2009.
@article{allison20092653-65,\n\tabstract = {There is a long history of research into depth percepts from very large disparities, beyond the fusion limit. Such diplopic stimuli have repeatedly been shown to provide reliable depth percepts. A number of researchers have pointed to differences between the processing of small and large disparities, arguing that they are subserved by distinct neural mechanisms. Other studies have pointed to a dichotomy between the processing of 1st- and 2nd-order stimuli. Here we review literature on the full range of disparity processing to determine how well different proposed dichotomies map onto one another, and to identify unresolved issues.},\n\tauthor = {Wilcox, L. M. and Allison, R.S.},\n\tdate-modified = {2011-05-10 11:11:27 -0400},\n\tdoi = {10.1016/j.visres.2009.06.004},\n\tjournal = {Vision Res},\n\tkeywords = {Stereopsis},\n\tnumber = {22},\n\tpages = {2653-65},\n\ttitle = {Coarse-fine dichotomies in human stereopsis},\n\tvolume = {49},\n\tyear = {2009},\n\turl-1 = {https://doi.org/10.1016/j.visres.2009.06.004}}
There is a long history of research into depth percepts from very large disparities, beyond the fusion limit. Such diplopic stimuli have repeatedly been shown to provide reliable depth percepts. A number of researchers have pointed to differences between the processing of small and large disparities, arguing that they are subserved by distinct neural mechanisms. Other studies have pointed to a dichotomy between the processing of 1st- and 2nd-order stimuli. Here we review literature on the full range of disparity processing to determine how well different proposed dichotomies map onto one another, and to identify unresolved issues.
A reevaluation of the tolerance to vertical misalignment in stereopsis. Fukuda, K., Wilcox, L., Allison, R., & Howard, I. Journal of Vision, 9(2): 1-8. 2009.
@article{allison20091-8,\n\tabstract = {The stereoscopic system tolerates some vertical misalignment of the images in the eyes. However, the reported tolerance for an isolated line stimulus (~4 degrees) is greater than a random-dot stereogram (RDS, ~45 arcmin). We hypothesized that the greater tolerance can be attributed to monoptic depth signals (E. Hering, 1861; M. Kaye, 1978; L. M. Wilcox, J. M. Harris, & S. P. McKee, 2007). We manipulated the vertical misalignment of a pair of isolated stereoscopic dots to assess the contribution of each depth signal separately. For the monoptic stimuli, where only one half-image was present, equivalent horizontal and vertical offsets were imposed instead of disparity. Judgments of apparent depth were well above chance, though there was no conventional disparity signal. For the stereoscopic stimuli, one element was positioned at the midline where monoptic depth perception falls to chance but conventional disparity remains. Subjects lost the depth percept at a vertical misalignment of between 44 and 88 arcmin, which is much smaller than the limit found when both signals were provided. This tolerance for isolated stimuli is comparable to the reported tolerance for RDS. We conclude that previous reports of the greater tolerance to vertical misalignment for isolated stimuli arose from the use of monoptic depth signals.},\n\tauthor = {Fukuda, K. and Wilcox, L.M. and Allison, R.S. and Howard, I.P.},\n\tdate-modified = {2012-07-02 17:31:17 -0400},\n\tdoi = {10.1167/9.2.1},\n\tjournal = {Journal of Vision},\n\tkeywords = {Stereopsis},\n\tnumber = {2},\n\tpages = {1-8},\n\ttitle = {A reevaluation of the tolerance to vertical misalignment in stereopsis},\n\tvolume = {9},\n\tyear = {2009},\n\turl-1 = {https://doi.org/10.1167/9.2.1}}
The stereoscopic system tolerates some vertical misalignment of the images in the eyes. However, the reported tolerance for an isolated line stimulus (~4 degrees) is greater than a random-dot stereogram (RDS, ~45 arcmin). We hypothesized that the greater tolerance can be attributed to monoptic depth signals (E. Hering, 1861; M. Kaye, 1978; L. M. Wilcox, J. M. Harris, & S. P. McKee, 2007). We manipulated the vertical misalignment of a pair of isolated stereoscopic dots to assess the contribution of each depth signal separately. For the monoptic stimuli, where only one half-image was present, equivalent horizontal and vertical offsets were imposed instead of disparity. Judgments of apparent depth were well above chance, though there was no conventional disparity signal. For the stereoscopic stimuli, one element was positioned at the midline where monoptic depth perception falls to chance but conventional disparity remains. Subjects lost the depth percept at a vertical misalignment of between 44 and 88 arcmin, which is much smaller than the limit found when both signals were provided. This tolerance for isolated stimuli is comparable to the reported tolerance for RDS. We conclude that previous reports of the greater tolerance to vertical misalignment for isolated stimuli arose from the use of monoptic depth signals.
Binocular depth discrimination and estimation beyond interaction space. Allison, R., Gillam, B. J., & Vecellio, E. Journal of Vision, 9(1 Article 10): 1-14. 2009.
@article{allison20091-14,\n\tabstract = {The benefits of binocular vision have been debated throughout the history of vision science yet few studies have considered its contribution beyond a viewing distance of a few meters. In the first set of experiments, we compared monocular and binocular performance on depth interval estimation and discrimination tasks at 4.5, 9.0 or 18.0 m. Under monocular conditions, perceived depth was significantly compressed. Binocular depth estimates were much nearer to veridical although also compressed. Regression-based precision measures were much more precise for binocular compared to monocular conditions (ratios between 2.1 and 48). We confirm that stereopsis supports reliable depth discriminations beyond typical laboratory distances. Furthermore, binocular vision can significantly improve both the accuracy and precision of depth estimation to at least 18 m. In another experiment, we used a novel paradigm that allowed the presentation of real binocular disparity stimuli in the presence of rich environmental cues to distance but not interstimulus depth. We found that the presence of environmental cues to distance greatly enhanced stereoscopic depth constancy at distances of 4.5 and 9.0 m. We conclude that stereopsis is an effective cue for depth discrimination and estimation for distances beyond those traditionally assumed. In normal environments, distance information from other sources such as perspective can be effective in scaling depth from disparity.},\n\tauthor = {Allison, R.S. and Gillam, B. J. and Vecellio, E.},\n\tdate-modified = {2012-07-02 17:41:57 -0400},\n\tdoi = {10.1167/9.1.10},\n\tjournal = {Journal of Vision},\n\tkeywords = {Stereopsis},\n\tnumber = {1 Article 10},\n\tpages = {1-14},\n\ttitle = {Binocular depth discrimination and estimation beyond interaction space},\n\tvolume = {9},\n\tyear = {2009},\n\turl-1 = {https://doi.org/10.1167/9.1.10}}
The benefits of binocular vision have been debated throughout the history of vision science yet few studies have considered its contribution beyond a viewing distance of a few meters. In the first set of experiments, we compared monocular and binocular performance on depth interval estimation and discrimination tasks at 4.5, 9.0 or 18.0 m. Under monocular conditions, perceived depth was significantly compressed. Binocular depth estimates were much nearer to veridical although also compressed. Regression-based precision measures were much more precise for binocular compared to monocular conditions (ratios between 2.1 and 48). We confirm that stereopsis supports reliable depth discriminations beyond typical laboratory distances. Furthermore, binocular vision can significantly improve both the accuracy and precision of depth estimation to at least 18 m. In another experiment, we used a novel paradigm that allowed the presentation of real binocular disparity stimuli in the presence of rich environmental cues to distance but not interstimulus depth. We found that the presence of environmental cues to distance greatly enhanced stereoscopic depth constancy at distances of 4.5 and 9.0 m. We conclude that stereopsis is an effective cue for depth discrimination and estimation for distances beyond those traditionally assumed. In normal environments, distance information from other sources such as perspective can be effective in scaling depth from disparity.
Stereoscopic discrimination of the layout of ground surfaces. Allison, R., Gillam, B. J., & Palmisano, S. A. Journal of Vision, 9(12 Article 8): 1-11. 2009.
@article{allison20091-11,\n\tabstract = {Safe and effective locomotion depends critically on judgements of the surface properties of the ground to be traversed. Little is known about the role of binocular vision in surface perception at distances relevant to visually guided locomotion in humans. Programmable arrays of illuminated targets were used to present sparsely textured surfaces with real depth at distances of 4.5 and 9.0 m. Psychophysical measurements of discrimination thresholds demonstrated a clear superiority for stereoscopic over monocular judgments of relative and absolute surface slant. Judgements of surface roughness in particular demonstrated a substantial binocular advantage. Binocular vision is thus shown to directly contribute to judgements of the layout of terrain up to at least 4.5 m, and its smoothness to at least 9.0 m. Hence binocular vision could support moment-to-moment wayfinding and path planning, especially when monocular cues are weak.},\n\tauthor = {Allison, R.S. and Gillam, B. J. and Palmisano, S. A.},\n\tdate-modified = {2012-07-02 17:37:07 -0400},\n\tdoi = {10.1167/9.12.8},\n\tjournal = {Journal of Vision},\n\tkeywords = {Stereopsis},\n\tnumber = {12 Article 8},\n\tpages = {1-11},\n\ttitle = {Stereoscopic discrimination of the layout of ground surfaces},\n\tvolume = {9},\n\tyear = {2009},\n\turl-1 = {https://doi.org/10.1167/9.12.8}}
Safe and effective locomotion depends critically on judgements of the surface properties of the ground to be traversed. Little is known about the role of binocular vision in surface perception at distances relevant to visually guided locomotion in humans. Programmable arrays of illuminated targets were used to present sparsely textured surfaces with real depth at distances of 4.5 and 9.0 m. Psychophysical measurements of discrimination thresholds demonstrated a clear superiority for stereoscopic over monocular judgments of relative and absolute surface slant. Judgements of surface roughness in particular demonstrated a substantial binocular advantage. Binocular vision is thus shown to directly contribute to judgements of the layout of terrain up to at least 4.5 m, and its smoothness to at least 9.0 m. Hence binocular vision could support moment-to-moment wayfinding and path planning, especially when monocular cues are weak.
inbook (1)
Binocular Vision and Stereopsis. Wilcox, L., & Allison, R. In Goldstein, E. B., editor(s), Encyclopedia of Perception, pages 208-212. Sage Publications Inc, Thousand Oaks, CA, 2009.
@inbook{Wilcox:2009mn,\n\taddress = {Thousand Oaks, CA},\n\tauthor = {Wilcox, L.M. and Allison, R.S.},\n\tbooktitle = {Encyclopedia of Perception},\n\tdate-added = {2011-05-06 11:18:20 -0400},\n\tdate-modified = {2014-07-19 21:28:36 +0000},\n\teditor = {E. Bruce Goldstein},\n\tkeywords = {Stereopsis},\n\tpages = {208-212},\n\tpublisher = {Sage Publications Inc},\n\ttitle = {Binocular Vision and Stereopsis},\n\tyear = {2009}}\n\n
incollection (16)
Binocular depth interval estimation beyond interaction space. Govan, D., Gillam, B., Palmisano, S. A., Allison, R. S., & Harris, J. M. In 36th Australasian Experimental Psychology Conference, April 17-19, 2009.
@incollection{Govan:2009vn,\n\tauthor = {Govan, D. and Gillam, B. and Palmisano, S. A. and Allison, R. S. and Harris, J. M.},\n\tbooktitle = {36th Australasian Experimental Psychology Conference},\n\tdate-added = {2012-08-13 19:51:22 +0000},\n\tdate-modified = {2012-08-13 20:01:21 +0000},\n\tkeywords = {Stereopsis},\n\tmonth = {April 17-19, 2009},\n\ttitle = {Binocular depth interval estimation beyond interaction space},\n\tyear = {2009}}\n\n
Stereoscopic depth magnitudes at greater distances in an old steam railway tunnel. Gillam, B., Palmisano, S. A., Govan, D., Allison, R., & Harris, J. In 32nd European Conference on Visual Perception, volume 38 (Suppl.), Regensburg, Germany, August 2009.
@incollection{Gillam:2009wk,\n\taddress = {Regensburg, Germany},\n\tauthor = {Gillam, B. and Palmisano, S. A. and Govan, D. and Allison, R.S. and Harris, J.},\n\tbooktitle = {32nd European Conference on Visual Perception},\n\tdate-added = {2011-05-09 13:25:12 -0400},\n\tdate-modified = {2014-09-09 19:05:06 +0000},\n\tkeywords = {Stereopsis},\n\tmonth = {08},\n\tnumber = {Suppl},\n\tseries = {59},\n\ttitle = {Stereoscopic depth magnitudes at greater distances in an old steam railway tunnel},\n\turl-1 = {http://www.perceptionweb.com/abstract.cgi?id=v091008},\n\tvolume = {38},\n\tyear = {2009}}
Effect of Differential Interocular Blur on Depth Perception From Fine and Coarse Disparities. Smith, C., Wilcox, L., Allison, R., Karanovic, O., & Wilkinson, F. In CVR 2009: Vision in 3D Environments, BI-15. 2009.
@incollection{Smith:2009dd,\n\tauthor = {Smith, C.E. and Wilcox, L.M. and Allison, R.S. and Karanovic, O. and Wilkinson, F.},\n\tbooktitle = {CVR 2009: Vision in 3D Environments, BI-15},\n\tdate-added = {2011-05-09 11:10:36 -0400},\n\tdate-modified = {2011-05-18 15:48:57 -0400},\n\tkeywords = {Depth perception},\n\ttitle = {Effect of Differential Interocular Blur on Depth Perception From Fine and Coarse Disparities},\n\tyear = {2009}}\n\n
On the contribution of monoptic depth and binocular disparity to depth from diplopic images. Fukuda, K., Wilcox, L., Allison, R., & Howard, I. In CVR 2009: Vision in 3D Environments, BI-4. 2009.
@incollection{Fukuda:2009nb,\n\tauthor = {Fukuda, K. and Wilcox, L.M. and Allison, R.S. and Howard, I.P.},\n\tbooktitle = {CVR 2009: Vision in 3D Environments, BI-4},\n\tdate-added = {2011-05-09 11:09:19 -0400},\n\tdate-modified = {2011-05-18 16:08:38 -0400},\n\tkeywords = {Stereopsis},\n\ttitle = {On the contribution of monoptic depth and binocular disparity to depth from diplopic images},\n\tyear = {2009}}
The role of monocular occlusion in the construction of three-dimensional surfaces. Tsirlin, I., Wilcox, L., & Allison, R. In CVR 2009: Vision in 3D Environments, MO-15. 2009.
@incollection{Tsirlin:2009ya,\n\tauthor = {Tsirlin, I. and Wilcox, L.M. and Allison, R.S.},\n\tbooktitle = {CVR 2009: Vision in 3D Environments, MO-15},\n\tdate-added = {2011-05-09 11:03:59 -0400},\n\tdate-modified = {2011-05-18 16:21:27 -0400},\n\tkeywords = {Stereopsis},\n\ttitle = {The role of monocular occlusion in the construction of three-dimensional surfaces},\n\tyear = {2009}}\n\n
Evaluating Visual/Motor Coupling in Fish-Tank VR. Teather, R., Allison, R., & Stuerzlinger, W. In CVR 2009: Vision in 3D Environments, MU-3. 2009.
@incollection{Teather:2009ug,\n\tauthor = {Teather, R. and Allison, R.S. and Stuerzlinger, W.},\n\tbooktitle = {CVR 2009: Vision in 3D Environments, MU-3},\n\tdate-added = {2011-05-09 11:01:55 -0400},\n\tdate-modified = {2011-05-18 15:54:50 -0400},\n\tkeywords = {Augmented & Virtual Reality},\n\ttitle = {Evaluating Visual/Motor Coupling in Fish-Tank VR},\n\turl-1 = {http://www.cse.yorku.ca/~wolfgang/papers/colocateddisjoint.pdf},\n\tyear = {2009}}\n\n
The impact of a limited field of view on active search and spatial memory. Guterman, P., Allison, R., Jennings, S., Craig, G., Parush, A., Gauthier, M., & Macuda, T. In CVR 2009: Vision in 3D Environments, TI-8. 2009.
@incollection{Guterman:2009er,\n\tauthor = {Guterman, P. and Allison, R.S. and Jennings, S. and Craig, G. and Parush, A. and Gauthier, M. and Macuda, T.},\n\tbooktitle = {CVR 2009: Vision in 3D Environments, TI-8},\n\tdate-added = {2011-05-09 10:58:01 -0400},\n\tdate-modified = {2011-05-18 16:20:29 -0400},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\ttitle = {The impact of a limited field of view on active search and spatial memory},\n\tyear = {2009}}\n\n
Efficacy and User Acceptance of Computer Gaming Paradigms for Vision Training. Herriot, C., Irving, E., Carvelho, T., & Allison, R. In Association for Research in Vision and Ophthalmology (ARVO) Annual Meeting, volume 50, May 3rd-7th, 2009.
@incollection{Herriot:2009qz,\n\tabstract = {Introduction: Convergence insufficiency is a condition in which the eyes are unable to make coordinated convergence movements to near objects. It is a common condition with a prevalence as high as 17.6\% reported in clinical settings (Rouse et al., 2008). Patients with symptoms of headaches and diplopia are often prescribed eye exercises to train their oculomotor coordination; however traditional forms of the exercises are often tedious, leading to poor patient compliance (Gallaway et al., 2002). The purpose of this study was to investigate the efficacy and user acceptance of game-based vision training in comparison to traditional methods of vision training for treatment of convergence insufficiency.\n\nMethods: Twelve participants with convergence insufficiency and six without were asked to play a three-dimensional version of Pac-Man using a stereoscope to fuse two separate images. As a participant improved, the convergence demand increased and operant conditioning paradigms were used to keep the participant motivated. Three other participants with convergence insufficiency were asked to perform fusional vergence training with vectograms and pencil push-ups. Training lasted two weeks, with measurements of binocularity taken at the initial, 1-week, and final appointments. Participants completed a visual symptom questionnaire prior to their training and both a visual symptom questionnaire and an acceptance questionnaire after completion.\n\nResults: Both groups with convergence insufficiency had similar improvement in near point of convergence, positive fusional vergence, and reports of eye strain; however, participants in the game-based vision training group were more likely to rate the training as fun and motivating than participants assigned to traditional vision training. The group without convergence insufficiency showed little improvement in near point of convergence and positive fusional vergence but also reported that game-based vision training was fun and motivating.\n\nConclusion: Computer gaming based vision therapy is more stimulating than traditional methods of vision training. We expect this will translate into greater compliance and improved outcome for patients with convergence insufficiency.\n\nKeywords: clinical (human) or epidemiologic studies: treatment/prevention assessment/controlled clinical trials * binocular vision/stereopsis},\n\tauthor = {Herriot, C.G. and Irving, E.L. and Carvelho, T. and Allison, R.S.},\n\tbooktitle = {Association for Research in Vision and Ophthalmology (ARVO) Annual Meeting},\n\tdate-added = {2011-05-06 15:51:30 -0400},\n\tdate-modified = {2014-02-03 14:43:29 +0000},\n\tkeywords = {Gaming for Vision Therapy},\n\tmonth = {May 3rd-7th},\n\torganization = {Association for Research in Vision and Ophthalmology},\n\ttitle = {Efficacy and User Acceptance of Computer Gaming Paradigms for Vision Training},\n\turl-1 = {http://abstracts.iovs.org//cgi/content/abstract/50/5/3827?sid=dcb140e7-233c-4631-8dd7-b4638b854af6},\n\tvolume = {50},\n\tyear = {2009}}
Introduction: Convergence insufficiency is a condition in which the eyes are unable to make coordinated convergence movements to near objects. It is a common condition with a prevalence as high as 17.6% reported in clinical settings (Rouse et al., 2008). Patients with symptoms of headaches and diplopia are often prescribed eye exercises to train their oculomotor coordination; however traditional forms of the exercises are often tedious, leading to poor patient compliance (Gallaway et al., 2002). The purpose of this study was to investigate the efficacy and user acceptance of game-based vision training in comparison to traditional methods of vision training for treatment of convergence insufficiency.

Methods: Twelve participants with convergence insufficiency and six without were asked to play a three-dimensional version of Pac-Man using a stereoscope to fuse two separate images. As a participant improved, the convergence demand increased and operant conditioning paradigms were used to keep the participant motivated. Three other participants with convergence insufficiency were asked to perform fusional vergence training with vectograms and pencil push-ups. Training lasted two weeks, with measurements of binocularity taken at the initial, 1-week, and final appointments. Participants completed a visual symptom questionnaire prior to their training and both a visual symptom questionnaire and an acceptance questionnaire after completion.

Results: Both groups with convergence insufficiency had similar improvement in near point of convergence, positive fusional vergence, and reports of eye strain; however, participants in the game-based vision training group were more likely to rate the training as fun and motivating than participants assigned to traditional vision training. The group without convergence insufficiency showed little improvement in near point of convergence and positive fusional vergence but also reported that game-based vision training was fun and motivating.

Conclusion: Computer gaming based vision therapy is more stimulating than traditional methods of vision training. We expect this will translate into greater compliance and improved outcome for patients with convergence insufficiency.

Keywords: clinical (human) or epidemiologic studies: treatment/prevention assessment/controlled clinical trials * binocular vision/stereopsis
Monovision: Consequences for Depth Perception From Fine and Coarse Disparities. Smith, C., Wilcox, L., Allison, R., Karanovic, O., & Wilkinson, F. In Association for Research in Vision and Ophthalmology (ARVO) Annual Meeting, volume 50, May 3rd-7th, 2009.
@incollection{Smith:2009uz,\n\tabstract = {Purpose: Traditionally presbyopia is treated using corrective bifocal or multifocal lenses. An alternative is to correct one eye for near and the other for distance with a method known as ``monovision''. It is known that differential interocular blur can degrade stereoacuity, and recent studies have confirmed that monovision treatment increases stereoacuity thresholds. However, stereoacuity tests do not assess disparity sensitivity in the coarse range. Given the proven link between stereopsis and stability, we have measured the short-term effects of induced monovision on stereopsis over a broad range of fine (fused) and coarse (diplopic) disparities at both near and far viewing distances.\n\nMethods: Stimuli were presented dichoptically using a time-sequential polarized stereoscopic display. During each trial a line was presented for 300 ms with either crossed or uncrossed disparity above a zero disparity fixation cross. Participants indicated the direction of the depth offset. In one session baseline performance was assessed with optimal optical correction. In another, monovision was induced by adding -1D and +1D lenses in front of the dominant and non-dominant eyes respectively. We assessed performance at distances of 62 and 300 cm in counterbalanced blocks. Within each block, the stimuli were presented at 5 fine disparities ranging from 60 to 2400 arcsec and 5 coarse disparities ranging from 1$^{\circ}$ to 3.5$^{\circ}$.\n\nResults: Induced monovision resulted in decreased accuracy relative to baseline in the fine disparity range, but effects were minimal in the coarse range. Monovision also had a larger impact on performance at a viewing distance of 300 cm than at 62 cm.\n\nConclusions: Induced monovision not only increases stereoacuity thresholds, but degrades depth discrimination across the range of fusable disparities in young observers. This effect on fine disparity is accentuated at larger viewing distances typical of fixation distances during walking, suggesting that stability during locomotion may be degraded. However, we also found that coarse stereopsis was relatively spared, and this may offset the observed losses.\n\nKeywords: binocular vision/stereopsis * presbyopia},\n\tauthor = {Smith, C.E. and Wilcox, L.M. and Allison, R.S. and Karanovic, O. and Wilkinson, F.},\n\tbooktitle = {Association for Research in Vision and Ophthalmology (ARVO) Annual Meeting},\n\tdate-added = {2011-05-06 15:12:38 -0400},\n\tdate-modified = {2014-02-03 14:44:57 +0000},\n\tkeywords = {Stereopsis},\n\tmonth = {May 3rd-7th},\n\torganization = {Association for Research in Vision and Ophthalmology},\n\ttitle = {Monovision: Consequences for Depth Perception From Fine and Coarse Disparities},\n\turl-1 = {http://abstracts.iovs.org//cgi/content/abstract/50/5/2887?sid=be049216-08e0-4c83-834b-c1c973f7dca9},\n\tvolume = {50},\n\tyear = {2009}}
Purpose: Traditionally presbyopia is treated using corrective bifocal or multifocal lenses. An alternative is to correct one eye for near and the other for distance with a method known as ``monovision''. It is known that differential interocular blur can degrade stereoacuity, and recent studies have confirmed that monovision treatment increases stereoacuity thresholds. However, stereoacuity tests do not assess disparity sensitivity in the coarse range. Given the proven link between stereopsis and stability, we have measured the short-term effects of induced monovision on stereopsis over a broad range of fine (fused) and coarse (diplopic) disparities at both near and far viewing distances.

Methods: Stimuli were presented dichoptically using a time-sequential polarized stereoscopic display. During each trial a line was presented for 300 ms with either crossed or uncrossed disparity above a zero disparity fixation cross. Participants indicated the direction of the depth offset. In one session baseline performance was assessed with optimal optical correction. In another, monovision was induced by adding -1D and +1D lenses in front of the dominant and non-dominant eyes respectively. We assessed performance at distances of 62 and 300 cm in counterbalanced blocks. Within each block, the stimuli were presented at 5 fine disparities ranging from 60 to 2400 arcsec and 5 coarse disparities ranging from 1° to 3.5°.

Results: Induced monovision resulted in decreased accuracy relative to baseline in the fine disparity range, but effects were minimal in the coarse range. Monovision also had a larger impact on performance at a viewing distance of 300 cm than at 62 cm.

Conclusions: Induced monovision not only increases stereoacuity thresholds, but degrades depth discrimination across the range of fusable disparities in young observers. This effect on fine disparity is accentuated at larger viewing distances typical of fixation distances during walking, suggesting that stability during locomotion may be degraded. However, we also found that coarse stereopsis was relatively spared, and this may offset the observed losses.

Keywords: binocular vision/stereopsis * presbyopia
Effects of starting height, lighting and runway length on glideslope control and landing quality. Ash, A., Palmisano, S., Kim, J., & Allison, R. In Combined Abstracts of 2009 Australian Psychology Conferences: The abstracts of the 36th Australasian Experimental Psychology Conference, pages 3-4, Melbourne, Australia, 2009.
@incollection{Ash:2009km,\n\tabstract = {We examined the effects of starting altitude, scene lighting and runway length on glideslope control and touchdown during simulated flight. Glideslope misperception is common during aircraft landings, especially when visibility is reduced. It is therefore important to measure the glideslope control errors generated by such misperceptions and determine whether they can be adequately compensated for. Fixed-wing aircraft landings were simulated under day or night lighting conditions, with pilots starting their final approach either ``too high'', ``too low'' or already on the desired 3 degree glideslope. Eleven private and six student pilots actively controlled these simulated landings until they touched down on one of two runways (either 30 m x 1331 m or 30 m x 1819 m). Both student and private pilots were poor at compensating for approaches that started ``too high'' or ``too low'', particularly at night. However, they were able to adjust for these glideslope control errors prior to touchdown via the proper and appropriate execution of the landing flare. While private pilots were no more accurate than students during the glideslope control phase, they typically executed the safest and smoothest landings. Application: This study suggests that flight simulation could be useful in training student pilots to carry out safe landings via the appropriate execution of the landing flare.},\n\taddress = {Melbourne, Australia},\n\tauthor = {Ash, A. and Palmisano, S. and Kim, J. and Allison, R.},\n\tbooktitle = {Combined Abstracts of 2009 Australian Psychology Conferences: The abstracts of the 36th Australasian Experimental Psychology Conference},\n\tdate-added = {2011-05-06 15:05:24 -0400},\n\tdate-modified = {2011-05-22 13:36:45 -0400},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\torganization = {The Australian Psychological Society},\n\tpages = {3-4},\n\ttitle = {Effects of starting height, lighting and runway length on glideslope control and landing quality},\n\turl-1 = {https://misprd.uow.edu.au/ris_public/WebObjects/RISPublic.woa/wo/2.0.12.1.13.3.3.1;jsessionid=7AACA9D9670B94B7BD8D879386367DF0},\n\tyear = {2009}}
We examined the effects of starting altitude, scene lighting and runway length on glideslope control and touchdown during simulated flight. Glideslope misperception is common during aircraft landings, especially when visibility is reduced. It is therefore important to measure the glideslope control errors generated by such misperceptions and determine whether they can be adequately compensated for. Fixed-wing aircraft landings were simulated under day or night lighting conditions, with pilots starting their final approach either ``too high'', ``too low'' or already on the desired 3 degree glideslope. Eleven private and six student pilots actively controlled these simulated landings until they touched down on one of two runways (either 30 m x 1331 m or 30 m x 1819 m). Both student and private pilots were poor at compensating for approaches that started ``too high'' or ``too low'', particularly at night. However, they were able to adjust for these glideslope control errors prior to touchdown via the proper and appropriate execution of the landing flare. While private pilots were no more accurate than students during the glideslope control phase, they typically executed the safest and smoothest landings. Application: This study suggests that flight simulation could be useful in training student pilots to carry out safe landings via the appropriate execution of the landing flare.
Where do pilots look when they land? Palmisano, S. A., Kim, J., Ash, A., & Allison, R. In Combined abstracts of 2009 Australian psychology conferences: The abstracts of the 36th Australasian Experimental Psychology Conference, page 43, Melbourne, 2009.
@incollection{Palmisano:2009bl,\n\taddress = {Melbourne},\n\tauthor = {Palmisano, S. A. and Kim, J. and Ash, A. and Allison, R.},\n\tbooktitle = {Combined abstracts of 2009 Australian psychology conferences: The abstracts of the 36th Australasian Experimental Psychology Conference},\n\tdate-added = {2011-05-06 15:00:44 -0400},\n\tdate-modified = {2011-05-18 16:32:02 -0400},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\torganization = {Australian Psychological Society},\n\tpages = {43},\n\ttitle = {Where do pilots look when they land?},\n\turl-1 = {https://misprd.uow.edu.au/ris_public/WebObjects/RISPublic.woa/wo/2.0.12.1.13.1.3.1;jsessionid=7AACA9D9670B94B7BD8D879386367DF0},\n\tyear = {2009}}\n\n
Natural and Enhanced Visual Perception During Flight. Allison, R. In American Psychological Association 117th Annual Convention, Toronto, Canada, August 6th-9th, 2009.
@incollection{Allison:2009ay,\n\taddress = {Toronto, Canada},\n\tauthor = {Allison, R.S.},\n\tbooktitle = {American Psychological Association 117th Annual Convention},\n\tdate-added = {2011-05-06 14:58:04 -0400},\n\tdate-modified = {2011-05-11 13:09:39 -0400},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tmonth = {August 6th-9th},\n\ttitle = {Natural and Enhanced Visual Perception During Flight},\n\tyear = {2009}}\n\n
Perceptual asymmetry in stereo-transparency: The role of disparity interpolation. Wilcox, L. M., Tsirlin, I., & Allison, R. S. In Journal of Vision, volume 9, pages 286-286. 2009.
@incollection{allison2009286-286,\n\tabstract = {We have previously described a perceptual asymmetry that occurs when viewing pseudo-transparent random element stereograms. That is, the minimum separation in depth needed to segregate two overlaid surfaces in a random-element stereogram depends on the distribution of elements across the surfaces. With the total element density fixed, significantly larger inter-plane disparities are required for perceptual segregation of overlaid surfaces when the front surface has fewer elements than the back surface than vice versa. In the experiments described here we test the hypothesis that this perceptual asymmetry reflects a fundamental difference in signal strength for the front and back surfaces which results from disparity interpolation. That is, we propose that the blank regions between elements are assigned to the back plane, making it appear opaque. We tested this hypothesis in a series of experiments and find that: i) the total element density in the stimulus does not affect the asymmetry; ii) the perceived relative density of the two surfaces shows a similar asymmetry; iii) manipulations favouring perceptual assignment of the spaces into surfaces other than the two overlaid element surfaces reduce the asymmetry. We propose that the interpolation of the spaces between the elements defining the surfaces is mediated by a network of inter-neural connections; excitatory within-disparity, and inhibitory across disparity. Our data suggest that the strength of the inhibitory connections is modulated according to mid-level figure ground assignment. We are using our psychophysical results to inform the development of a computational model of this network.},\n\tauthor = {Wilcox, Laurie M. and Tsirlin, Inna and Allison, Robert S.},\n\tbooktitle = {Journal of Vision},\n\tdate-modified = {2012-07-02 19:02:52 -0400},\n\tdoi = {10.1167/9.8.286},\n\tjournal = {Journal of Vision},\n\tkeywords = {Stereopsis},\n\tnumber = {8},\n\tpages = {286-286},\n\ttitle = {Perceptual asymmetry in stereo-transparency: The role of disparity interpolation},\n\tvolume = {9},\n\tyear = {2009},\n\turl-1 = {https://doi.org/10.1167/9.8.286}}
We have previously described a perceptual asymmetry that occurs when viewing pseudo-transparent random element stereograms. That is, the minimum separation in depth needed to segregate two overlaid surfaces in a random-element stereogram depends on the distribution of elements across the surfaces. With the total element density fixed, significantly larger inter-plane disparities are required for perceptual segregation of overlaid surfaces when the front surface has fewer elements than the back surface than vice versa. In the experiments described here we test the hypothesis that this perceptual asymmetry reflects a fundamental difference in signal strength for the front and back surfaces which results from disparity interpolation. That is, we propose that the blank regions between elements are assigned to the back plane, making it appear opaque. We tested this hypothesis in a series of experiments and find that: i) the total element density in the stimulus does not affect the asymmetry; ii) the perceived relative density of the two surfaces shows a similar asymmetry; iii) manipulations favouring perceptual assignment of the spaces into surfaces other than the two overlaid element surfaces reduce the asymmetry. We propose that the interpolation of the spaces between the elements defining the surfaces is mediated by a network of inter-neural connections; excitatory within-disparity, and inhibitory across disparity. Our data suggest that the strength of the inhibitory connections is modulated according to mid-level figure ground assignment. We are using our psychophysical results to inform the development of a computational model of this network.
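The abstract describes, in words, a network that is excitatory within a disparity plane and inhibitory across planes. The toy sketch below is this editor's construction, not the authors' computational model; it only makes the across-plane competition concrete:

```python
# Toy two-plane competition (not the authors' model): each disparity
# plane is driven by its own element density and inhibited by the other.
import numpy as np

def settle(front_density, back_density, w_inh=0.6, steps=50):
    """Iterate a two-unit excitation/inhibition pair to equilibrium."""
    drive = np.array([front_density, back_density], dtype=float)
    a = drive.copy()
    for _ in range(steps):
        a[0] = max(0.0, drive[0] - w_inh * a[1])  # front: drive minus inhibition
        a[1] = max(0.0, drive[1] - w_inh * a[0])  # back: drive minus inhibition
    return a

print(settle(0.2, 0.8))  # sparse front vs. dense back: front is suppressed
print(settle(0.8, 0.2))  # dense front vs. sparse back: front survives
```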
Identifying discontinuities in depth: A role for monocular occlusions. Tsirlin, I., Wilcox, L., & Allison, R. In Journal of Vision, volume 9, pages 277-277. 2009.
@incollection{allison2009277-277,\n\tabstract = {It is well established that monocular regions arising from occlusion of one object by another contribute to stereoscopic depth perception. However, the exact role of monocular occlusions in 3D scene perception remains unclear. One possibility is that monocular occlusions define object boundaries or discontinuities in depth. This is an attractive possibility, but to date it has not been tested empirically. Here we describe a series of experiments that directly test this hypothesis. Our novel stereoscopic stimulus consists of a foreground rectangular region set against a random-dot background positioned at zero disparity. One side of the foreground region is filled with a random-dot texture shifted towards the observer in apparent depth. The remaining area of the foreground is blank and carries no disparity information. In several experiments, we vary the presence or absence and the width of occluded areas at the border of the central blank area and the background texture. Our data show that the presence of occluded elements on the boundary of the blank area dramatically influences the perceived shape of the foreground region. If there are no occluded elements, the foreground appears to contain a depth step, as the blank area lies at the depth of the zero disparity border. When occluded elements are added, the blank region is seen vividly at the same depth as the texture, so that the foreground is perceived as a single opaque planar surface. We show that the depth perceived via occlusion is not due to the presence of binocular disparity at the boundary, and that it is qualitative, not quantitative in nature. Taken together, our experiments provide strong support for the hypothesis that monocular occlusion zones are important signals for the presence and location of depth discontinuities.},\n\tauthor = {Tsirlin, Inna and Wilcox, Laurie and Allison, Robert},\n\tbooktitle = {Journal of Vision},\n\tdate-modified = {2012-07-02 19:02:04 -0400},\n\tdoi = {10.1167/9.8.277},\n\tjournal = {Journal of Vision},\n\tkeywords = {Stereopsis},\n\tnumber = {8},\n\tpages = {277-277},\n\ttitle = {Identifying discontinuities in depth: A role for monocular occlusions},\n\tvolume = {9},\n\tyear = {2009},\n\turl-1 = {https://doi.org/10.1167/9.8.277}}
It is well established that monocular regions arising from occlusion of one object by another contribute to stereoscopic depth perception. However, the exact role of monocular occlusions in 3D scene perception remains unclear. One possibility is that monocular occlusions define object boundaries or discontinuities in depth. This is an attractive possibility, but to date it has not been tested empirically. Here we describe a series of experiments that directly test this hypothesis. Our novel stereoscopic stimulus consists of a foreground rectangular region set against a random-dot background positioned at zero disparity. One side of the foreground region is filled with a random-dot texture shifted towards the observer in apparent depth. The remaining area of the foreground is blank and carries no disparity information. In several experiments, we vary the presence or absence and the width of occluded areas at the border of the central blank area and the background texture. Our data show that the presence of occluded elements on the boundary of the blank area dramatically influences the perceived shape of the foreground region. If there are no occluded elements, the foreground appears to contain a depth step, as the blank area lies at the depth of the zero disparity border. When occluded elements are added, the blank region is seen vividly at the same depth as the texture, so that the foreground is perceived as a single opaque planar surface. We show that the depth perceived via occlusion is not due to the presence of binocular disparity at the boundary, and that it is qualitative, not quantitative in nature. Taken together, our experiments provide strong support for the hypothesis that monocular occlusion zones are important signals for the presence and location of depth discontinuities.
The outer limits: How limiting the field of view impacts navigation and spatial memory. Guterman, P. S., Allison, R. S., Jennings, S., Craig, G., Parush, A., Gauthier, M., & Macuda, T. In Journal of Vision, volume 9, pages 1137-1137. 2009.
@incollection{allison20091137-1137,\n\tabstract = {Many optical devices limit the amount of the visual field that can be seen at any one time. Here we examine how these limits on Field of View (FoV) impair the ability to integrate visual information and make navigational decisions. Participants wore field-restricting goggles with separate groups fitted with either a $40^{\circ}$ or $90^{\circ}$ horizontal FoV. Subjects actively explored a maze-like environment over the course of 12 search trials. For each search trial, subjects were given a specific target and asked to find it as quickly as possible. The time and path walked to the targets were recorded on paper. Between each trial subjects were blindfolded and led to a new location in the environment. After the search trials, they completed a set of spatial memory tasks that included sketching a map of the search area, and judging the relative direction of and distances between objects. Search performance was measured by average walking speed, which was determined by dividing the path length by the search time for each trial. Participants with the narrower FoV walked significantly slower to the targets, but they increased their speed over time. Independent raters, who judged the sketch maps on layout, scale, and geometry showed a significant preference for the maps of the wide FoV group over the narrow FoV group. However, there was no effect of FoV for the relative direction and distance estimation task indicating a limited impact on the memory of locations of objects in the environment. In contrast, the results suggest that FoV restriction has a significant impact on the spatial representation of the layout of one's environment that needs to be considered in the design and use of devices that augment or enhance vision.},\n\tauthor = {Guterman, Pearl S. and Allison, Robert S. and Jennings, Sion and Craig, Greg and Parush, Avi and Gauthier, Michelle and Macuda, Todd},\n\tbooktitle = {Journal of Vision},\n\tdate-modified = {2012-07-02 17:52:04 -0400},\n\tdoi = {10.1167/9.8.1137},\n\tjournal = {Journal of Vision},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tnumber = {8},\n\tpages = {1137-1137},\n\ttitle = {The outer limits: How limiting the field of view impacts navigation and spatial memory},\n\tvolume = {9},\n\tyear = {2009},\n\turl-1 = {https://doi.org/10.1167/9.8.1137}}
Many optical devices limit the amount of the visual field that can be seen at any one time. Here we examine how these limits on Field of View (FoV) impair the ability to integrate visual information and make navigational decisions. Participants wore field-restricting goggles with separate groups fitted with either a 40° or 90° horizontal FoV. Subjects actively explored a maze-like environment over the course of 12 search trials. For each search trial, subjects were given a specific target and asked to find it as quickly as possible. The time and path walked to the targets were recorded on paper. Between each trial subjects were blindfolded and led to a new location in the environment. After the search trials, they completed a set of spatial memory tasks that included sketching a map of the search area, and judging the relative direction of and distances between objects. Search performance was measured by average walking speed, which was determined by dividing the path length by the search time for each trial. Participants with the narrower FoV walked significantly slower to the targets, but they increased their speed over time. Independent raters, who judged the sketch maps on layout, scale, and geometry showed a significant preference for the maps of the wide FoV group over the narrow FoV group. However, there was no effect of FoV for the relative direction and distance estimation task indicating a limited impact on the memory of locations of objects in the environment. In contrast, the results suggest that FoV restriction has a significant impact on the spatial representation of the layout of one's environment that needs to be considered in the design and use of devices that augment or enhance vision.
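The abstract's search-performance measure is simple to state in code; a sketch with invented trial values:

```python
# Sketch: average walking speed = path length / search time, per trial,
# as defined in the abstract. The trial values below are invented.
path_m = [12.0, 8.5, 15.2, 9.8]    # walked path length per search trial (m)
time_s = [30.0, 17.0, 40.0, 21.0]  # search time per trial (s)

speeds = [p / t for p, t in zip(path_m, time_s)]
mean_speed = sum(speeds) / len(speeds)
print(f'per-trial speeds (m/s): {[round(s, 2) for s in speeds]}')
print(f'mean walking speed: {mean_speed:.2f} m/s')
```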
\n\n\n
\n\n\n
Contributions of vergence, looming, and relative disparity to the perception of motion in depth. Fukuda, K., Howard, I. P., & Allison, R. S. In Journal of Vision, volume 9, pages 631-631. 2009.
@incollection{allison2009631-631,\n\tabstract = {It is known that modulations of absolute binocular disparity of a textured surface do not create a sensation of motion in depth (MID) when the image does not change size (loom). We reported previously that modulations of disparity do create some MID in a surface containing a radial pattern that lacks a looming signal when it moves in depth. We have built an instrument that allows us to independently control looming, changing absolute disparity (vergence), and changing relative disparity of surfaces actually moving in depth. A textured surface and a surface with a radial pattern moved back and forth in depth between 40 cm and 70 cm. With monocular viewing, looming created MID of the textured display but not of the radial display. Modulation of absolute disparity (vergence) produced no MID of the textured display but some MID of the radial display. When modulation of absolute disparity was increased relative to looming, MID was increased for both displays. When disparity modulation and looming were in conflict, MID decreased for both stimuli. These results indicate cue summation. Superimposition of a stationary reference stimulus that provided changing relative disparity, generally increased MID for both stimuli. Addition of the reference stimulus to the radial display with reversed vergence produced MID in accordance with the vergence signal. Addition of the reference stimulus to the patterned display with vergence reversed relative to looming, produced a paradoxical effect. The textured display appeared to move simultaneously in opposite directions. When it appeared to move forward relative to the observer, it appeared to move backward relative to the stationary reference stimulus. This indicates strong cue dissociation. We will demonstrate this unique paradoxical effect.},\n\tauthor = {Fukuda, Kazuho and Howard, Ian P. and Allison, Robert S.},\n\tbooktitle = {Journal of Vision},\n\tdate-modified = {2012-07-02 17:51:29 -0400},\n\tdoi = {10.1167/9.8.631},\n\tjournal = {Journal of Vision},\n\tkeywords = {Motion in depth},\n\tnumber = {8},\n\tpages = {631-631},\n\ttitle = {Contributions of vergence, looming, and relative disparity to the perception of motion in depth},\n\tvolume = {9},\n\tyear = {2009},\n\turl-1 = {https://doi.org/10.1167/9.8.631}}\n\n
inproceedings (4)
Glideslope perception during aircraft landing. Murray, R., Allison, R., & Palmisano, S. A. In Leigh, E., editor, SimTect 2009 Conference Proceedings, pages 87-91, Lindfield, Australia, 2009.
@inproceedings{Murray:2009jf,\n\taddress = {Lindfield, Australia},\n\tauthor = {Murray, R. and Allison, R.S. and Palmisano, S. A.},\n\tbooktitle = {SimTect 2009 Conference Proceedings},\n\tdate-added = {2011-05-06 13:08:13 -0400},\n\tdate-modified = {2011-05-18 15:57:52 -0400},\n\teditor = {E. Leigh},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tpages = {87-91},\n\tread = {0},\n\ttitle = {Glideslope perception during aircraft landing},\n\turl-1 = {https://percept.eecs.yorku.ca/papers/murray 2009 simtec.pdf},\n\tyear = {2009}}\n\n
The application of circular statistics to psychophysical research. Guterman, P., Allison, R., & McCague, H. In Proceedings of the 25th Annual Meeting of the International Society for Psychophysics, Galway, Ireland, October 21st-24th 2009.
@inproceedings{Guterman:2009ov,\n\taddress = {Galway, Ireland},\n\tauthor = {Guterman, P. and Allison, R.S. and McCague, H.},\n\tbooktitle = {Proceedings of the 25th Annual Meeting of the International Society for Psychophysics},\n\tdate-added = {2011-05-06 12:45:21 -0400},\n\tdate-modified = {2011-05-18 16:17:55 -0400},\n\tkeywords = {Misc.},\n\tmonth = {October 21st-24th},\n\ttitle = {The application of circular statistics to psychophysical research},\n\turl-1 = {http://yorku.academia.edu/PearlGuterman/Papers/159955/The_application_of_circular_statistics_to_psychophysical_research},\n\tyear = {2009}}\n\n
Comparing Coupled and Decoupled Input/Display Spaces in Fish-Tank VR. Teather, R., Allison, R., & Stuerzlinger, W. In IEEE Toronto International Conference - Science and Technology for Humanity, pages 624-629, Toronto, Canada, 2009. IEEE.
@inproceedings{allison2009624-629,\n\taddress = {Toronto, Canada},\n\tauthor = {Teather, R. and Allison, R.S. and Stuerzlinger, W.},\n\tbooktitle = {{IEEE} Toronto International Conference - Science and Technology for Humanity},\n\tdate-modified = {2011-05-11 13:23:56 -0400},\n\tdoi = {10.1109/TIC-STH.2009.5444423},\n\tkeywords = {Augmented & Virtual Reality},\n\tpages = {624-629},\n\tpublisher = {IEEE},\n\ttitle = {Comparing Coupled and Decoupled Input/Display Spaces in Fish-Tank VR},\n\tyear = {2009},\n\turl-1 = {https://doi.org/10.1109/TIC-STH.2009.5444423}}\n\n
Probability Grid Mapping System for Aerial Search. Shabaneh, M., Merei, A., Jennings, S., & Allison, R. In IEEE TIC-STH 09: 2009 IEEE Toronto International Conference: Science and Technology for Humanity, pages 521-526, New York, 2009. IEEE.
@inproceedings{allison2009521-526,\n\tabstract = {Aerial search for targets on the ground is a challenging task and success depends on providing proper intelligence to the searchers. Recent advances in avionics enhanced and synthetic vision systems (ESVS) offer new opportunities to present this information to aircrew. This paper describes the concept and implementation of a new ESVS technique intended to support flight crews in aerial search for search and rescue missions and other guided search scenarios. Most enhanced vision systems for aviation have targeted the pilot in order to support flight and navigation tasks. The Probability Grid Mapping system (PGM) is unique in that it aims to improve the effectiveness of the other officer in the aircraft who is managing and performing the tactical mission. The PGM provides the searcher with an augmented, conformal, digital moving map of the search area that encodes the estimated probability of the target being found in various locations. A priori estimation of these probabilities allows for prioritization of search areas, reduces search duplication and improves coverage and ideally maximizes search effectiveness. The conformal 3D map is displayed with appropriate perspective projection using a head-slaved optical see-through head-mounted display allowing it to be registered with and augment the real world. To evaluate the system prior to flight test, a simulation environment was developed for study of the effectiveness of highlighting methods, update strategies, and probability mapping methods.},\n\taddress = {New York},\n\tauthor = {Shabaneh, M. and Merei, A. and Jennings, S. and Allison, R.S.},\n\tbooktitle = {IEEE TIC-STH 09: 2009 IEEE Toronto International Conference: Science and Technology for Humanity},\n\tdate-modified = {2011-09-12 22:44:49 -0400},\n\tdoi = {10.1109/TIC-STH.2009.5444443},\n\tkeywords = {Augmented & Virtual Reality},\n\tpages = {521-526},\n\tpublisher = {IEEE},\n\ttitle = {Probability Grid Mapping System for Aerial Search},\n\tyear = {2009},\n\turl-1 = {https://doi.org/10.1109/TIC-STH.2009.5444443}}\n\n
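The core PGM idea in the abstract above, a grid of a priori target probabilities used to prioritize search areas, reduces to a small amount of code. The sketch below is a minimal illustration with a hypothetical grid; it is not the authors' implementation.

```python
# Minimal probability-grid-map sketch: rank search cells by a priori target
# probability, as the PGM abstract describes. Grid values are hypothetical.
import numpy as np

prob = np.array([
    [0.01, 0.02, 0.05, 0.02],
    [0.03, 0.10, 0.20, 0.04],
    [0.02, 0.08, 0.25, 0.06],
    [0.01, 0.02, 0.05, 0.04],
])
prob = prob / prob.sum()   # normalize so the cell probabilities sum to 1

# Visit the most probable cells first: prioritizes likely areas and
# avoids duplicated coverage of low-probability terrain.
flat_order = np.argsort(prob, axis=None)[::-1]
rows, cols = np.unravel_index(flat_order, prob.shape)
for rank, (r, c) in enumerate(zip(rows, cols), start=1):
    if rank > 5:
        break
    print(f"{rank}. search cell ({r}, {c}), p = {prob[r, c]:.3f}")
```

In the real system these priorities would drive which map cells are highlighted on the head-slaved display rather than printed.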
techreport (1)
Report on Selected Issues Related to NVG Use in a Canadian Security Context, Technical Report CSE-2009-07. Shabaneh, M., Guterman, P., Zacher, J., Sakano, Y., & Allison, R. Technical Report, York University, Toronto, Canada, 2009.
@techreport{Shabaneh:2009dt,\n\taddress = {Toronto, Canada},\n\tauthor = {Shabaneh, M. and Guterman, P. and Zacher, J. and Sakano, Y. and Allison, R.S.},\n\tdate-added = {2019-02-03 10:26:47 -0500},\n\tdate-modified = {2019-02-03 10:26:47 -0500},\n\tinstitution = {York University},\n\tkeywords = {Night Vision},\n\ttitle = {Report on Selected Issues Related to NVG Use in a Canadian Security Context, Technical Report CSE-2009-07},\n\turl-1 = {https://percept.eecs.yorku.ca/papers/Report on Selected Issues Related to NVG Use in a Canadian Security Context.pdf},\n\tyear = {2009}}\n\n
2008 (3)
article (3)
Active gaze, visual look-ahead, and locomotor control. Wilkie, R. M., Wann, J. P., & Allison, R. J Exp Psychol Hum Percept Perform, 34(5): 1150-64. 2008.
@article{allison20081150-64,\n\tabstract = {The authors examined observers steering through a series of obstacles to determine the role of active gaze in shaping locomotor trajectories. Participants sat on a bicycle trainer integrated with a large field-of-view simulator and steered through a series of slalom gates. Steering behavior was determined by examining the passing distance through gates and the smoothness of trajectory. Gaze monitoring revealed which slalom targets were fixated and for how long. Participants tended to track the most immediate gate until it was about 1.5 s away, at which point gaze switched to the next slalom gate. To probe this gaze pattern, the authors then introduced a number of experimental conditions that placed spatial or temporal constraints on where participants could look and when. These manipulations resulted in systematic steering errors when observers were forced to use unnatural looking patterns, but errors were reduced when peripheral monitoring of obstacles was allowed. A steering model based on active gaze sampling is proposed, informed by the experimental conditions and consistent with observations in free-gaze experiments and with recommendations from real-world high-speed steering.},\n\tauthor = {Wilkie, R. M. and Wann, J. P. and Allison, R.S.},\n\tdate-modified = {2011-05-11 13:10:57 -0400},\n\tdoi = {10.1037/0096-1523.34.5.1150},\n\tjournal = {J Exp Psychol Hum Percept Perform},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tnumber = {5},\n\tpages = {1150-64},\n\ttitle = {Active gaze, visual look-ahead, and locomotor control},\n\tvolume = {34},\n\tyear = {2008},\n\turl-1 = {https://doi.org/10.1037/0096-1523.34.5.1150}}\n\n
Stereoscopic transparency: constraints on the perception of multiple surfaces. Tsirlin, I., Allison, R., & Wilcox, L. M. Journal of Vision, 8(5 Article 5): 1-10. 2008.
@article{allison20081-10,\n\tabstract = {Stereo-transparency is an intriguing, but not well-understood, phenomenon. In the present experiment, we simultaneously manipulated the number of overlaid planes, density of elements, and depth separation between the planes in random dot stereograms to evaluate the constraints on stereoscopic transparency. We used a novel task involving identification of patterned planes among the planes constituting the stimulus. Our data show that observers are capable of segregating up to six simultaneous overlaid surfaces. Increases in element density or number of planes have a detrimental effect on the transparency percept. The effect of increasing the inter-plane disparity is strongly influenced by other stimulus parameters. This latter result can explain a difference in the literature concerning the role of inter-plane disparity in perception of stereo-transparency. We argue that the effects of stimuli parameters on the transparency percept can be accounted for not only by inhibitory interactions, as has been suggested, but also by the inherent properties of disparity detectors.},\n\tauthor = {Tsirlin, I. and Allison, R.S. and Wilcox, L. M.},\n\tdate-modified = {2012-07-02 19:00:44 -0400},\n\tdoi = {10.1167/8.5.5},\n\tjournal = {Journal of Vision},\n\tkeywords = {Stereopsis},\n\tnumber = {5 Article 5},\n\tpages = {1-10},\n\ttitle = {Stereoscopic transparency: constraints on the perception of multiple surfaces},\n\tvolume = {8},\n\tyear = {2008},\n\turl-1 = {https://doi.org/10.1167/8.5.5}}\n\n
Accelerating self-motion displays produce more compelling vection in depth. Palmisano, S., Allison, R., & Pekin, F. Perception, 37(1): 22-33. 2008.
@article{allison200822-33,\n\tabstract = {We examined the vection in depth induced when simulated random self-accelerations (jitter) and periodic self-accelerations (oscillation) were added to radial expanding optic flow (simulating constant-velocity forward self-motion). Contrary to the predictions of sensory-conflict theory frontal-plane jitter and oscillation were both found to significantly decrease the onsets and increase the speeds of vection in depth. Depth jitter and oscillation had lesser, but still significant, effects on the speed of vection in depth. A control experiment demonstrated that adding global perspective motion which simulated a constant-velocity frontal-plane self-motion had no significant effect on vection in depth induced by the radial component of the optic flow. These results are incompatible with the notion that constant-velocity displays produce optimal vection. Rather, they indicate that displays simulating self-acceleration can often produce more compelling experiences of self-motion in depth.},\n\tauthor = {Palmisano, S. and Allison, R.S. and Pekin, F.},\n\tdate-modified = {2011-05-11 13:15:50 -0400},\n\tjournal = {Perception},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tnumber = {1},\n\tpages = {22-33},\n\ttitle = {Accelerating self-motion displays produce more compelling vection in depth},\n\turl-1 = {https://percept.eecs.yorku.ca/papers/Palmisano,Allison&Pekin.pdf},\n\tvolume = {37},\n\tyear = {2008}}\n\n
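The display manipulation described above, constant-velocity radial flow plus added random jitter or periodic oscillation of the vantage point, amounts to a simple camera trajectory. The sketch below generates such trajectories; the speed, amplitudes, and frame rate are hypothetical illustration values, not the stimulus parameters from the paper.

```python
# Camera trajectories for a jittered/oscillating vection display: constant
# forward self-motion in depth (z) plus a frontal-plane (x) perturbation.
import numpy as np

fps, duration_s = 60, 10.0
t = np.arange(0.0, duration_s, 1.0 / fps)

forward_speed = 1.5              # m/s; constant-velocity forward self-motion
z = forward_speed * t            # smooth radial-flow component

rng = np.random.default_rng(0)
x_jitter = 0.02 * rng.standard_normal(t.size)          # random frontal-plane jitter (m)
x_oscillation = 0.05 * np.sin(2.0 * np.pi * 1.0 * t)   # 1 Hz periodic oscillation (m)

# Each row is a per-frame camera position (x, y, z) fed to the renderer.
jitter_path = np.column_stack([x_jitter, np.zeros_like(t), z])
oscillation_path = np.column_stack([x_oscillation, np.zeros_like(t), z])
```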
incollection (9)
Influence of relative saccade direction on detection of transsaccadic natural scene transitions. Sadr, S., Allison, R., Vinnikov, M., & Swierad, D. In Vision Sciences Society Annual Conference, Journal of Vision, volume 8, pages 933, 933a. 2008.
@incollection{Sadr:2008um,\n\tabstract = {Saccadic eye movements are rapid shifts of gaze that direct the fovea from one point of interest to another. On each saccade, the entire scene streams across the retina at hundreds of degrees per second. However, this streaming is not apparent, due to a reduced visual sensitivity toward motion during saccades. We have observed that when scenes translate transsaccadically (during saccades) they are perceived as moving slower than equivalent sized intersaccadic transitions. We confirmed these findings using a magnitude estimation technique (Sadr, Allison & Vinnikov, ECVP 2007). We further explored the dependence of transsaccadic motion perception on the direction of shift in a 4AFC experiment. We examined the effect of different scene transitions relative to saccade directions both horizontally and vertically, and subjects had to indicate the direction of the scene transitions if detected. Subjects sequentially fixated blinking fixation points ($20^{\\circ}$ apart) indicated on each image based on horizontal or vertical saccade direction conditions. We conclude that during saccades, the magnitude of the velocity signal is attenuated as well as its detectability. Furthermore, the extent of saccadic suppression depends on the relative saccade direction and the direction of scene transition.},\n\tauthor = {Sadr, S. and Allison, R.S. and Vinnikov, M. and Swierad, D.},\n\tbooktitle = {Vision Sciences Society Annual Conference, Journal of Vision},\n\tdate-added = {2011-05-06 16:15:13 -0400},\n\tdate-modified = {2012-07-02 18:04:45 -0400},\n\tdoi = {10.1167/8.6.933},\n\tkeywords = {Eye Movements & Tracking},\n\tnumber = {6},\n\torganization = {Vision Sciences Society},\n\tpages = {933, 933a},\n\ttitle = {Influence of relative saccade direction on detection of transsaccadic natural scene transitions},\n\tvolume = {8},\n\tyear = {2008},\n\turl-1 = {https://doi.org/10.1167/8.6.933}}\n\n
Accommodation and Pupil Responses to Random-dot Stereograms. Suryakumar, R., & Allison, R. In Association for Research in Vision and Ophthalmology (ARVO) Annual Meeting, volume 49. April 27th - May 1st 2008.
@incollection{Suryakumar:2008er,\n\tabstract = {Purpose: Recently, it has been shown that a transient pupil constriction occurs following presentation of a random-dot stereogram with uncrossed disparity (Li, Z and Sun, F. Exp Br Res, 2006, 168:436). We investigated the dynamic characteristics of such pupil responses and whether they were coupled with changes in ocular focus.\n\nMethods: Four subjects (mean age=$26.8\\pm 3.6$yrs) participated in the study. Stereo half images were displayed on a pair of computer monitors placed at a distance of 60 cm in a Wheatstone stereoscope arrangement. Subjects fixated the center of the random-dot stereogram which alternated between depicting a flat plane and a 0.5 cpd, 30 arc-minute peak disparity, sinusoidal corrugation in depth. In all cases, fixation remained constant at the 60cm screen distance. Accommodation and pupil responses were measured monocularly using a custom built, high-speed photorefractor at 100Hz and analyzed offline. The onset and end of the accommodation and pupil responses were identified to estimate amplitude. The pupil responses were then differentiated to estimate peak velocity.\n\nResults: A transient pupil constriction and positive accommodation were observed during both uncrossed and crossed disparity presentations (Uncrossed: $0.26\\pm 0.12$mm, $0.20\\pm 0.06$D; Crossed: $0.41\\pm 0.40$mm, $0.31\\pm 0.2$D). The peak velocity of pupil responses changed significantly as a function of amplitude (y=1.12x-0.38, $R^2=0.34$, $p<0.05$) and initial pupil diameter (y=0.28x-2.41, $R^2=0.64$, $p<0.05$). Changes in pupil size were associated with changes in accommodation. However, the ratio of pupil change to accommodation was not significantly different between crossed and uncrossed disparity (Uncrossed: $1.55\\pm 0.69$mm/D; Crossed: $1.21\\pm 0.51$mm/D; $p>0.05$).\n\nConclusions: While fixation was maintained at the plane of the screen, the finding that pupil and accommodation changes have the same sign regardless of the sign of disparity suggests the response was driven by the apparent depth in the stimulus rather than its physical distance. The strength of the coupling between accommodation and pupil responses appears to be similar for crossed and uncrossed disparity. The amplitude and velocity of pupil responses depend on initial (starting) pupil diameter, confirming the non-linearity in the operating range of the pupil.},\n\tauthor = {Suryakumar, R. and Allison, R.S.},\n\tbooktitle = {Association for Research in Vision and Ophthalmology (ARVO) Annual Meeting},\n\tdate-added = {2011-05-06 16:09:53 -0400},\n\tdate-modified = {2016-01-03 03:28:10 +0000},\n\tkeywords = {Stereopsis},\n\tmonth = {April 27th - May 1st},\n\torganization = {Association for Research in Vision and Ophthalmology},\n\ttitle = {Accommodation and Pupil Responses to Random-dot Stereograms},\n\turl-1 = {https://percept.eecs.yorku.ca/papers/raj arvo2008 abstract.pdf},\n\turl-2 = {http://abstracts.iovs.org/cgi/content/abstract/49/5/1792},\n\tvolume = {49},\n\tyear = {2008}}\n\n
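The velocity analysis in the Methods above, differentiating 100 Hz pupil traces to obtain peak constriction velocity, can be sketched briefly. The trace below is synthetic and its response shape is hypothetical; only the differentiation step mirrors the described analysis.

```python
# Peak pupil-velocity estimation from a 100 Hz trace, as in the Methods above.
# The pupil-diameter signal here is synthetic (a Gaussian-shaped constriction).
import numpy as np

fs = 100.0                                   # photorefractor sampling rate (Hz)
t = np.arange(0.0, 2.0, 1.0 / fs)

# Hypothetical transient constriction: 5.5 mm baseline, ~0.3 mm dip at t = 0.5 s.
pupil_mm = 5.5 - 0.3 * np.exp(-((t - 0.5) ** 2) / (2 * 0.05 ** 2))

velocity = np.gradient(pupil_mm, 1.0 / fs)   # mm/s via central differences
peak_constriction = velocity.min()           # most negative slope = fastest constriction
amplitude = pupil_mm.max() - pupil_mm.min()  # response amplitude (baseline to trough)

print(f"amplitude: {amplitude:.2f} mm, peak constriction velocity: {peak_constriction:.1f} mm/s")
```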
Landing visually with flare: Pilots do it better. Palmisano, S., Favell, S., Satchler, B., & Allison, R. In Asia-Pacific Conference on Vision. Brisbane, Australia, July 2008.
@incollection{Palmisano:2008tl,\n\taddress = {Brisbane, Australia},\n\tauthor = {Palmisano, S. and Favell, S. and Satchler, B. and Allison, R.},\n\tbooktitle = {Asia-Pacific Conference on Vision},\n\tdate-added = {2011-05-06 16:02:11 -0400},\n\tdate-modified = {2011-05-18 16:04:50 -0400},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tmonth = {07},\n\ttitle = {Landing visually with flare: Pilots do it better},\n\tyear = {2008}}\n\n
Visual Perception of smooth and perturbed self-motion. Allison, R., Zacher, J., & Palmisano, S. In CSA Life and Physical Science Workshop 2008: Scientific Advancement and Planning for Future Missions, Canadian Space Agency, Life and Physical Science Directorate, volume 20. St. Hubert, Quebec, March 3rd-5th 2008.
@incollection{Allison:2008vy,\n\tabstract = {Successful adaptation to the microgravity environment of space and re-adaptation to gravity on earth requires recalibration of visual and vestibular signals. Despite decades of experimentation, motion sickness, spatial disorientation, reorientation illusions and degraded visuomotor performance continue to impact the availability and effectiveness of astronauts. We have found that incorporating jitter of the vantage point into visual displays produces more compelling illusions of self-motion (vection), despite generating greater sensory conflicts. We will discuss a series of ground-based experiments that examine a range of possible explanations for this phenomenon. Recent neuroimaging and neurophysiological data suggest that accelerating optic flow stimuli such as the jittering optic flow used in our research may result in suppression of signals in vestibular cortex. Such visual modulation of vestibular signals is potentially important to understanding the initial response and adaptation to microgravity. Currently it is unclear what role gravity plays in the potentiation of vection with jittering optic flow. Ground and space based experiments will provide a unique opportunity to explore the jitter effect during periods of adaptation to altered gravity and to complement other research looking at vection on ISS. Our goals are to understand the role of gravity in jitter-enhanced vection, to develop the theory of how vestibular and visual signals are recalibrated in altered gravity and to study the time course of this adaptation.},\n\taddress = {St. Hubert, Quebec},\n\tauthor = {Allison, R.S. and Zacher, J.E. and Palmisano, S.A.},\n\tbooktitle = {CSA Life and Physical Science Workshop 2008: Scientific Advancement and Planning for Future Missions, Canadian Space Agency, Life and Physical Science Directorate},\n\tdate-added = {2011-05-06 15:58:30 -0400},\n\tdate-modified = {2016-01-03 03:26:40 +0000},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tmonth = {March 3rd-5th},\n\ttitle = {Visual Perception of smooth and perturbed self-motion},\n\tvolume = {20},\n\tyear = {2008}}\n\n
Binocular depth and slant perception beyond ambient space. Gillam, B., Allison, R., & Palmisano, S. In Australian Journal of Psychology (EPC abstracts), volume 60 (Suppl), pages 72. Fremantle, Australia, March 28th-30th 2008.
@incollection{Gillam:2008gs,\n\taddress = {Fremantle, Australia},\n\tauthor = {Gillam, B. and Allison, R.S. and Palmisano, S.},\n\tbooktitle = {Australian Journal of Psychology (EPC abstracts)},\n\tdate-added = {2011-05-06 15:56:33 -0400},\n\tdate-modified = {2015-11-17 14:16:38 +0000},\n\tdoi = {10.1080/00049530802385541},\n\tkeywords = {Stereopsis},\n\tmonth = {March 28th-30th},\n\tpages = {72},\n\ttitle = {Binocular depth and slant perception beyond ambient space},\n\tvolume = {60 (Suppl)},\n\tyear = {2008},\n\turl-1 = {https://doi.org/10.1080/00049530802385541}}\n\n
Visual Touchdown Point Perception During Simulated Aircraft Landing. Palmisano, S., & Allison, R. In Australian Journal of Psychology (35th Australasian experimental psychology conference, EPC abstracts), pages 92-93. Fremantle, Australia, March 28th-30th 2008.
@incollection{Palmisano:2008kg,\n\tabstract = {This study investigated visual touchdown point perception during simulated fixed-wing aircraft landing approaches. Experiments examined the effects of day versus night lighting, smooth versus buffeting simulated approaches, as well as a variety of other visual scene manipulations, including the presence or absence of: (i) 3-D buildings; (ii) a runway outline; (iii) a false explicit horizon; (iv) a true explicit horizon; and (v) different types of ground plane texture (random vs grid). After 4s exposure to each simulated landing approach, participants pointed to the location of their perceived touchdown point using the computer's mouse (performance feedback was provided on some trials). While our lighting, scenery and feedback manipulations significantly altered touchdown point judgments, performance was unacceptably imprecise and biased in all of the conditions tested. The findings provide further evidence that, by themselves, optic flow based perceptions of touchdown point are not sufficient for a pilot to safely land an airplane. This research was supported by ARC Discovery grant DP0772398.},\n\taddress = {Fremantle, Australia},\n\tauthor = {Palmisano, S. and Allison, R.},\n\tbooktitle = {Australian Journal of Psychology (35th Australasian experimental psychology conference, EPC abstracts)},\n\tdate-added = {2011-05-06 15:53:16 -0400},\n\tdate-modified = {2011-05-22 13:37:51 -0400},\n\tdoi = {10.1080/00049530802385541},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tmonth = {March 28th-30th},\n\tpages = {92-93},\n\ttitle = {Visual Touchdown Point Perception During Simulated Aircraft Landing},\n\tyear = {2008},\n\turl-1 = {https://doi.org/10.1080/00049530802385541}}\n\n
Comparison of depth percepts created by binocular disparity, Panum's limiting case, and monoptic depth. Fukuda, K., Wilcox, L. M., Allison, R. S., & Howard, I. P. In Journal of Vision, volume 8, pages 1086-1086. 2008.
@incollection{allison20081086-1086,\n\tabstract = {Sensations of depth can be produced by diplopic images with horizontal disparity beyond the fusion limit (conventional stereopsis), a monocular image flanking a binocular image (Panum's limiting case), and an eccentric monocular image (monoptic depth, Kaye 1978; Wilcox et al. 2007). Conceivably, depth perception in Panum's limiting case could be explained by stereopsis (double-duty matching, Hering 1879), monoptic depth or another mechanism entirely. Our goal is to determine which of these options is valid. Subjects judged the magnitude of perceived depth of a target stimulus viewed for 67 ms relative to a prior fixation point. The target was (1) a monocular vertical line with variable horizontal offset relative to a midline monocular line seen by the other eye (stereoscopic), (2) a monocular line with variable offset relative to a midline binocular line (Panum's limiting case), and (3) a monocular line with variable offset relative to the prior fixation point (monoptic). For Panum's limiting case, apparent depth at first increased with increasing lateral offset of the monocular line. However, this occurred only for offsets of up to 15 and 45 arcmin on the temporal and nasal side of retina, respectively. At larger offsets, depth was similar to that perceived from monoptic targets. In contrast, perceived depth from stereopsis increased with increasing disparity of up to $1^{\\circ}$ and remained constant up to a disparity of at least $2^{\\circ}$ (stimuli became diplopic at 30 arcmin). The magnitude of perceived depth was much smaller in monoptic compared with stereoscopic conditions, at all offsets. The distinct properties of depth perceived with these three types of stimuli suggest that they have different physiological substrates, and that depth from Panum's limiting case is not simply due to stereoscopic matching.},\n\tauthor = {Fukuda, Kazuho and Wilcox, Laurie M. and Allison, Robert S. and Howard, Ian P.},\n\tbooktitle = {Journal of Vision},\n\tdate-modified = {2012-07-02 17:51:44 -0400},\n\tdoi = {10.1167/8.6.1086},\n\tjournal = {Journal of Vision},\n\tkeywords = {Depth perception},\n\tnumber = {6},\n\tpages = {1086-1086},\n\ttitle = {Comparison of depth percepts created by binocular disparity, Panum's limiting case, and monoptic depth},\n\tvolume = {8},\n\tyear = {2008},\n\turl-1 = {https://doi.org/10.1167/8.6.1086}}\n\n
Gain of cyclovergence as a function of stimulus location. Daniels, N. T., Howard, I. P., & Allison, R. S. In Journal of Vision, volume 8, pages 646-646. 2008.
@incollection{allison2008646-646,\n\tabstract = {Earlier work from this laboratory established that cyclovergence is induced more effectively by vertical shear disparity than by horizontal shear disparity in a large textured surface. We predicted that vertical shear disparity confined to stimuli along the horizontal meridian would evoke more cyclovergence than stimuli confined to the periphery. That is, shear disparity in the periphery can arise from surface inclination, while disparity along the central meridian arises only from torsional misalignment of the eyes. Binocular dichoptic stimuli were rotated in counterphase through $5^{\\circ}$ peak-to-peak disparity at 0.1 Hz and presented in a mirror stereoscope. The stimuli were $70^{\\circ}$ long randomly spaced lines that (1) filled a $70^{\\circ}$ diameter circle, (2) were confined to a horizontal band $7^{\\circ}$ wide, (3) filled the $70^{\\circ}$ circle but with the central horizontal band blank. We used scleral search coils to measure cyclovergence of three subjects as they fixated at the center of planar stimuli. As predicted, the mean gain of cyclovergence was significantly higher (0.23) for the central band than for the display with the central band blank (0.12). However, the gain for the full $70^{\\circ}$ display was higher (0.36) than that for the central band. We conclude that stimuli along the central horizontal meridian provide a stronger stimulus for cyclovergence than do stimuli outside the central meridian. However, increasing the total area of the stimulus also increases the gain of cyclovergence.<i>Acknowledgements: Supported by grants from the National Science and Engineering Council of Canada and the Canadian Institutes of Health Research.</i>},\n\tauthor = {Daniels, Nicole T. and Howard, Ian P. and Allison, Robert S.},\n\tbooktitle = {Journal of Vision},\n\tdate-modified = {2012-07-02 17:51:09 -0400},\n\tdoi = {10.1167/8.6.646},\n\tjournal = {Journal of Vision},\n\tkeywords = {Vergence},\n\tnumber = {6},\n\tpages = {646-646},\n\ttitle = {Gain of cyclovergence as a function of stimulus location},\n\tvolume = {8},\n\tyear = {2008},\n\turl-1 = {https://doi.org/10.1167/8.6.646}}\n\n
Binocular slant discrimination beyond interaction space. Allison, R. S., Gillam, B. J., & Palmisano, S. A. In Journal of Vision, volume 8, pages 536-536. 2008.
@incollection{allison2008536-536,\n\tabstract = {Effective locomotion depends on judgements of the support, passability and effort to traverse provided by terrain several metres away. Elementary texts commonly assert that stereopsis per se is ineffective in these judgements beyond modest distances. He et al. (Perception, 2004, 33: 789) proposed that vergence and stereopsis calibrate and anchor depth percepts in near space that are then extended to larger distances by integrating monocular cues over the continuous ground plane. However, stereopsis has a much larger theoretical range and we have shown binocular performance improvements to at least 18.0m (VSS2007). Here we evaluate the contribution of binocular vision to judgements of ground surface properties. A computer-controlled constellation of LEDs was distributed throughout a volume of space centred 4.5 or 9.0 metres from the subject. LEDs could be selectively lit to create a single ground plane or two planes either adjacent or interleaved (simulating uneven terrain). In separate 2AFC experiments subjects discriminated: 1) the absolute slant of a single plane; 2) the relative slant between two adjacent planes; or 3) whether all the lights lay in a single plane or not (surface smoothness). Viewing was binocular or monocular. Binocular discrimination of absolute and relative slant showed less bias and was more precise than monocular discrimination for all tasks at both distances. Judgements of surface smoothness were very difficult monocularly compared to binocularly, as reflected in substantial differences in sensitivity (d'). Binocular vision is useful for judgements of the layout and regularity of terrain to at least 9.0 metres (an important range for moment-to-moment path planning during walking, running and assisted travel). In sum, binocular vision can contribute to precise judgements of ground surface properties. This contribution is not simply limited to calibration and anchoring of monocular cues in personal space.},\n\tauthor = {Allison, Robert S. and Gillam, Barbara J. and Palmisano, Stephen A.},\n\tbooktitle = {Journal of Vision},\n\tdate-modified = {2012-07-02 17:50:42 -0400},\n\tdoi = {10.1167/8.6.536},\n\tjournal = {Journal of Vision},\n\tkeywords = {Stereopsis},\n\tnumber = {6},\n\tpages = {536-536},\n\ttitle = {Binocular slant discrimination beyond interaction space},\n\tvolume = {8},\n\tyear = {2008},\n\turl-1 = {https://doi.org/10.1167/8.6.536}}\n\n
\n
\n\n\n
\n Effective locomotion depends on judgements of the support, passability and effort to traverse provided by terrain several metres away. Elementary texts commonly assert that stereopsis per se is ineffective in these judgements beyond modest distances. He et al. (Perception, 2004, 33: 789) proposed that vergence and stereopsis calibrate and anchor depth percepts in near space that are then extended to larger distances by integrating monocular cues over the continuous ground plane. However, stereopsis has a much larger theoretical range and we have shown binocular performance improvements to at least 18.0m (VSS2007). Here we evaluate the contribution of binocular vision to judgements of ground surface properties. A computer-controlled constellation of LEDs was distributed throughout a volume of space centred 4.5 or 9.0 metres from the subject. LEDs could be selectively lit to create a single ground plane or two planes either adjacent or interleaved (simulating uneven terrain). In separate 2AFC experiments subjects discriminated: 1) the absolute slant of a single plane; 2) the relative slant between two adjacent planes; or 3) whether all the lights lay in a single plane or not (surface smoothness). Viewing was binocular or monocular. Binocular discrimination of absolute and relative slant showed less bias and was more precise than monocular discrimination for all tasks at both distances. Judgements of surface smoothness were very difficult monocularly compared to binocularly, as reflected in substantial differences in sensitivity (d'). Binocular vision is useful for judgements of the layout and regularity of terrain to at least 9.0 metres (an important range for moment-to-moment path planning during walking, running and assisted travel). In sum, binocular vision can contribute to precise judgements of ground surface properties. This contribution is not simply limited to calibration and anchoring of monocular cues in personal space.\n
\n\n\n
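The d' statistic reported for the smoothness judgements is the standard signal-detection sensitivity measure, d' = z(hit rate) - z(false-alarm rate). As a reminder of the computation only, a minimal Python sketch (the helper and the example rates are illustrative, not the study's data or analysis code):

    import numpy as np
    from scipy.stats import norm

    def d_prime(hit_rate, false_alarm_rate):
        """d' = z(H) - z(FA); rates are clipped away from 0 and 1 so the
        inverse-normal transform stays finite."""
        h = np.clip(hit_rate, 0.01, 0.99)
        fa = np.clip(false_alarm_rate, 0.01, 0.99)
        return norm.ppf(h) - norm.ppf(fa)

    print(d_prime(0.90, 0.10))  # ~2.56: high sensitivity (e.g. binocular viewing)
    print(d_prime(0.60, 0.45))  # ~0.38: near chance (e.g. monocular viewing)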
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n inproceedings\n \n \n (2)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Real-Time Simulation of Visual Defects with Gaze-Contingent Display.\n \n \n \n \n\n\n \n Vinnikov, M., Allison, R., & Swierad, D.\n\n\n \n\n\n\n In Spencer, S. N., editor(s), Proceedings of the Eye Tracking Research and Applications Symposium, pages 127-130, New York, 2008. Assoc Computing Machinery\n \n\n\n\n
\n\n\n\n \n \n \"Real-Time-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@inproceedings{allison2008127-130,\n\tabstract = {Effective management and treatment of glaucoma and other visual diseases depend on early diagnosis. However, early symptoms of glaucoma often go unnoticed until a significant portion of the visual field is lost. The ability to simulate the visual consequences of the disease offers potential benefits for patients and clinical education as well as for public awareness of its signs and symptoms. Experiments using simulated visual field defects could identify changes in behaviour, for example during driving, that one uses to compensate at the early stages of the disease's development. Furthermore, by understanding how visual field defects affect performance of visual tasks, we can help develop new strategies to cope with other devastating diseases such as macular degeneration. A Gaze-Contingent Display (GCD) system was developed to simulate an arbitrary visual field in a virtual environment. The system can estimate real-time gaze direction and eye position in earth-fixed coordinates during relatively large head movement, and thus it can be used in immersive projection based VE systems like the CAVE (TM). Arbitrary visual fields are simulated via OpenGL and Shading Language capabilities and techniques that are supported by the GPU, thus enabling fast performance in real time. In order to simulate realistic visual defects, the system performs multiple image processing operations including change in acuity, brightness, color, glare and image distortion. The final component of the system simulates different virtual scenes that the participant can navigate through and explore. As a result, this system creates an experimental environment to study the effects of low vision on everyday tasks such as driving and navigation.},\n\taddress = {New York},\n\tauthor = {Vinnikov, M. and Allison, R.S. and Swierad, D.},\n\tbooktitle = {Proceedings of the Eye Tracking Research and Applications Symposium},\n\tdate-modified = {2012-07-02 22:36:37 -0400},\n\tdoi = {10.1145/1344471.1344504},\n\teditor = {Spencer, S. N.},\n\tkeywords = {Eye Movements & Tracking},\n\tpages = {127-130},\n\tpublisher = {Assoc Computing Machinery},\n\ttitle = {Real-Time Simulation of Visual Defects with Gaze-Contingent Display},\n\turl-1 = {http://dx.doi.org/10.1145/1344471.1344504},\n\tyear = {2008},\n\turl-1 = {https://doi.org/10.1145/1344471.1344504}}\n\n
\n
\n\n\n
\n Effective management and treatment of glaucoma and other visual diseases depend on early diagnosis. However, early symptoms of glaucoma often go unnoticed until a significant portion of the visual field is lost. The ability to simulate the visual consequences of the disease offers potential benefits for patients and clinical education as well as for public awareness of its signs and symptoms. Experiments using simulated visual field defects could identify changes in behaviour, for example during driving, that one uses to compensate at the early stages of the disease's development. Furthermore, by understanding how visual field defects affect performance of visual tasks, we can help develop new strategies to cope with other devastating diseases such as macular degeneration. A Gaze-Contingent Display (GCD) system was developed to simulate an arbitrary visual field in a virtual environment. The system can estimate real-time gaze direction and eye position in earth-fixed coordinates during relatively large head movement, and thus it can be used in immersive projection based VE systems like the CAVE (TM). Arbitrary visual fields are simulated via OpenGL and Shading Language capabilities and techniques that are supported by the GPU, thus enabling fast performance in real time. In order to simulate realistic visual defects, the system performs multiple image processing operations including change in acuity, brightness, color, glare and image distortion. The final component of the system simulates different virtual scenes that the participant can navigate through and explore. As a result, this system creates an experimental environment to study the effects of low vision on everyday tasks such as driving and navigation.\n
\n\n\n
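The abstract notes that the visual-field simulation ran as OpenGL shaders on the GPU. Purely to illustrate the compositing idea, here is a CPU-side sketch in Python that blends a blurred copy of a frame into a soft-edged region around the current gaze point; the function name, parameters, and OpenCV dependency are assumptions for illustration, not the authors' implementation:

    import numpy as np
    import cv2  # assumed available for the blur

    def apply_scotoma(frame, gaze_xy, radius_px=80, sigma_px=30, blur_sigma=15):
        """Blend a heavily blurred copy of a colour frame (H x W x 3) into the
        region around the gaze point, approximating a central field defect."""
        h, w = frame.shape[:2]
        yy, xx = np.mgrid[0:h, 0:w]
        d = np.hypot(xx - gaze_xy[0], yy - gaze_xy[1])
        # Mask is 1 inside the scotoma and falls off with a Gaussian skirt.
        mask = np.exp(-np.clip(d - radius_px, 0, None) ** 2 / (2 * sigma_px ** 2))
        blurred = cv2.GaussianBlur(frame, (0, 0), blur_sigma)
        mask3 = mask[..., None]
        return (mask3 * blurred + (1.0 - mask3) * frame).astype(frame.dtype)

    # Per display frame: read gaze_xy from the eye tracker, then recomposite.

In the real system the same blend runs per fragment in a shader, which is what makes gaze-contingent update rates achievable.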
\n\n\n
\n \n\n \n \n \n \n \n \n Computer Gaming for Vision Therapy.\n \n \n \n \n\n\n \n Carvelho, T., Allison, R., Irving, E. L., & Herriot, C.\n\n\n \n\n\n\n In 2008 Virtual Rehabilitation, pages 198-204, Vancouver, Canada, 2008. IEEE, New York\n \n\n\n\n
\n\n\n\n \n \n \"Computer-1\n  \n \n \n \"Computer-2\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@inproceedings{allison2008198-204,\n\tabstract = {Convergence insufficiency is characterized by an inability to maintain effortless alignment of the two eyes (binocular convergence) while performing near tasks. Conventional rehabilitative vision therapy for the condition is monotonous and dull, leading to low levels of compliance. If the therapy is not performed then improvements in the condition are unlikely. This paper examines the use of computer games as a new delivery paradigm for vision therapy, specifically at how they can be used in the treatment of convergence insufficiency while at home. A game was created and tested in a small scale clinical trial. Results show clinical improvements, as well as high levels of compliance and motivation. Additionally, the game was able to objectively track patient progress and compliance.},\n\taddress = {Vancouver, Canada},\n\tauthor = {Carvelho, T. and Allison, R.S. and Irving, E. L. and Herriot, C.},\n\tbooktitle = {2008 Virtual Rehabilitation},\n\tdate-modified = {2012-07-02 22:39:29 -0400},\n\tdoi = {10.1109/ICVR.2008.4625160},\n\tkeywords = {Gaming for Vision Therapy},\n\tpages = {198-204},\n\tpublisher = {IEEE, New York},\n\ttitle = {Computer Gaming for Vision Therapy},\n\turl-1 = {http://dx.doi.org/10.1109/ICVR.2008.4625160},\n\turl-2 = {http://dx.doi.org/10.1109/ICVR.2008.4625160},\n\tyear = {2008},\n\turl-1 = {https://doi.org/10.1109/ICVR.2008.4625160}}\n\n
\n
\n\n\n
\n Convergence insufficiency is characterized by an inability to maintain effortless alignment of the two eyes (binocular convergence) while performing near tasks. Conventional rehabilitative vision therapy for the condition is monotonous and dull, leading to low levels of compliance. If the therapy is not performed then improvements in the condition are unlikely. This paper examines the use of computer games as a new delivery paradigm for vision therapy, specifically at how they can be used in the treatment of convergence insufficiency while at home. A game was created and tested in a small scale clinical trial. Results show clinical improvements, as well as high levels of compliance and motivation. Additionally, the game was able to objectively track patient progress and compliance.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n\n\n\n
\n
\n\n
\n
\n  \n 2007\n \n \n (4)\n \n \n
\n
\n \n \n
\n
\n  \n article\n \n \n (1)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Analysis of the influence of vertical disparities arising in toed-in stereoscopic cameras.\n \n \n \n \n\n\n \n Allison, R.\n\n\n \n\n\n\n Journal of Imaging Science and Technology, 51(4): 317-327. 2007.\n \n\n\n\n
\n\n\n\n \n \n \"Analysis-1\n  \n \n \n \"Analysis-2\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@article{allison2007317-327,\n\tabstract = {A basic task in the construction and use of a stereoscopic camera and display system is the alignment of the left and right images appropriately, a task generally referred to as camera convergence. Convergence of the real or virtual stereoscopic cameras can shift the range of portrayed depth to improve visual comfort, can adjust the disparity of targets to bring them nearer to the screen and reduce accommodation-vergence conflict, or can bring objects of interest into the binocular field of view. Although camera convergence is acknowledged as a useful function, there has been considerable debate over the transformation required. It is well known that rotational camera convergence or {`}toe-in{'} distorts the images in the two cameras producing patterns of horizontal and vertical disparities that can cause problems with fusion of the stereoscopic imagery. Behaviorally, similar retinal vertical disparity patterns are known to correlate with viewing distance and strongly affect perception of stereoscopic shape and depth. There has been little analysis of the implications of recent findings on vertical disparity processing for the design of stereoscopic camera and display systems. I ask how such distortions caused by camera convergence affect the ability to fuse and perceive stereoscopic images. © 2007 Society for Imaging Science and Technology.},\n\tauthor = {Allison, R.S.},\n\tdate-modified = {2012-07-02 19:04:54 -0400},\n\tdoi = {10.2352/J.ImagingSci.Technol.(2007)51:4(317)},\n\tjournal = {Journal of Imaging Science and Technology},\n\tkeywords = {Stereopsis},\n\tnumber = {4},\n\tpages = {317-327},\n\ttitle = {Analysis of the influence of vertical disparities arising in toed-in stereoscopic cameras},\n\turl-1 = {http://dx.doi.org/10.2352/J.ImagingSci.Technol.(2007)51:4(317)},\n\turl-2 = {http://dx.doi.org/10.2352/J.ImagingSci.Technol.(2007)51:4(317)},\n\tvolume = {51},\n\tyear = {2007},\n\turl-1 = {https://doi.org/10.2352/J.ImagingSci.Technol.(2007)51:4(317)}}\n\n
\n
\n\n\n
\n A basic task in the construction and use of a stereoscopic camera and display system is the alignment of the left and right images appropriately, a task generally referred to as camera convergence. Convergence of the real or virtual stereoscopic cameras can shift the range of portrayed depth to improve visual comfort, can adjust the disparity of targets to bring them nearer to the screen and reduce accommodation-vergence conflict, or can bring objects of interest into the binocular field of view. Although camera convergence is acknowledged as a useful function, there has been considerable debate over the transformation required. It is well known that rotational camera convergence or `toe-in' distorts the images in the two cameras producing patterns of horizontal and vertical disparities that can cause problems with fusion of the stereoscopic imagery. Behaviorally, similar retinal vertical disparity patterns are known to correlate with viewing distance and strongly affect perception of stereoscopic shape and depth. There has been little analysis of the implications of recent findings on vertical disparity processing for the design of stereoscopic camera and display systems. I ask how such distortions caused by camera convergence affect the ability to fuse and perceive stereoscopic images. © 2007 Society for Imaging Science and Technology.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n incollection\n \n \n (14)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n Simulation of NVG-aided flight over terrain.\n \n \n \n\n\n \n Vinnikov, M., & Allison, R.\n\n\n \n\n\n\n In OCE Discovery 2007. Toronto, Canada, 05 2007.\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Vinnikov:2007fd,\n\taddress = {Toronto, Canada},\n\tauthor = {Vinnikov, M. and Allison, R.S.},\n\tbooktitle = {OCE Discovery 2007},\n\tdate-added = {2011-05-09 11:19:31 -0400},\n\tdate-modified = {2011-05-11 11:59:49 -0400},\n\tkeywords = {Night Vision},\n\tmonth = {05},\n\ttitle = {Simulation of NVG-aided flight over terrain},\n\tyear = {2007}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Assessing Image Intensifier Integration in Emergency and Security Response.\n \n \n \n \n\n\n \n Guterman, P., & Allison, R.\n\n\n \n\n\n\n In OCE Discovery 2007. Toronto, Canada, 05 2007.\n \n\n\n\n
\n\n\n\n \n \n \"Assessing-1\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Guterman:2007fb,\n\taddress = {Toronto, Canada},\n\tauthor = {Guterman, P. and Allison, R.S.},\n\tbooktitle = {OCE Discovery 2007},\n\tdate-added = {2011-05-09 11:17:59 -0400},\n\tdate-modified = {2011-05-11 11:59:49 -0400},\n\tkeywords = {Night Vision},\n\tmonth = {05},\n\ttitle = {Assessing Image Intensifier Integration in Emergency and Security Response},\n\turl-1 = {http://ocediscovery.com/video/PearlGuterman/Pearl_Guterman-OCE_Video_Contest.ppt},\n\tyear = {2007}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n The Visual Control of Locomotion as a Function of the Richness of the Visual Environment.\n \n \n \n\n\n \n Guterman, P., Allison, R., & Rushton, S.\n\n\n \n\n\n\n In CVR Conference 2007: Cortical Mechanisms of Vision. Toronto, Canada, 06 2007.\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Guterman-P.:2007kq,\n\taddress = {Toronto, Canada},\n\tauthor = {Guterman, P.S. and Allison, R.S. and Rushton, S.K.},\n\tbooktitle = {CVR Conference 2007: Cortical Mechanisms of Vision},\n\tdate-added = {2011-05-09 11:16:14 -0400},\n\tdate-modified = {2012-09-24 14:46:23 +0000},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tmonth = {06},\n\torganization = {Centre for Vision Research, York University},\n\ttitle = {The Visual Control of Locomotion as a Function of the Richness of the Visual Environment},\n\tyear = {2007}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Real time simulation of impaired vision in naturalistic settings with gaze-contingent display.\n \n \n \n\n\n \n Vinnikov, M., Allison, R., & Huang, H.\n\n\n \n\n\n\n In UW-IEEE joint Symposium on Biomedical Imaging and Computer Vision (BICV 2007). Waterloo, Canada, 09 2007.\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Vinnikov:2007qb,\n\tabstract = {Effective management and treatment of glaucoma and other visual diseases depend on early diagnosis. However, early symptoms of many severe visual diseases often go unnoticed until a significant portion of the visual field is lost. The ability to simulate the visual consequences of the disease offers potential benefits for patients and clinical education as well as for public awareness of its signs and symptoms. Experiments using simulated visual field defects could identify changes in behaviour, for example during driving, that one uses to compensate at the early stages of the disease's development. Furthermore, by understanding how visual field defects affect performance in visual tasks, we can help develop new strategies to cope with other devastating diseases such as macular degeneration. A Gaze-Contingent Display (GCD) system was developed to simulate an arbitrary visual field in a virtual environment. The system can estimate real-time gaze direction and eye position in earth-fixed coordinates during relatively large head movement, and thus it can be used in immersive projection based VE systems like the CAVE™. Arbitrary visual fields are simulated via OpenGL and Shading Language capabilities and techniques that are supported by the GPU, thus enabling fast performance in real time. In order to simulate realistic visual defects, the system performs multiple image processing operations including change in acuity, brightness, color, glare and image distortion. The final component of the system simulates different virtual scenes that the participant can navigate through and explore. As a result, this system creates an experimental environment that could be useful for studying the effects of low vision on everyday tasks such as driving and navigation.},\n\taddress = {Waterloo, Canada},\n\tauthor = {Vinnikov, M. and Allison, R.S. and Huang, H.},\n\tbooktitle = {UW-IEEE joint Symposium on Biomedical Imaging and Computer Vision (BICV 2007)},\n\tdate-added = {2011-05-09 11:14:43 -0400},\n\tdate-modified = {2011-05-18 16:11:47 -0400},\n\tkeywords = {Eye Movements & Tracking},\n\tmonth = {09},\n\ttitle = {Real time simulation of impaired vision in naturalistic settings with gaze-contingent display},\n\tyear = {2007}}\n\n
\n
\n\n\n
\n Effective management and treatment of glaucoma and other visual diseases depend on early diagnosis. However, early symptoms of many severe visual diseases often go unnoticed until a significant portion of the visual field is lost. The ability to simulate the visual consequences of the disease offers potential benefits for patients and clinical education as well as for public awareness of its signs and symptoms. Experiments using simulated visual field defects could identify changes in behaviour, for example during driving, that one uses to compensate at the early stages of the disease's development. Furthermore, by understanding how visual field defects affect performance in visual tasks, we can help develop new strategies to cope with other devastating diseases such as macular degeneration. A Gaze-Contingent Display (GCD) system was developed to simulate an arbitrary visual field in a virtual environment. The system can estimate real-time gaze direction and eye position in earth-fixed coordinates during relatively large head movement, and thus it can be used in immersive projection based VE systems like the CAVE™. Arbitrary visual fields are simulated via OpenGL and Shading Language capabilities and techniques that are supported by the GPU, thus enabling fast performance in real time. In order to simulate realistic visual defects, the system performs multiple image processing operations including change in acuity, brightness, color, glare and image distortion. The final component of the system simulates different virtual scenes that the participant can navigate through and explore. As a result, this system creates an experimental environment that could be useful for studying the effects of low vision on everyday tasks such as driving and navigation.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n The Importance of Flight Test and Evaluation in the Development of Airborne Technologies for Border Enforcement: Assessing Night Vision Goggle Performance in Security Applications.\n \n \n \n\n\n \n Allison, R.\n\n\n \n\n\n\n In CRSS/ASPRS 2007 Fall Conference, Our Common borders - Safety, Security and the Environment Through Remote Sensing. Ottawa, Canada, October 28th- November 1st 2007.\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Allison:2007vp,\n\taddress = {Ottawa, Canada},\n\tauthor = {Allison, R.S.},\n\tbooktitle = {CRSS/ASPRS 2007 Fall Conference, Our Common borders - Safety, Security and the Environment Through Remote Sensing},\n\tdate-added = {2011-05-09 11:12:18 -0400},\n\tdate-modified = {2011-05-11 11:59:49 -0400},\n\tkeywords = {Night Vision},\n\tmonth = {October 28th- November 1st},\n\ttitle = {The Importance of Flight Test and Evaluation in the Development of Airborne Technologies for Border Enforcement: Assessing Night Vision Goggle Performance in Security Applications},\n\tyear = {2007}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Adaptation to apparent motion-in-depth based on binocular cues.\n \n \n \n\n\n \n Sakano, Y., & Allison, R.\n\n\n \n\n\n\n In The Journal of the Vision Society of Japan (Winter Annual Meeting 2007 of The Vision Society of Japan (VSJ)), volume 58. Tokyo, Japan, Jan 31-Feb 2 2007.\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Sakano:2007hc,\n\tabstract = {We investigated whether motion aftereffect (MAE) in depth can be induced by adaptation to apparent motion-in-depth based on binocular cues: changing disparity and interocular velocity differences. The adaptation stimulus was a random-dot stereogram (RDS) in which the absolute disparity alternated every frame between two values while the dot distribution changed randomly every second frame. Thus, the stimulus contained a coherent interocular velocity difference signal to adapt motion-in-depth and a balanced changing disparity signal that should not induce a MAE in depth. The test stimulus was a stationary random-dot stereogram depicting a fronto-parallel plane. MAE in depth occurred in the opposite direction to the coherent interocular velocity difference based adapting stimulus, supporting the idea that adaptation to interocular velocity differences produces MAE in depth.},\n\taddress = {Tokyo, Japan},\n\tauthor = {Sakano, Y. and Allison, R.S.},\n\tbooktitle = {The Journal of the Vision Society of Japan (Winter Annual Meeting 2007 of The Vision Society of Japan (VSJ))},\n\tdate-added = {2011-05-06 16:39:56 -0400},\n\tdate-modified = {2013-12-28 14:59:48 +0000},\n\tkeywords = {Motion in depth},\n\tmonth = {Jan 31-Feb 2},\n\torganization = {Vision Society of Japan},\n\ttitle = {Adaptation to apparent motion-in-depth based on binocular cues},\n\tvolume = {58},\n\tyear = {2007}}\n\n
\n
\n\n\n
\n We investigated whether motion aftereffect (MAE) in depth can be induced by adaptation to apparent motion-in-depth based on binocular cues: changing disparity and interocular velocity differences. The adaptation stimulus was a random-dot stereogram (RDS) in which the absolute disparity alternated every frame between two values while the dot distribution changed randomly every second frame. Thus, the stimulus contained a coherent interocular velocity difference signal to adapt motion-in-depth and a balanced changing disparity signal that should not induce a MAE in depth. The test stimulus was a stationary random-dot stereogram depicting a fronto-parallel plane. MAE in depth occurred in the opposite direction to the coherent interocular velocity difference based adapting stimulus, supporting the idea that adaptation to interocular velocity differences produces MAE in depth.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Assessing Night Vision Goggle Performance in Security Applications.\n \n \n \n \n\n\n \n Allison, R.\n\n\n \n\n\n\n In CASI 54th Aeronautics Conference, pages 64-65. Toronto, Canada, 04 2007.\n \n\n\n\n
\n\n\n\n \n \n \"Assessing-1\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Allison:2007rp,\n\taddress = {Toronto, Canada},\n\tauthor = {Allison, R.S.},\n\tbooktitle = {CASI 54th Aeronautics Conference},\n\tdate-added = {2011-05-06 16:34:07 -0400},\n\tdate-modified = {2011-05-11 11:59:49 -0400},\n\tkeywords = {Night Vision},\n\tmonth = {04},\n\tpages = {64-65},\n\ttitle = {Assessing Night Vision Goggle Performance in Security Applications},\n\turl-1 = {https://percept.eecs.yorku.ca/papers/casi paper.pdf},\n\tyear = {2007}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Computerized Gaming Technology as an Effective Form of Vision Therapy for Convergence Insufficiency.\n \n \n \n \n\n\n \n Herriot, C., Carvelho, T., Allison, R., & Irving, E.\n\n\n \n\n\n\n In Sixth Canadian Optometry Conference on Vision Science. Waterloo, Canada, Dec 7th-9th 2007.\n \n\n\n\n
\n\n\n\n \n \n \"Computerized-1\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Herriot:2007gd,\n\tabstract = {Computerized Gaming Technology as an Effective Form of Vision Therapy for Convergence Insufficiency\nAuthors\nChristopher Herriot, School of Optometry, University of Waterloo\nTristan Carvelho, Computer Science and Engineering, York University\nRobert Allison, Computer Science and Engineering, York University\nElizabeth Irving, School of Optometry, University of Waterloo\n\nPurpose\nThe prevalence of convergence insufficiency (CI) has been estimated between 2.2 and 13\\%. CI is most often treated with some form of eye exercises. However, traditional forms of the exercises are tedious and boring, leading to poor patient compliance. The purpose of this pilot study is to investigate the efficacy and user acceptance of computerized gaming as a form of treatment for convergence insufficiency. \n\nMethods\nFour participants were selected for the study, with ages ranging from 11 to 34 years. Participants had to have corrected visual acuity of 6/12, stereo-acuity threshold \\leq 40 seconds of arc, near point of convergence (NPC) \\geq 6cm, exophoria greater at near than at far, and a positive fusional vergence (PFV) \\leq 15\\Delta or not meeting Sheard's criterion. Participants were asked to play a revised version of Pac-Man using a stereoscope to fuse two separate images. As the participant's convergence improved, the convergence demand was increased and operant conditioning paradigms were used to keep the participant motivated. Training was performed for 20 minutes each day, 5 days a week for 2 weeks. Participants were asked to complete a visual symptom questionnaire prior to their training and an acceptance questionnaire after completion.\n\nResults\nPrior to training the average NPC and PFV values were 12.9 +/- 7.6 cm and 13 +/- 4.2\\Delta respectively, and two participants reported visual symptoms of CI. Upon completion of the training, the average NPC decreased to 5.8 +/- 3.0 cm and the average PFV increased to 25 +/- 4.1\\Delta. The two symptomatic participants reported relief of their symptoms, and all participants reported that computerized vision therapy was easy to understand, entertaining, and motivating. \n\nConclusion\nComputerized gaming is a promising method to improve patient motivation and compliance. Further testing is required to directly compare its effectiveness to traditional methods.\n},\n\taddress = {Waterloo, Canada},\n\tauthor = {Herriot, C. and Carvelho, T. and Allison, R.S. and Irving, E.L.},\n\tbooktitle = {Sixth Canadian Optometry Conference on Vision Science},\n\tdate-added = {2011-05-06 16:30:31 -0400},\n\tdate-modified = {2012-07-02 22:39:14 -0400},\n\tkeywords = {Gaming for Vision Therapy},\n\tmonth = {Dec 7th-9th},\n\ttitle = {Computerized Gaming Technology as an Effective Form of Vision Therapy for Convergence Insufficiency},\n\turl-1 = {https://percept.eecs.yorku.ca/papers/COCVS abstract Herriot.doc},\n\tyear = {2007}}\n\n
\n
\n\n\n
\n Computerized Gaming Technology as an Effective Form of Vision Therapy for Convergence Insufficiency Authors Christopher Herriot, School of Optometry, University of Waterloo Tristan Carvelho, Computer Science and Engineering, York University Robert Allison, Computer Science and Engineering, York University Elizabeth Irving, School of Optometry, University of Waterloo Purpose The prevalence of convergence insufficiency (CI) has been estimated between 2.2 and 13%. CI is most often treated with some form of eye exercises. However, traditional forms of the exercises are tedious and boring, leading to poor patient compliance. The purpose of this pilot study is to investigate the efficacy and user acceptance of computerized gaming as a form of treatment for convergence insufficiency. Methods Four participants were selected for the study, with ages ranging from 11 to 34 years. Participants had to have corrected visual acuity of 6/12, stereo-acuity threshold ≤ 40 seconds of arc, near point of convergence (NPC) ≥ 6cm, exophoria greater at near than at far, and a positive fusional vergence (PFV) ≤ 15Δ or not meeting Sheard's criterion. Participants were asked to play a revised version of Pac-Man using a stereoscope to fuse two separate images. As the participant's convergence improved, the convergence demand was increased and operant conditioning paradigms were used to keep the participant motivated. Training was performed for 20 minutes each day, 5 days a week for 2 weeks. Participants were asked to complete a visual symptom questionnaire prior to their training and an acceptance questionnaire after completion. Results Prior to training the average NPC and PFV values were 12.9 +/- 7.6 cm and 13 +/- 4.2Δ respectively, and two participants reported visual symptoms of CI. Upon completion of the training, the average NPC decreased to 5.8 +/- 3.0 cm and the average PFV increased to 25 +/- 4.1Δ. The two symptomatic participants reported relief of their symptoms, and all participants reported that computerized vision therapy was easy to understand, entertaining, and motivating. Conclusion Computerized gaming is a promising method to improve patient motivation and compliance. Further testing is required to directly compare its effectiveness to traditional methods. \n
\n\n\n
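One inclusion criterion above, "not meeting Sheard's criterion", is simple arithmetic: the compensating fusional vergence reserve should be at least twice the heterophoria. A one-line check in Python (the helper name and example values are illustrative):

    def meets_sheard(phoria_pd, fusional_reserve_pd):
        """Sheard's criterion: compensating fusional reserve >= 2 x phoria,
        both expressed in prism dioptres."""
        return fusional_reserve_pd >= 2.0 * phoria_pd

    # e.g. 8 prism dioptres of near exophoria with a PFV of only 12 fails:
    print(meets_sheard(8.0, 12.0))  # False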
\n\n\n
\n \n\n \n \n \n \n \n \n Experimental validation of an NVD parametric model.\n \n \n \n \n\n\n \n Thomas, P., Jennings, S., Macuda, T., Allison, R., & Hornsey, R.\n\n\n \n\n\n\n In Advanced Deployable Day/Night Simulation (ADDNS) Symposium, pages 43-45. Toronto, Canada, November 13th-14th 2007.\n \n\n\n\n
\n\n\n\n \n \n \"Experimental-1\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Thomas:2007tn,\n\taddress = {Toronto, Canada},\n\tauthor = {Thomas, P.J. and Jennings, S. and Macuda, T. and Allison, R.S. and Hornsey, R.},\n\tbooktitle = {Advanced Deployable Day/Night Simulation (ADDNS) Symposium},\n\tdate-added = {2011-05-06 16:19:43 -0400},\n\tdate-modified = {2011-05-18 15:56:41 -0400},\n\tkeywords = {Night Vision},\n\tmonth = {November 13th-14th},\n\tpages = {43-45},\n\ttitle = {Experimental validation of an NVD parametric model},\n\turl-1 = {https://percept.eecs.yorku.ca/papers/Allison- Experimental validation of an NVD Parametric model.pdf},\n\tyear = {2007}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Psychophysics of night vision device halo.\n \n \n \n \n\n\n \n Allison, R., Brandwood, T., Vinnikov, M., Zacher, J., Jennings, S., Macuda, T., Thomas, P., & Palmisano, S.\n\n\n \n\n\n\n In Advanced Deployable Day/Night Simulation (ADDNS) Symposium, pages 27-29. Toronto, Canada, November 13th-14th 2007.\n \n\n\n\n
\n\n\n\n \n \n \"Psychophysics-1\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Allison:2007hc,\n\taddress = {Toronto, Canada},\n\tauthor = {Allison, R.S. and Brandwood, T. and Vinnikov, M. and Zacher, J.E. and Jennings, S. and Macuda, T. and Thomas, P.J. and Palmisano, S.A.},\n\tbooktitle = {Advanced Deployable Day/Night Simulation (ADDNS) Symposium},\n\tdate-added = {2011-05-06 16:17:47 -0400},\n\tdate-modified = {2011-05-18 16:11:30 -0400},\n\tkeywords = {Night Vision},\n\tmonth = {November 13th-14th},\n\tpages = {27-29},\n\ttitle = {Psychophysics of night vision device halo},\n\turl-1 = {https://percept.eecs.yorku.ca/papers/Allison- Psychophysics of Night Vision Device Halo.pdf},\n\tyear = {2007}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Effect of scene transitions on trans-saccadic change detection in natural scenes.\n \n \n \n \n\n\n \n Sadr, S., Allison, R., & Vinnikov, M.\n\n\n \n\n\n\n In Perception, volume 36, pages 30-30. 2007.\n \n\n\n\n
\n\n\n\n \n \n \"Effect-1\n  \n \n \n \"Effect-2\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{allison200730-30,\n\tauthor = {Sadr, S. and Allison, R.S. and Vinnikov, M.},\n\tbooktitle = {Perception},\n\tdate-modified = {2011-09-12 21:59:21 -0400},\n\tjournal = {Perception},\n\tkeywords = {Eye Movements & Tracking},\n\tpages = {30-30},\n\ttitle = {Effect of scene transitions on trans-saccadic change detection in natural scenes},\n\turl-1 = {https://percept.eecs.yorku.ca/papers/Effects of scene transition.pdf},\n\turl-2 = {http://www.perceptionweb.com/abstract.cgi?id=v070786},\n\tvolume = {36},\n\tyear = {2007}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n The visual control of walking: do we go with the (optic) flow?.\n \n \n \n \n\n\n \n Guterman, P. S., Allison, R. S., & Rushton, S. K.\n\n\n \n\n\n\n In Journal of Vision, volume 7, pages 1017-1017. 2007.\n \n\n\n\n
\n\n\n\n \n \n \"The-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{allison20071017-1017,\n\tabstract = {What visual information guides locomotion? Optic flow, the global pattern of motion at the vantage point of the eye, specifies the direction of self-motion, and could be used to control walking. Alternatively, we could walk in the perceived direction of a target. Recent evidence suggests that the type of visual environment can influence steering behaviour. However, controversy remains as to whether this demonstrates direct, online use of flow or indirect influence on context and recalibration of direction. The current literature is complicated by methodological as well as theoretical differences between prism-based and head mounted display based studies. Both techniques have well-known limitations that have complicated comparisons across studies. Here we tested undergraduate students (n = 6) using an immersive virtual environment, where the heading specified by flow was displaced by $0^{\\circ}$, $\\pm 5^{\\circ}$ and $\\pm 10^{\\circ}$ from the direction of the target through the virtual environment or prism displacement. Observers walked (stepped in-place) to a target in five virtual environments, which consisted of a plain gray or textured ground; blue sky; and zero, one, ten, or twenty objects in it. The distance to the target from the start position was 20 m, nearly double that of comparable studies. For all displacement conditions, observers walked in the perceived direction of the target, and there was no significant main effect of the environment. The findings suggest that egocentric direction is used to guide locomotion on foot, regardless of the number of objects that enhance flow in the environment.},\n\tauthor = {Guterman, Pearl S. and Allison, Robert S. and Rushton, Simon K.},\n\tbooktitle = {Journal of Vision},\n\tdate-modified = {2012-07-02 17:52:19 -0400},\n\tdoi = {10.1167/7.9.1017},\n\tjournal = {Journal of Vision},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tnumber = {9},\n\tpages = {1017-1017},\n\ttitle = {The visual control of walking: do we go with the (optic) flow?},\n\turl-1 = {http://dx.doi.org/10.1167/7.9.1017},\n\tvolume = {7},\n\tyear = {2007},\n\turl-1 = {https://doi.org/10.1167/7.9.1017}}\n\n
\n
\n\n\n
\n What visual information guides locomotion? Optic flow, the global pattern of motion at the vantage point of the eye, specifies the direction of self-motion, and could be used to control walking. Alternatively, we could walk in the perceived direction of a target. Recent evidence suggests that the type of visual environment can influence steering behaviour. However, controversy remains as to whether this demonstrates direct, online use of flow or indirect influence on context and recalibration of direction. The current literature is complicated by methodological as well as theoretical differences between prism-based and head mounted display based studies. Both techniques have well-known limitations that have complicated comparisons across studies. Here we tested undergraduate students (n = 6) using an immersive virtual environment, where the heading specified by flow was displaced by $0^{∘}$, $± 5^{∘}$ and $± 10^{∘}$ from the direction of the target through the virtual environment or prism displacement. Observers walked (stepped in-place) to a target in five virtual environments, which consisted of a plain gray or textured ground; blue sky; and zero, one, ten, or twenty objects in it. The distance to the target from the start position was 20 m, nearly double that of comparable studies. For all displacement conditions, observers walked in the perceived direction of the target, and there was no significant main effect of the environment. The findings suggest that egocentric direction is used to guide locomotion on foot, regardless of the number of objects that enhance flow in the environment.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Horizontal-disparity processing in the presence of vertical misalignment: The role of monoptic depth.\n \n \n \n \n\n\n \n Fukuda, K., Wilcox, L. M., Allison, R., & Howard, I. P.\n\n\n \n\n\n\n In Perception, volume 36, pages 203-203. 2007.\n \n\n\n\n
\n\n\n\n \n \n \"Horizontal-disparity-1\n  \n \n \n \"Horizontal-disparity-2\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{allison2007203-203,\n\tabstract = {Horizontal-disparity processing in the presence of vertical misalignment: The role of monoptic depth\n\nK Fukuda, L M Wilcox, R S Allison, I P Howard\n\nDepth perception from stereopsis is thought to be resilient to vertical misalignments of up to $4^{\\circ}$ (Ogle, 1955 Archives of Ophthalmology 53 495 ff; Mitchell, 1970 Vision Research 10 145 - 162). We have replicated these results, and assessed the assumption that horizontal disparity is responsible for depth in such stimuli. A horizontal line, which extended the width of the display, was inserted between the vertically misaligned horizontally disparate targets. Surprisingly, this had no effect on depth-discrimination performance. We repeated the study with only one half-image (a monoptic target) and the central line. Depth discrimination was above chance for all observers, suggesting that previous results were not due to horizontal disparity, but to the retinal position of the stimuli (Kaye, 1978 Vision Research 18 1013 - 1022; Wilcox et al, 2007 Vision Research 47 in press). Tolerance to vertical misalignment has been used as evidence against an epipolar constraint in human stereopsis; the presence of monoptic depth cues in such stimuli suggests that the issue is unresolved.\n[Supported by Natural Sciences and Engineering Research Council of Canada and CIHR Training Grant in Vision Health Research.] },\n\tauthor = {Fukuda, K. and Wilcox, L. M. and Allison, R.S. and Howard, I. P.},\n\tbooktitle = {Perception},\n\tdate-modified = {2011-09-12 22:04:42 -0400},\n\tjournal = {Perception},\n\tkeywords = {Depth perception},\n\tpages = {203-203},\n\ttitle = {Horizontal-disparity processing in the presence of vertical misalignment: The role of monoptic depth},\n\turl-1 = {https://percept.eecs.yorku.ca/papers/fukuda ECVP 2007.doc},\n\turl-2 = {http://www.perceptionweb.com/abstract.cgi?id=v070330},\n\tvolume = {36},\n\tyear = {2007}}\n\n
\n
\n\n\n
\n Horizontal-disparity processing in the presence of vertical misalignment: The role of monoptic depth K Fukuda, L M Wilcox, R S Allison, I P Howard Depth perception from stereopsis is thought to be resilient to vertical misalignments of up to $4^{∘}$ (Ogle, 1955 Archives of Ophthalmology 53 495 ff; Mitchell, 1970 Vision Research 10 145 - 162). We have replicated these results, and assessed the assumption that horizontal disparity is responsible for depth in such stimuli. A horizontal line, which extended the width of the display, was inserted between the vertically misaligned horizontally disparate targets. Surprisingly, this had no effect on depth-discrimination performance. We repeated the study with only one half-image (a monoptic target) and the central line. Depth discrimination was above chance for all observers, suggesting that previous results were not due to horizontal disparity, but to the retinal position of the stimuli (Kaye, 1978 Vision Research 18 1013 - 1022; Wilcox et al, 2007 Vision Research 47 in press). Tolerance to vertical misalignment has been used as evidence against an epipolar constraint in human stereopsis; the presence of monoptic depth cues in such stimuli suggests that the issue is unresolved. [Supported by Natural Sciences and Engineering Research Council of Canada and CIHR Training Grant in Vision Health Research.] \n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Binocular depth discrimination and estimation beyond interaction space.\n \n \n \n \n\n\n \n Allison, R., Gillam, B., & Vecellio, E.\n\n\n \n\n\n\n In Journal of Vision, volume 7, pages 817-817. 2007.\n \n\n\n\n
\n\n\n\n \n \n \"Binocular-1\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{allison2007817-817,\n\tabstract = {The benefits of binocular vision have been debated throughout the history of vision science yet few studies have considered its contribution beyond a viewing distance of a few metres. What benefit, if any, does binocular vision confer for distance vision? Elementary texts commonly assert that stereopsis is ineffective beyond modest distances despite theoretical analysis suggesting a much larger effective range. We compared monocular and binocular performance on depth interval estimation and discrimination tasks at and beyond 4.5m. Stimuli consisted of a combination of: 1) the reference stimulus, a smoothly finished wooden architectural panel, mounted upright, and facing the subject, 2) the test stimulus, a thin rod that could be precisely moved in depth and 3) a homogeneous background. An aperture prevented view of the top and bottom of the stimuli. Subjects made verbal, signed estimates in cm of the depth between the test and reference stimuli. On each trial, the depth was set between $\\pm 100$cm. Observers viewed the displays either monocularly or binocularly from 4.5, 9.0 or 18.0m. Depth discrimination at 9.0 m was also evaluated using adaptive staircase procedures. Regression analysis provided measures of the scaling between perceived depth and actual depth (the `gain') and the precision. Under monocular conditions, perceived depth was significantly compressed. Binocular depth estimates were much nearer to veridical although also compressed. Both raw precision measures and those normalized by the gain were much smaller for binocular compared to monocular conditions (ratios between 2.1 and 48). We confirm that stereopsis supports reliable depth discriminations beyond typical laboratory distances. Furthermore, binocular vision can significantly improve both the accuracy and precision of depth estimation to at least 18m. We will discuss additional experiments to extend these results to larger viewing distances and to evaluate the contribution of stereopsis under rich cue conditions.},\n\tauthor = {Allison, Robert and Gillam, Barbara and Vecellio, Elia},\n\tbooktitle = {Journal of Vision},\n\tdate-modified = {2012-07-02 17:50:04 -0400},\n\tdoi = {10.1167/7.9.817},\n\tjournal = {Journal of Vision},\n\tkeywords = {Stereopsis},\n\tnumber = {9},\n\tpages = {817-817},\n\ttitle = {Binocular depth discrimination and estimation beyond interaction space},\n\turl-1 = {http://dx.doi.org/10.1167/7.9.817},\n\tvolume = {7},\n\tyear = {2007},\n\turl-1 = {https://doi.org/10.1167/7.9.817}}\n\n
\n
\n\n\n
\n The benefits of binocular vision have been debated throughout the history of vision science yet few studies have considered its contribution beyond a viewing distance of a few metres. What benefit, if any, does binocular vision confer for distance vision? Elementary texts commonly assert that stereopsis is ineffective beyond modest distances despite theoretical analysis suggesting a much larger effective range. We compared monocular and binocular performance on depth interval estimation and discrimination tasks at and beyond 4.5m. Stimuli consisted of a combination of: 1) the reference stimulus, a smoothly finished wooden architectural panel, mounted upright, and facing the subject, 2) the test stimulus, a thin rod that could be precisely moved in depth and 3) a homogeneous background. An aperture prevented view of the top and bottom of the stimuli. Subjects made verbal, signed estimates in cm of the depth between the test and reference stimuli. On each trial, the depth was set between $± 100$cm. Observers viewed the displays either monocularly or binocularly from 4.5, 9.0 or 18.0m. Depth discrimination at 9.0 m was also evaluated using adaptive staircase procedures. Regression analysis provided measures of the scaling between perceived depth and actual depth (the `gain') and the precision. Under monocular conditions, perceived depth was significantly compressed. Binocular depth estimates were much nearer to veridical although also compressed. Both raw precision measures and those normalized by the gain were much smaller for binocular compared to monocular conditions (ratios between 2.1 and 48). We confirm that stereopsis supports reliable depth discriminations beyond typical laboratory distances. Furthermore, binocular vision can significantly improve both the accuracy and precision of depth estimation to at least 18m. We will discuss additional experiments to extend these results to larger viewing distances and to evaluate the contribution of stereopsis under rich cue conditions.\n
\n\n\n
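The 'gain' and precision measures described above come from ordinary linear regression of perceived depth on actual depth. A short sketch of that analysis in Python (the numbers are invented to mimic a compressed monocular pattern; they are not data from the study):

    import numpy as np

    actual = np.array([-100., -60., -20., 20., 60., 100.])  # set depths (cm)
    perceived = np.array([-55., -30., -12., 9., 34., 52.])  # hypothetical estimates

    gain, intercept = np.polyfit(actual, perceived, 1)      # slope is the 'gain'
    resid_sd = np.std(perceived - (gain * actual + intercept), ddof=2)
    print(f"gain = {gain:.2f}; precision (residual SD) = {resid_sd:.1f} cm")
    print(f"gain-normalized precision = {resid_sd / gain:.1f} cm")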
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n inproceedings\n \n \n (5)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n Toward the "Cognitive Cockpit": Flight Test Platforms and Methods for Monitoring Pilot Mental State.\n \n \n \n\n\n \n Craig, G., Erdos, E., Carignan, S., Jennings, S., Swail, C., Ellis, K., Gubbels, A. W., Macuda, T., Schnell, T., Poolman, P., Allison, R., & Cheung, R.\n\n\n \n\n\n\n In 78th Annual Scientific Meeting of the Aerospace Medical Association, 05 2007. Aerospace Medical Association\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@inproceedings{Craig:2007lg,\n\tauthor = {Craig, G. and Erdos, E. and Carignan, S. and Jennings, S. and Swail, C. and Ellis, K. and Gubbels, A. W. and Macuda, T. and Schnell, T. and Poolman, P. and Allison, R.S. and Cheung, R.},\n\tbooktitle = {78th Annual Scientific Meeting of the Aerospace Medical Association},\n\tdate-added = {2011-05-09 13:56:59 -0400},\n\tdate-modified = {2011-05-18 16:23:28 -0400},\n\tkeywords = {Neural Avionics},\n\tmonth = {05},\n\torganization = {Aerospace Medical Association},\n\ttitle = {Toward the ``Cognitive Cockpit'': Flight Test Platforms and Methods for Monitoring Pilot Mental State},\n\tyear = {2007}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Assessing Night Vision Goggle Performance in Security Applications.\n \n \n \n \n\n\n \n Allison, R., Guterman, P., Jennings, S., Macuda, T., Sakano, Y., Thomas, P., & Zacher, J.\n\n\n \n\n\n\n In Aerospace in Canada: Research and Innovation for the Global Marketplace, 2007 AERO Conference (Flight Test Methods Symposium), Toronto, Canada, April 24th-26th 2007. \n \n\n\n\n
\n\n\n\n \n \n \"Assessing-1\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@inproceedings{Allison:2007zl,\n\tabstract = {Police and border security operations are an important and growing application of night vision devices (NVDs). NVDs improve visibility at night but suffer from a variety of perceptual artifacts and human factors issues. In a series of helicopter-based flight trials we analyzed subject performance on model tasks based on typical security applications. Subjects performed the tasks under conditions of unaided daytime vision, unaided nighttime vision or image intensified nighttime\nvision. The tasks included directed search over open and forested terrain, detection and identification of a temporary landing zone and search/tracking of a moving vehicle marked with a covert IR marker. The results of this study confirm that NVDs can provide significant operational value but also illustrate the limitations of the technology.},\n\taddress = {Toronto, Canada},\n\tauthor = {Allison, R.S. and Guterman, P. and Jennings, S. and Macuda, T. and Sakano, Y. and Thomas, P. and Zacher, J.},\n\tbooktitle = {Aerospace in Canada: Research and Innovation for the Global Marketplace, 2007 AERO Conference (Flight Test Methods Symposium)},\n\tdate-added = {2011-05-06 13:11:02 -0400},\n\tdate-modified = {2011-05-18 15:43:56 -0400},\n\tkeywords = {Night Vision},\n\tmonth = {April 24th-26th},\n\ttitle = {Assessing Night Vision Goggle Performance in Security Applications},\n\turl-1 = {https://percept.eecs.yorku.ca/papers/casi paper.pdf},\n\tyear = {2007}}\n\n
\n
\n\n\n
\n Police and border security operations are an important and growing application of night vision devices (NVDs). NVDs improve visibility at night but suffer from a variety of perceptual artifacts and human factors issues. In a series of helicopter-based flight trials we analyzed subject performance on model tasks based on typical security applications. Subjects performed the tasks under conditions of unaided daytime vision, unaided nighttime vision or image intensified nighttime vision. The tasks included directed search over open and forested terrain, detection and identification of a temporary landing zone and search/tracking of a moving vehicle marked with a covert IR marker. The results of this study confirm that NVDs can provide significant operational value but also illustrate the limitations of the technology.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Effects of image intensifier halo on perceived layout.\n \n \n \n \n\n\n \n Zacher, J. E., Brandwood, T., Thomas, P., Vinnikov, M., Xu, G., Jennings, S., Macuda, T., Palmisano, S. A., Craig, G., Wilcox, L., & Allison, R.\n\n\n \n\n\n\n In Brown, R. W., Reese, C. E., Marasco, P. L., & Harding, T. H., editor(s), Head- and Helmet-Mounted Displays XII: Design and Applications, volume 6557, of Proceedings of the Society of Photo-Optical Instrumentation Engineers (SPIE), pages U5570-U5570, Bellingham, 2007. Spie-Int Soc Optical Engineering\n \n\n\n\n
\n\n\n\n \n \n \"Effects-1\n  \n \n \n \"Effects-2\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@inproceedings{allison2007U5570-U5570,\n\tabstract = {Night vision devices (NVDs) or night-vision goggles (NVGs) based on image intensifiers improve nighttime visibility and extend night operations for military and increasingly civil aviation. However, NVG imagery is not equivalent to daytime vision and impaired depth and motion perception has been noted. One potential cause of impaired perceptions of space and environmental layout is NVG halo, where bright light sources appear to be surrounded by a disc-like halo. In this study we measured the characteristics of NVG halo psychophysically and objectively and then evaluated the influence of halo on perceived environmental layout in a simulation experiment. Halos are generated in the device and are not directly related to the spatial layout of the scene. We found that, when visible, halo image (i.e. angular) size was only weakly dependent on both source intensity and distance although halo intensity did vary with effective source intensity. The size of halo images surrounding light sources is independent of the source distance and thus does not obey the normal laws of perspective. In simulation experiments we investigated the effect of NVG halo on judgements of observer attitude with respect to the ground during simulated flight. We discuss the results in terms of NVG design and of the ability of human operators to compensate for perceptual distortions.},\n\taddress = {Bellingham},\n\tauthor = {Zacher, J. E. and Brandwood, T. and Thomas, P. and Vinnikov, M. and Xu, G. and Jennings, S. and Macuda, T. and Palmisano, S. A. and Craig, G. and Wilcox, L. and Allison, R.S.},\n\tbooktitle = {Head- and Helmet-Mounted Displays XII: Design and Applications},\n\tdate-modified = {2012-07-02 22:23:11 -0400},\n\tdoi = {10.1117/12.719892},\n\teditor = {Brown, R. W. and Reese, C. E. and Marasco, P. L. and Harding, T. H.},\n\tkeywords = {Night Vision},\n\tpages = {U5570-U5570},\n\tpublisher = {Spie-Int Soc Optical Engineering},\n\tseries = {Proceedings of the Society of Photo-Optical Instrumentation Engineers (SPIE)},\n\ttitle = {Effects of image intensifier halo on perceived layout},\n\turl-1 = {http://dx.doi.org/10.1117/12.719892},\n\turl-2 = {http://dx.doi.org/10.1117/12.719892},\n\tvolume = {6557},\n\tyear = {2007},\n\turl-1 = {https://doi.org/10.1117/12.719892}}\n\n
\n
\n\n\n
\n Night vision devices (NVDs) or night-vision goggles (NVGs) based on image intensifiers improve nighttime visibility and extend night operations for military and increasingly civil aviation. However, NVG imagery is not equivalent to daytime vision and impaired depth and motion perception has been noted. One potential cause of impaired perceptions of space and environmental layout is NVG halo, where bright light sources appear to be surrounded by a disc-like halo. In this study we measured the characteristics of NVG halo psychophysically and objectively and then evaluated the influence of halo on perceived environmental layout in a simulation experiment. Halos are generated in the device and are not directly related to the spatial layout of the scene. We found that, when visible, halo image (i.e. angular) size was only weakly dependent on both source intensity and distance although halo intensity did vary with effective source intensity. The size of halo images surrounding light sources is independent of the source distance and thus does not obey the normal laws of perspective. In simulation experiments we investigated the effect of NVG halo on judgements of observer attitude with respect to the ground during simulated flight. We discuss the results in terms of NVG design and of the ability of human operators to compensate for perceptual distortions.\n
\n\n\n
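The key geometric point, that halo originates in the intensifier and so keeps a roughly constant angular size while a physical source obeys perspective, is easy to make concrete in Python (all numbers illustrative):

    import numpy as np

    distances_m = np.array([10.0, 50.0, 250.0])
    source_diameter_m = 0.2  # physical light fixture (illustrative)
    halo_deg = 1.5           # device-generated halo, roughly distance-invariant

    # A real object's angular size follows perspective: theta = 2*atan(s / 2d).
    theta_deg = np.degrees(2 * np.arctan(source_diameter_m / (2 * distances_m)))
    for d, t in zip(distances_m, theta_deg):
        print(f"{d:5.0f} m: source subtends {t:.3f} deg; halo stays ~{halo_deg} deg")

A halo that does not shrink with distance therefore gives a misleading cue to the distance and size of the light source, which is the layout distortion probed in the simulation experiment.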
\n\n\n
Variability-aware latency amelioration in distributed environments. Tumanov, A., Allison, R., & Stuerzlinger, W. In Sherman, W., Lin, M., & Steed, A., editor(s), Proceedings of IEEE Virtual Reality 2007, pages 123-130, New York, 2007. IEEE.

@inproceedings{allison2007123-130,
  address = {New York},
  author = {Tumanov, A. and Allison, R.S. and Stuerzlinger, W.},
  booktitle = {Proceedings of IEEE Virtual Reality 2007},
  doi = {10.1109/VR.2007.352472},
  editor = {Sherman, W. and Lin, M. and Steed, A.},
  keywords = {Augmented & Virtual Reality},
  pages = {123-130},
  publisher = {IEEE},
  title = {Variability-aware latency amelioration in distributed environments},
  url = {https://doi.org/10.1109/VR.2007.352472},
  year = {2007}}

Application designers of collaborative distributed Virtual Environments must account for the influence of the network connection and its detrimental effects on user performance. Based upon analysis and classification of existing latency compensation techniques, this paper introduces a novel approach to latency amelioration in the form of a two-tier predictor-estimator framework. The technique is variability-aware due to its proactive sender-side prediction of a pose a variable time into the future. The prediction interval required is estimated based on current and past network delay characteristics. This latency estimate is subsequently used by a Kalman Filter-based predictor to replace the measurement event with a predicted pose that matches the event's arrival time at the receiving workstation. The compensation technique was evaluated in a simulation through an offline playback of real head motion data and network delay traces collected under a variety of real network conditions. The experimental results indicate that the variability-aware approach significantly outperforms a state-of-the-art one, which assumes a constant system delay.
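The two-tier idea above is easy to illustrate. The following is a minimal sketch, not the paper's implementation: a constant-velocity Kalman filter tracks one pose coordinate, recent one-way delay samples are turned into a variable prediction interval, and the filtered state is extrapolated that far ahead. The class name, the noise settings, and the mean-plus-margin delay rule are illustrative assumptions.

import numpy as np

class PosePredictor:
    """Constant-velocity Kalman filter over one pose coordinate (sketch)."""

    def __init__(self, q=1e-3, r=1e-2):
        self.x = np.zeros(2)          # state: [pose, velocity]
        self.P = np.eye(2)            # state covariance
        self.Q = q * np.eye(2)        # process noise (assumed)
        self.R = r                    # measurement noise (assumed)

    def update(self, z, dt):
        # Predict forward by dt, then correct with the measured pose z.
        F = np.array([[1.0, dt], [0.0, 1.0]])
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.Q
        H = np.array([1.0, 0.0])
        S = H @ self.P @ H + self.R
        K = self.P @ H / S            # Kalman gain
        self.x = self.x + K * (z - H @ self.x)
        self.P = (np.eye(2) - np.outer(K, H)) @ self.P

    def predict_ahead(self, horizon):
        # Extrapolate the filtered state `horizon` seconds into the future.
        F = np.array([[1.0, horizon], [0.0, 1.0]])
        return (F @ self.x)[0]

def estimate_delay(recent_delays, safety=1.0):
    # Variable prediction interval: mean of recent one-way delays plus a
    # margin of `safety` standard deviations (an assumed rule, not the paper's).
    d = np.asarray(recent_delays, dtype=float)
    return float(d.mean() + safety * d.std())

For example, after feeding the filter timestamped pose samples via update(), estimate_delay([0.040, 0.055, 0.048]) gives the lookahead passed to predict_ahead(), so the transmitted pose matches the event's expected arrival time at the receiver.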
Comparison of an NVG model with experiments to elucidate temporal behaviour. Thomas, P. J., Allison, R., Jennings, S., Macuda, T., Zacher, J., Mehbratu, H., & Hornsey, R. In Brown, R. W., Reese, C. E., Marasco, P. L., & Harding, T. H., editor(s), Proceedings of SPIE - The International Society for Optical Engineering, volume 6557, pages 1-8, 2007. International Society for Optics and Photonics.

@inproceedings{allison20071-8,
  author = {Thomas, P. J. and Allison, R.S. and Jennings, S. and Macuda, T. and Zacher, J. and Mehbratu, H. and Hornsey, R.},
  booktitle = {Proceedings of SPIE - The International Society for Optical Engineering},
  doi = {10.1117/12.719685},
  editor = {Randall W. Brown and Colin E. Reese and Peter L. Marasco and Thomas H. Harding},
  keywords = {Night Vision},
  organization = {International Society for Optics and Photonics},
  pages = {1-8},
  title = {Comparison of an NVG model with experiments to elucidate temporal behaviour},
  url = {https://doi.org/10.1117/12.719685},
  volume = {6557},
  year = {2007}}

Expected temporal effects in a night vision goggle (NVG) include the fluorescence time constant, charge depletion at high signal levels, the response time of the automatic gain control (AGC) and other internal modulations in the NVG. There is also the possibility of physical damage or other non-reversible effects in response to large transient signals. To study the temporal behaviour of an NVG, a parametric Matlab model has been created. Of particular interest in the present work was the variation of NVG gain, induced by its automatic gain control (AGC), after a short, intense pulse of light. To verify the model, the reduction of gain after a strong pulse was investigated experimentally using a simple technique. Preliminary laboratory measurements were performed using this technique. The experimental methodology is described, along with preliminary validation data.

Keywords: Night Vision Goggles, NVG, automatic gain control, AGC, modeling, temporal behaviour
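The AGC behaviour studied here can be caricatured in a few lines. Below is a toy first-order model, not the parametric Matlab model from the paper: the gain relaxes toward an intensity-dependent target with an assumed time constant, so a short, intense pulse depresses the gain, which then recovers in the dark. All parameter values are placeholders.

import numpy as np

def simulate_agc(intensity, dt=1e-3, tau=0.05, g_max=1.0, i_ref=0.1):
    # First-order relaxation of AGC gain toward an intensity-dependent
    # target: brighter input drives the target gain down (assumed form).
    gains = np.empty_like(intensity)
    g = g_max
    for k, i in enumerate(intensity):
        target = g_max / (1.0 + i / i_ref)
        g += (target - g) * dt / tau
        gains[k] = g
    return gains

# A short, intense pulse of light on a dim background:
t = np.arange(0.0, 1.0, 1e-3)
intensity = np.where((t > 0.2) & (t < 0.25), 10.0, 0.01)
g = simulate_agc(intensity)
# After the pulse ends, the gain recovers toward its dim-light value with
# roughly tau dynamics, the quantity probed by the pulse experiment above.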
techreport (1)
Successful Flight Trials Gatekeeper Viperfish Digital Video Recorder: Technology Transfer through National Laboratory Infrastructure. Macuda, T., Craig, G., Swail, C., Gubbels, A. W., Ellis, K., Jennings, S., Carignan, S., Erdos, R., Allison, R., Byrne, A., Schnell, T., & Edgar, T. Technical Report FRL-2007-0007, National Research Council Canada, Institute for Aerospace Research, 2007.

@techreport{allison2007,
  author = {Macuda, T. and Craig, G. and Swail, C. and Gubbels, A. W. and Ellis, K. and Jennings, S. and Carignan, S. and Erdos, R. and Allison, R.S. and Byrne, A. and Schnell, T. and Edgar, T.},
  institution = {National Research Council Canada, Institute for Aerospace Research},
  keywords = {Misc.},
  note = {NRC Institute for Aerospace Research},
  number = {FRL-2007-0007},
  title = {Successful Flight Trials Gatekeeper Viperfish Digital Video Recorder: Technology Transfer through National Laboratory Infrastructure},
  year = {2007}}
2006 (4)
article (4)
Personal space in virtual reality. Wilcox, L., Allison, R., Elfassy, S., & Grelik, C. ACM Transactions on Applied Perception (TAP), 3(4): 412-418. 2006.

@article{Wilcox:2006am,
  author = {Wilcox, L. and Allison, R.S. and Elfassy, S. and Grelik, C.},
  doi = {10.1145/1190036.1190041},
  journal = {{ACM} Transactions on Applied Perception (TAP)},
  keywords = {Augmented & Virtual Reality},
  number = {4},
  pages = {412-418},
  title = {Personal space in virtual reality},
  url = {https://doi.org/10.1145/1190036.1190041},
  volume = {3},
  year = {2006}}

Improving the sense of ``presence'' is a common goal of three-dimensional (3D) display technology for film, television, and virtual reality. However, there are instances in which 3D presentations may elicit unanticipated negative responses. For example, it is well established that violations of interpersonal space cause discomfort in real-world situations. Here we ask if people respond similarly when viewing life-sized stereoscopic images. Observers rated their level of comfort in response to animate and inanimate objects in live and virtual (stereoscopic projection) viewing conditions. Electrodermal activity was also recorded to monitor their physiological response to these stimuli. Observers exhibited significant negative reactions to violations of interpersonal space in stereoscopic 3D displays, which were equivalent to those experienced in the natural environment. These data have important implications for the creation of 3D media and the use of virtual reality systems.
The hedgehog: a novel optical tracking method for spatially immersive displays. Vorozcovs, A., Stuerzlinger, W., Hogue, A., & Allison, R. Presence, 15(1): 108-21. 2006.

@article{allison2006108-21,
  author = {Vorozcovs, A. and Stuerzlinger, W. and Hogue, A. and Allison, R.S.},
  doi = {10.1162/pres.2006.15.1.108},
  journal = {Presence},
  keywords = {Augmented & Virtual Reality},
  number = {1},
  pages = {108-21},
  title = {The hedgehog: a novel optical tracking method for spatially immersive displays},
  url = {https://doi.org/10.1162/pres.2006.15.1.108},
  volume = {15},
  year = {2006}}

Existing commercial technologies do not adequately meet the requirements for tracking in fully enclosed virtual reality displays. We present a novel six degree of freedom tracking system, the hedgehog, which overcomes several limitations inherent in existing sensors and tracking technology. The system reliably estimates the pose of the user's head with high resolution and low spatial distortion. Light emitted from an arrangement of lasers projects onto the display walls. An arrangement of cameras images the walls and the two-dimensional centroids of the projections are tracked to estimate the pose of the device. The system is able to handle ambiguous laser projection configurations, static and dynamic occlusions of the lasers, and incorporates an auto-calibration mechanism due to the use of the SCAAT (single constraint at a time) algorithm. A prototype system was evaluated relative to a state-of-the-art motion tracker and showed comparable positional accuracy (1-2 mm RMS) and significantly better absolute angular accuracy (0.1° RMS).
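For intuition about how the wall projections constrain pose, here is a simplified batch solve; the paper itself uses a SCAAT Kalman filter, so this least-squares stand-in is only a sketch. It assumes the laser directions in the device frame and the wall plane hit by each beam are known, and that the spot centroids have already been lifted to 3D points on the walls.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def spot_on_plane(p, R, d, n, c):
    # Intersect the ray p + t * (R @ d) with the wall plane {x : n . x = c}.
    dir_w = R @ d
    t = (c - n @ p) / (n @ dir_w)
    return p + t * dir_w

def solve_pose(dirs, planes, spots, x0=np.zeros(6)):
    # x = [position (3), rotation vector (3)]; minimize the mismatch
    # between predicted and measured laser spot positions on the walls.
    def residuals(x):
        p = x[:3]
        R = Rotation.from_rotvec(x[3:]).as_matrix()
        res = [spot_on_plane(p, R, d, n, c) - s
               for d, (n, c), s in zip(dirs, planes, spots)]
        return np.concatenate(res)
    return least_squares(residuals, x0).x

Unlike this batch solve, the SCAAT formulation folds in one spot measurement at a time, which is what lets the real system ride through occluded or ambiguous beams.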
Effect of decorrelation on 3-D grating detection with static and dynamic random-dot stereograms. Palmisano, S., Allison, R., & Howard, I. P. Vision Research, 46(1-2): 57-71. 2006.

@article{allison200657-71,
  author = {Palmisano, S. and Allison, R.S. and Howard, I. P.},
  doi = {10.1016/j.visres.2005.10.005},
  journal = {Vision Research},
  keywords = {Stereopsis},
  number = {1-2},
  pages = {57-71},
  title = {Effect of decorrelation on 3-D grating detection with static and dynamic random-dot stereograms},
  url = {https://doi.org/10.1016/j.visres.2005.10.005},
  volume = {46},
  year = {2006}}

Three experiments examined the effects of image decorrelation on the stereoscopic detection of sinusoidal depth gratings in static and dynamic random-dot stereograms (RDS). Detection was found to tolerate greater levels of image decorrelation as: (i) density increased from 23 to 676 dots/deg²; (ii) spatial frequency decreased from 0.88 to 0.22 cpd; (iii) amplitude increased above 0.5 arcmin; and (iv) dot lifetime decreased from 1.6 s (static RDS) to 80 ms (dynamic RDS). In each case, the specific pattern of tolerance to decorrelation could be explained by its consequences for image sampling, filtering, and the influence of depth noise.
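A concrete picture of the decorrelation manipulation: the sketch below generates left and right half-images of a sinusoidal depth grating and then repositions a given fraction of right-eye dots at random, destroying their interocular match. Dot counts, field size, and the default grating parameters are illustrative (the 0.22 cpd frequency and arcmin-scale amplitude merely echo the ranges in the abstract).

import numpy as np

rng = np.random.default_rng(0)

def make_rds(n_dots=2000, decorrelation=0.3, freq_cpd=0.22,
             amp_arcmin=4.0, field_deg=10.0):
    # Random dot positions in degrees of visual angle.
    x = rng.uniform(0, field_deg, n_dots)
    y = rng.uniform(0, field_deg, n_dots)
    # Horizontal disparity follows a vertical sinusoidal depth grating.
    disparity = (amp_arcmin / 60.0) * np.sin(2 * np.pi * freq_cpd * y)
    left = np.column_stack([x, y])
    right = np.column_stack([x + disparity, y])
    # Decorrelate a random subset: give those dots new, independent
    # positions in the right half-image so they carry no disparity signal.
    k = int(decorrelation * n_dots)
    idx = rng.choice(n_dots, size=k, replace=False)
    right[idx, 0] = rng.uniform(0, field_deg, k)
    right[idx, 1] = rng.uniform(0, field_deg, k)
    return left, right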
Illusory scene distortion occurs during perceived self-rotation in roll. Palmisano, S., Allison, R., & Howard, I. P. Vision Research, 46(23): 4048-58. 2006.

@article{allison20064048-58,
  author = {Palmisano, S. and Allison, R.S. and Howard, I. P.},
  doi = {10.1016/j.visres.2006.07.020},
  journal = {Vision Research},
  keywords = {Optic flow & Self Motion (also Locomotion & Aviation)},
  number = {23},
  pages = {4048-58},
  title = {Illusory scene distortion occurs during perceived self-rotation in roll},
  url = {https://doi.org/10.1016/j.visres.2006.07.020},
  volume = {46},
  year = {2006}}

We report a novel illusory distortion of the visual scene, which became apparent during both: (i) observer rotation inside a furnished stationary room; and (ii) room rotation about the stationary observer. While this distortion had several manifestations, the most common experience was that scenery near fixation appeared to sometimes lead and other times lag more peripheral scenery. Across a series of experiments, we eliminated explanations based on eye-movements, distance misperception, peripheral aliasing, differential motion sensitivity and adaptation. We found that these illusory scene distortions occurred only when the observer perceived (real or illusory) changes in self-tilt and maintained a stable fixation.
incollection (8)
Evaluation of Night Vision Technologies using Laboratory and Flight Tests. Macuda, T., Craig, G., Jennings, S., Carignan, S., Erdos, R., Brulotte, M., & Allison, R. In South Australian Night Vision Conference, Nov. 20-23, 2006.

@incollection{macuda_evaluation_????,
  author = {Macuda, T. and Craig, G. and Jennings, S. and Carignan, S. and Erdos, R. and Brulotte, M. and Allison, R.},
  booktitle = {South Australian Night Vision Conference},
  keywords = {Night Vision},
  series = {South Australian Night Vision Conference, Nov. 20-23, 2006},
  title = {Evaluation of Night Vision Technologies using Laboratory and Flight Tests},
  year = {2006}}
Assessing the Impact of NVGs on Vision and Flight Performance using Field, Flight, Laboratory and Aeromedical Approaches. Macuda, T., Craig, G., Erdos, R., Carignan, S., Jennings, S., Allison, R., & Brulotte, M. In Night Vision Technology Forum. Old Windsor, UK, March 2006.

@incollection{Macuda:2006wy,
  address = {Old Windsor, UK},
  author = {Macuda, T. and Craig, G. and Erdos, R. and Carignan, S. and Jennings, S. and Allison, R.S. and Brulotte, M.},
  booktitle = {Night Vision Technology Forum},
  keywords = {Night Vision},
  month = {03},
  title = {Assessing the Impact of NVGs on Vision and Flight Performance using Field, Flight, Laboratory and Aeromedical Approaches},
  year = {2006}}
Evaluation of Night Vision Technologies using Laboratory and Flight Tests. Macuda, T., Craig, G., Jennings, S., Carignan, S., Erdos, R., Brulotte, M., & Allison, R. In South Australian Night Vision Conference. Adelaide, Australia, November 2006.

@incollection{Macuda:2006kv,
  address = {Adelaide, Australia},
  author = {Macuda, T. and Craig, G. and Jennings, S. and Carignan, S. and Erdos, R. and Brulotte, M. and Allison, R.S.},
  booktitle = {South Australian Night Vision Conference},
  keywords = {Night Vision},
  month = {11},
  title = {Evaluation of Night Vision Technologies using Laboratory and Flight Tests},
  year = {2006}}
A Multidisciplinary Approach Towards Assessing Night Vision Technologies at the Flight Research Laboratory: Integrating Laboratory, Field, Flight and Aeromedical approaches. Macuda, T., Craig, G., Brulotte, M., Allison, R., Crowell, B., Filiter, D., Lester, G., Healey, G., Erdos, R., Carignan, S., Tang, D., Truong, L., Langille, A., Hamilton, K., Souvestre, P., & Jennings, S. In Comox Night Vision Conference. Comox, British Columbia, November 2006.

@incollection{Macuda:2006sh,
  address = {Comox, British Columbia},
  author = {Macuda, T. and Craig, G. and Brulotte, M. and Allison, R.S. and Crowell, B. and Filiter, D. and Lester, G. and Healey, G. and Erdos, R. and Carignan, S. and Tang, D. and Truong, L. and Langille, A. and Hamilton, K. and Souvestre, P. and Jennings, S.},
  booktitle = {Comox Night Vision Conference},
  keywords = {Night Vision},
  month = {11},
  title = {A Multidisciplinary Approach Towards Assessing Night Vision Technologies at the Flight Research Laboratory: Integrating Laboratory, Field, Flight and Aeromedical approaches},
  url = {https://percept.eecs.yorku.ca/papers/Nightvisionconference comox.pdf},
  year = {2006}}

A Multidisciplinary Approach Towards Assessing Night Vision Technologies at the Flight Research Laboratory: Integrating Laboratory, Field, Flight and Aeromedical approaches.

Todd Macuda, Greg Craig, Michel Brulotte, Rob Allison, Bob Crowell, Don Filiter, Greg Lester, Scott Healey, Rob Erdos, Stephan Carignan, Denis Tang, Long Truong, Andrew Langille, Kelvin Hamilton, Philippe Souvestre and Sion Jennings

The current presentation is a broad overview of the Night Vision Assessment facilities at the Flight Research Laboratory of the Institute for Aerospace Research of the National Research Council of Canada. This unique cadre of researchers consists of a broad Canadian core group of research scientists, pilots, operational military and law enforcement personnel and physicians that includes several government and private organizations. The initial portion of the presentation shall address the overall assessment of three tube technologies using flight and laboratory methods. This is a summary of recent laboratory and flight trials with the Canadian Forces. The results shall be summarized in terms of laboratory derived estimates of visual acuity, and in-flight test trials by NRC pilots using these night vision technologies. We will discuss this methodological approach as a model and example for assessment of night vision technologies for use in real operational flight environments. Further, these results will consider the importance of providing nominal behavioural performance values for flight technologies.

The second portion of our presentation will consider the broader capabilities and activities of this core night vision research group at the Flight Research Laboratory. We will discuss current and planned efforts aimed at developing civil certification policies for the use of NVGs, suppression of forest fires using NVGs, the use of NVGs for Canadian law enforcement air wings, and the use of medical procedures to assess the impact of NVGs and related technologies in flight environments.

The final phase of the talk will summarize our capability as a unique flight research facility capable of supporting a broad range of military and paramilitary needs. FRL can be considered as a distinctive Canadian research facility capable of supporting a broad range of operational flight needs in defence, law enforcement and related civilian environments.
The Effects of Adaptation Duration and Interocular Spatial Correlation of the Adaptation Stimulus on the Duration of Motion Aftereffect in Depth. Sakano, Y., & Allison, R. In The Journal of the Vision Society of Japan (The 4th Asian Conference on Vision, Matsue, Shimane, Japan). 2006.

@incollection{Sakano:2006gd,
  author = {Sakano, Y. and Allison, R.S.},
  booktitle = {The Journal of the Vision Society of Japan (The 4th Asian Conference on Vision, Matsue, Shimane, Japan)},
  keywords = {Eye Movements & Tracking},
  title = {The Effects of Adaptation Duration and Interocular Spatial Correlation of the Adaptation Stimulus on the Duration of Motion Aftereffect in Depth},
  year = {2006}}

1. Introduction

Theoretically, there are at least two possible binocular cues to motion-in-depth, namely disparity change over time and interocular velocity differences. We previously reported that a motion aftereffect (MAE) in depth occurred after adaptation to motion-in-depth in random-element stereograms that contained interocular velocity differences. Moreover, the duration of MAE in depth following adaptation stimuli without spatially coherent disparities did not differ significantly from that following adaptation to stimuli with coherent disparities (VSS 2005). It is possible that equivalent duration of the MAEs in depth under these two conditions reflects saturation of MAE in depth caused by the long adaptation phase (2 min). In the present study, we test this directly via measurements of the duration of MAE in depth for a variety of adaptation durations.

2. Methods

The adaptation stimulus consisted of random-dot stereograms that depicted two frontoparallel planes, one above and one below the fixation point. The two planes repeatedly moved in depth in opposite directions for 7.5 sec, 15 sec, 30 sec, 1 min, 2 min or 4 min. The dots of the adaptation stimulus were spatially and temporally correlated in the two eyes (RDS) or spatially uncorrelated but temporally correlated (URDS). Thus, both RDS and URDS contained the same amount of interocular velocity differences while only RDS had coherent disparity. The test stimulus was a stationary version of the spatially and temporally correlated adaptation stimulus. The subjects pressed a key when the illusory motion-in-depth of the test stimulus (MAE in depth) ceased.

3. Results and discussion

Under both RDS and URDS adaptation conditions, a MAE in depth occurred. The duration of the MAE in depth increased as the adaptation duration increased. On the other hand, there was no difference in the duration of the MAE in depth between the RDS and URDS adaptation conditions. These results support the idea that there are mechanisms to see motion-in-depth based on interocular velocity differences, and adaptation to interocular velocity differences, not to changing disparity, is responsible for MAE in depth.

Acknowledgement: The support of the Province of Ontario (Premier's Research Excellence Award) and NSERC (Canada) is greatly appreciated.
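The RDS/URDS distinction above (and the DRDS variant used in the companion studies listed below) comes down to which correlations survive across eyes and across frames. The following is a sketch of one plausible construction, with sizes and the disparity ramp chosen arbitrarily: RDS keeps one dot field in both eyes and across frames (coherent disparity plus interocular velocity differences), URDS gives each eye its own persistent field (velocity differences without coherent disparity), and DRDS regenerates the shared field every frame (coherent disparity without coherent monocular motion).

import numpy as np

rng = np.random.default_rng(1)

def half_images(kind, n_frames=10, n_dots=500, disp0=0.1, ddisp=0.01):
    # Returns a list of (left, right) dot arrays, one pair per frame.
    frames = []
    base = rng.uniform(-1, 1, (n_dots, 2))   # dot field shared across frames
    alt = rng.uniform(-1, 1, (n_dots, 2))    # independent field for URDS
    for f in range(n_frames):
        disp = disp0 + ddisp * f             # disparity ramp over frames
        if kind == "DRDS":
            base = rng.uniform(-1, 1, (n_dots, 2))   # fresh dots each frame
        left = base.copy()
        right = (alt if kind == "URDS" else base).copy()
        left[:, 0] -= disp / 2               # split the disparity shift
        right[:, 0] += disp / 2
        frames.append((left, right))
    return frames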
Perceptual asymmetry in stereoscopic transparency. Tsirlin-Zaharescu, I., Wilcox, L. M., & Allison, R. In Perception, volume 35, pages 25-26. 2006.

@incollection{allison200625-26,
  author = {Tsirlin-Zaharescu, I. and Wilcox, L. M. and Allison, R.S.},
  booktitle = {Perception},
  journal = {Perception},
  keywords = {Stereopsis},
  pages = {25-26},
  title = {Perceptual asymmetry in stereoscopic transparency},
  url = {http://www.perceptionweb.com/abstract.cgi?id=v060587},
  volume = {35},
  year = {2006}}
On seeing transparent surfaces in stereoscopic displays. Tsirlin, I., Allison, R. S., & Wilcox, L. M. In Journal of Vision, volume 6, pages 830-830. 2006.

@incollection{allison2006830-830,
  author = {Tsirlin, Inna and Allison, Robert S. and Wilcox, Laurie M.},
  booktitle = {Journal of Vision},
  doi = {10.1167/6.6.830},
  journal = {Journal of Vision},
  keywords = {Stereopsis},
  number = {6},
  pages = {830-830},
  title = {On seeing transparent surfaces in stereoscopic displays},
  url = {https://doi.org/10.1167/6.6.830},
  volume = {6},
  year = {2006}}

Transparency presents an extreme challenge to stereoscopic correspondence and surface interpolation, particularly in the case of multiple transparent surfaces in the same visual direction. In this experiment we manipulate density, separation in depth, and number of transparent planes within a single experimental design, to evaluate the constraints on stereoscopic transparency. We use a novel task involving identification of patterned planes among the planes constituting the stimulus. The results show that, under these conditions, (1) subjects are able to perceive up to five transparent surfaces concurrently; (2) the transparency percept is impaired by increasing the texture density; (3) the transparency percept is initially enhanced by increasing the disparity between the surfaces; (4) the percept begins to degrade as disparity between surfaces is further increased beyond an optimal disparity, which is a function of element density. Specifically, at higher texture densities the optimal disparity shifts to smaller values. This interaction between disparity and texture density is surprising, but it can account for discrepancies in the existing literature. We are currently testing extended correlational and feature-based models of stereopsis with our stimuli. This will provide insight into our psychophysical results and a basis for quantitative evaluation of existing computational models.
Aftereffect of motion-in-depth based on binocular cues: No effect of relative disparity between adaptation and test surfaces. Sakano, Y., Allison, R. S., Howard, I. P., & Sadr, S. In Journal of Vision, volume 6, pages 626-626. 2006.

@incollection{allison2006626-626,
  author = {Sakano, Yuichi and Allison, Robert S. and Howard, Ian P. and Sadr, Sabnam},
  booktitle = {Journal of Vision},
  doi = {10.1167/6.6.626},
  journal = {Journal of Vision},
  keywords = {Motion in depth},
  number = {6},
  pages = {626-626},
  title = {Aftereffect of motion-in-depth based on binocular cues: No effect of relative disparity between adaptation and test surfaces},
  url = {https://doi.org/10.1167/6.6.626},
  volume = {6},
  year = {2006}}

Previously, we found that a motion aftereffect (MAE) in depth can occur after adaptation to motion-in-depth in random-element stereograms (VSS 2005). In the present study, we investigated the depth selectivity of the MAE in depth. The adaptation stimulus consisted of two frontoparallel surfaces, one above and one below the fixation point. These surfaces were depicted by random-dot stereograms that were temporally correlated (RDS) or uncorrelated (DRDS). During the 2-min adaptation phase, the disparity of one surface increased and that of the other surface decreased linearly and repeatedly to simulate smooth motion-in-depth. The range of these disparity ramps was -26.2 to -8.72, -8.72 to +8.72, or +8.72 to +26.2 arcmin, where positive and negative values indicate crossed and uncrossed disparity. The test stimulus consisted of two stationary frontoparallel surfaces depicted by a RDS with a fixed pedestal disparity of either -17.4, 0, or +17.4 arcmin. Under RDS adaptation conditions, robust MAE in depth occurred. The duration of this MAE in depth did not depend on the relation between the disparity range of the adaptation stimulus and the pedestal disparity of the test stimulus. Under DRDS adaptation conditions, MAE in depth did not occur. These results suggest that the adaptable processes used to detect motion-in-depth from binocular cues are insensitive to pedestal disparity.
inproceedings (2)
Neural Activity During a Simulated Rotary-Wing Flight Task Using Expert Test Pilots. Macuda, T., Poolman, P., Schnell, T., Keller, M., Craig, G., Swail, C., Jennings, S., Erdos, R., Allison, R., Lenert, A., & Carignan, S. In Second International Conference on Augmented Cognition, San Francisco, CA, 2006.

@inproceedings{Macuda:2006mk,
  address = {San Francisco, CA},
  author = {Macuda, T. and Poolman, P. and Schnell, T. and Keller, M. and Craig, G. and Swail, C. and Jennings, S. and Erdos, R. and Allison, R.S. and Lenert, A. and Carignan, S.},
  booktitle = {Second International Conference on Augmented Cognition},
  keywords = {Neural Avionics},
  title = {Neural Activity During a Simulated Rotary-Wing Flight Task Using Expert Test Pilots},
  year = {2006}}
Toward the ``Cognitive Cockpit'': Flight test platforms and methods for monitoring Pilot Mental State. Schnell, T., Poolman, P., Macuda, T., Craig, G., Erdos, R., Carignan, S., Cheung, B., Allison, R., Lenert, A., Jennings, S., Swail, C., Ellis, C., & Gubbels, W. In Second International Conference on Augmented Cognition, San Francisco, CA, October 2006.

@inproceedings{Schnell:2006li,
  address = {San Francisco, CA},
  author = {Schnell, T. and Poolman, P. and Macuda, T. and Craig, G. and Erdos, R. and Carignan, S. and Cheung, B. and Allison, R.S. and Lenert, A. and Jennings, S. and Swail, C. and Ellis, C. and Gubbels, W.},
  booktitle = {Second International Conference on Augmented Cognition},
  keywords = {Neural Avionics},
  month = {10},
  title = {Toward the {``}Cognitive Cockpit{''}: Flight test platforms and methods for monitoring Pilot Mental State},
  url = {https://percept.eecs.yorku.ca/papers/0830best_Schnell_pres.pdf},
  year = {2006}}
techreport (3)
Laser Eye Protection (LEP): Candidate Solutions and Understanding Their Impact on Visual and Flight Performance. Macuda, T., Craig, G., Jennings, S., Swail, C., Carignan, S., Erdos, R., Nakagarawa, V., Brulotte, M., Healey, S., Lester, G., McAleavey, C., Freeborn, E., & Allison, R. Technical Report FRL-2006-0054, National Research Council Canada, NRC Institute for Aerospace Research, 2006.

@techreport{Macuda:2006aw,
  author = {Macuda, T. and Craig, G. and Jennings, S. and Swail, C. and Carignan, S. and Erdos, R. and Nakagarawa, V. and Brulotte, M. and Healey, S. and Lester, G. and McAleavey, C. and Freeborn, E. and Allison, R.},
  institution = {National Research Council Canada, NRC Institute for Aerospace Research},
  keywords = {Misc.},
  note = {NRC Institute for Aerospace Research FRL-2006-0054},
  number = {FRL-2006-0054},
  title = {Laser Eye Protection (LEP): Candidate Solutions and Understanding Their Impact on Visual and Flight Performance},
  year = {2006}}
Night Vision Goggle Standard Operating Procedures for the Ontario Ministry of Natural Resources. Macuda, T., Craig, G., Jennings, S., Erdos, R., Carignan, S., Healey, S., Miller, J., & Allison, R. Technical Report FRL-2006-0008, National Research Council Canada, NRC Institute for Aerospace Research, 2006.

@techreport{Macuda:2006ol,
  author = {Macuda, T. and Craig, G. and Jennings, S. and Erdos, R. and Carignan, S. and Healey, S. and Miller, J. and Allison, R.},
  institution = {National Research Council Canada, NRC Institute for Aerospace Research},
  keywords = {Night Vision},
  note = {NRC Institute for Aerospace Research FRL-2006-0008},
  number = {FRL-2006-0008},
  title = {Night Vision Goggle Standard Operating Procedures for the Ontario Ministry of Natural Resources},
  year = {2006}}
Neural Avionics: Development of Airborne Neural Recording Capabilities in Fixed and Rotary Wing Aircraft to Monitor Pilot Mental State. Macuda, T., Craig, G., Erdos, R., Carignan, S., Jennings, S., Swail, C., Schnell, T., Poolman, P., Allison, R., & Lenert, A. Technical Report FRL-2006-0050, NRC Institute for Aerospace Research, National Research Council Canada, 2006.

@techreport{Macuda:2006kh,
  author = {Macuda, T. and Craig, G. and Erdos, R. and Carignan, S. and Jennings, S. and Swail, C. and Schnell, T. and Poolman, P. and Allison, R.S. and Lenert, A.},
  institution = {NRC Institute for Aerospace Research, National Research Council Canada},
  keywords = {Neural Avionics},
  note = {NRC Institute for Aerospace Research FRL-2006-0050},
  number = {FRL-2006-0050},
  title = {Neural Avionics: Development of Airborne Neural Recording Capabilities in Fixed and Rotary Wing Aircraft to Monitor Pilot Mental State},
  year = {2006}}
2005 (3)
article (2)
Detection of the depth order of defocused images. Nguyen, V. A., Howard, I. P., & Allison, R. Vision Research, 45(8): 1003-11. 2005.

@article{allison20051003-11,
  author = {Nguyen, V. A. and Howard, I. P. and Allison, R.S.},
  doi = {10.1016/j.visres.2004.10.015},
  journal = {Vision Research},
  keywords = {Depth perception},
  number = {8},
  pages = {1003-11},
  title = {Detection of the depth order of defocused images},
  url = {https://doi.org/10.1016/j.visres.2004.10.015},
  volume = {45},
  year = {2005}}

The sign of an accommodative response is provided by differences in chromatic aberration between under- and over-accommodated images. We asked whether these differences enable people to judge the depth order of two stimuli in the absence of other depth cues. Two vertical edges separated by an illuminated gap were presented at random relative distances. Exposure was brief, or prolonged with fixed or changing accommodation. The gap was illuminated with tungsten light or monochromatic light. Subjects could detect image blur with brief exposure for both types of light. But they could detect depth order only in tungsten light with long exposure, with or without changes in accommodation.
The relative role of visual and non-visual cues in determining the perceived direction of `up': experiments in parabolic flight. Jenkin, H. L., Dyde, R. T., Zacher, J. E., Zikovitz, D. C., Jenkin, M. R., Allison, R., Howard, I. P., & Harris, L. R. Acta Astronaut, 56(9-12): 1025-32. 2005.

@article{allison20051025-32,
  author = {Jenkin, H. L. and Dyde, R. T. and Zacher, J. E. and Zikovitz, D. C. and Jenkin, M. R. and Allison, R.S. and Howard, I. P. and Harris, L. R.},
  doi = {10.1016/j.actaastro.2005.01.030},
  journal = {Acta Astronaut},
  keywords = {Optic flow & Self Motion (also Locomotion & Aviation)},
  number = {9-12},
  pages = {1025-32},
  title = {The relative role of visual and non-visual cues in determining the perceived direction of `up': experiments in parabolic flight},
  url = {https://doi.org/10.1016/j.actaastro.2005.01.030},
  volume = {56},
  year = {2005}}

In order to measure the perceived direction of `up', subjects judged the three-dimensional shape of disks shaded to be compatible with illumination from particular directions. By finding which shaded disk appeared most convex, we were able to infer the perceived direction of illumination. This provides an indirect measure of the subject's perception of the direction of `up'. The different cues contributing to this percept were separated by varying the orientation of the subject and the orientation of the visual background relative to gravity. We also measured the effect of decreasing or increasing gravity by making these shape judgements throughout all the phases of parabolic flight (0 g, 2 g and 1 g during level flight). The perceived up direction was modeled by a simple vector sum of `up' defined by vision, the body and gravity. In this model, the weighting of the visual cue became negligible under microgravity and hypergravity conditions.
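The vector-sum model mentioned in the abstract is compact enough to state directly. In the sketch below, perceived up is the renormalized weighted sum of unit vectors for the visual, body, and gravity cues; the weights are placeholders, not the fitted values from the paper, and a zero vector stands for an absent cue (e.g., gravity in 0 g).

import numpy as np

def _unit(v):
    # Normalize, leaving a zero vector unchanged (an absent cue).
    v = np.asarray(v, dtype=float)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def perceived_up(v_vision, v_body, v_gravity, w=(0.2, 0.3, 0.5)):
    # Weighted vector sum of the three cue directions, renormalized.
    s = sum(wi * _unit(vi) for wi, vi in zip(w, (v_vision, v_body, v_gravity)))
    return _unit(s)

# Example: a slightly tilted visual background with body and gravity upright
# pulls perceived up a little toward the visual axis; in 0 g one would set
# v_gravity to the zero vector (and, per the paper, shrink the visual weight).
up = perceived_up([0.26, 0.97, 0.0], [0.0, 1.0, 0.0], [0.0, 1.0, 0.0])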
incollection (5)
The integration of movement on postrotatory nystagmus and illusionary body rotation. Zacher, J. E., Allison, R., & Howard, I. P. In ICVC Abstracts. International Conference on Visual Coding, volume 95, pages 6. 2005.

@incollection{Zacher:2005ap,
  author = {Zacher, J. E. and Allison, R.S. and Howard, I. P.},
  booktitle = {ICVC Abstracts. International Conference on Visual Coding},
  keywords = {Optic flow & Self Motion (also Locomotion & Aviation)},
  number = {2},
  pages = {6},
  title = {The integration of movement on postrotatory nystagmus and illusionary body rotation},
  volume = {95},
  year = {2005}}
Motion aftereffect in depth induced by motion in depth based on binocular cues. Sakano, Y., Allison, R., & Howard, I. In CVR Vision Conference 2005, Computational Vision in Neural and Machine Systems. Toronto, Canada, June 2005.

@incollection{Sakano:2005fd,
  address = {Toronto, Canada},
  author = {Sakano, Y. and Allison, R.S. and Howard, I.P.},
  booktitle = {CVR Vision Conference 2005, Computational Vision in Neural and Machine Systems},
  keywords = {Motion in depth},
  month = {06},
  organization = {Centre for Vision Research, York University},
  title = {Motion aftereffect in depth induced by motion in depth based on binocular cues},
  year = {2005}}
Perceptions of Illusory Shearing in the Tumbling Room. Palmisano, S., Allison, R., & Howard, I. In Australian Journal of Psychology. 2005.

@incollection{Palmisano:2005gp,
  author = {Palmisano, S.A. and Allison, R.S. and Howard, I.P.},
  booktitle = {Australian Journal of Psychology},
  keywords = {Optic flow & Self Motion (also Locomotion & Aviation)},
  title = {Perceptions of Illusory Shearing in the Tumbling Room},
  url = {http://onlinelibrary.wiley.com/doi/10.1080/00049530600940005/pdf},
  year = {2005}}
Detection of motion-defined form in the presence of veiling noise.
Allison, R., Macuda, T., Jennings, S., Thomas, P., Guterman, P., & Craig, G.
In Vision Sciences Society Annual Meeting, Journal of Vision, volume 5, pages 651a. 2005. doi: 10.1167/5.8.651

@incollection{Allison:2005lj,
  author = {Allison, R.S. and Macuda, T. and Jennings, S. and Thomas, P. and Guterman, P. and Craig, G.},
  booktitle = {Vision Sciences Society Annual Meeting, Journal of Vision},
  date-added = {2011-05-06 16:51:54 -0400},
  date-modified = {2012-07-02 17:48:13 -0400},
  doi = {10.1167/5.8.651},
  keywords = {Night Vision},
  number = {8},
  organization = {Vision Sciences Society},
  pages = {651a},
  title = {Detection of motion-defined form in the presence of veiling noise},
  url-1 = {https://doi.org/10.1167/5.8.651},
  volume = {5},
  year = {2005}}

Abstract: Purpose. Perception of motion-defined form from apparent motion depends on the ability to detect and segregate regions of coherent motion. We investigated the effect of superimposed luminance noise on the ability to detect motion-defined form. Methods. Stimuli consisted of randomly textured squares that subtended 8.6 degrees of visual angle. The image sequences depicted either a motion-defined square subtending 4.3 degrees (the 'target') or only the moving background. If the difference in speed between the foreground dots and the background dots exceeded a certain threshold, the form of the foreground was visible. The images were rendered in OpenGL at 100 Hz and displayed at 80% contrast. Observers viewed the displays from 1.2 m with their head stabilized on a chin rest. For each trial, subjects were shown a pair of image sequences and required to indicate which sequence contained the target stimulus in a two-interval forced-choice procedure. Poisson-distributed spatiotemporal image noise was added to both the background and foreground of the display. At each of a variety of stimulus speeds (20.1, 50.4, 100.7, 201.4, and 302.2 arcmin/second), we measured detection threshold as a function of stimulus signal-to-noise ratio. Results and discussion. All subjects could easily detect the motion-defined forms in the absence of any superimposed noise. As the power of the spatiotemporal noise was increased, subjects had increased difficulty detecting the target. The influence of added noise was most pronounced at the lowest and highest image speeds. These results will be discussed in terms of models of motion processing and in the context of the usability of enhanced vision displays under noisy conditions.
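The stimulus described in this abstract (a texture-defined square visible only through its relative motion, with added Poisson noise) can be sketched in a few lines. The following Python fragment is an illustrative reconstruction, not the authors' stimulus code; the frame size, speeds, and the way noise level is parameterized are assumptions.

import numpy as np

def mdf_frames(n_frames=50, size=128, target=64, v_bg=1, v_tg=3,
               noise_mean=0.0, rng=None):
    """Motion-defined-form sequence: a central square drifts past the background.

    Background and target carry the same texture statistics, so the target is
    invisible in any single frame and defined only by relative motion.
    noise_mean sets the mean of added Poisson noise (0 disables it).
    """
    rng = np.random.default_rng(rng)
    bg = rng.random((size, size))            # background texture
    tg = rng.random((size, size))            # target texture, same statistics
    half = (size - target) // 2
    frames = []
    for t in range(n_frames):
        frame = np.roll(bg, t * v_bg, axis=1)                  # background drifts
        patch = np.roll(tg, t * v_tg, axis=1)[:target, :target]
        frame[half:half + target, half:half + target] = patch  # target drifts faster
        if noise_mean > 0:                   # Poisson-distributed spatiotemporal noise
            frame = frame + rng.poisson(noise_mean, frame.shape)
        frames.append(frame)
    return np.stack(frames)

# Two-interval forced choice: one interval contains the target, one does not.
seq_target = mdf_frames(v_tg=3, noise_mean=0.5, rng=1)
seq_blank  = mdf_frames(v_tg=1, v_bg=1, noise_mean=0.5, rng=2)  # uniform motion, no form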
Aftereffects of motion in depth based on binocular cues.
Sakano, Y., Allison, R. S., & Howard, I. P.
In Journal of Vision, volume 5, pages 732-732. 2005. doi: 10.1167/5.8.732

@incollection{allison2005732-732,
  author = {Sakano, Yuichi and Allison, Robert S. and Howard, Ian P.},
  booktitle = {Journal of Vision},
  date-modified = {2012-07-02 18:05:39 -0400},
  doi = {10.1167/5.8.732},
  journal = {Journal of Vision},
  keywords = {Motion in depth},
  number = {8},
  pages = {732-732},
  title = {Aftereffects of motion in depth based on binocular cues},
  url-1 = {https://doi.org/10.1167/5.8.732},
  volume = {5},
  year = {2005}}

Abstract: Purpose. Lateral motion aftereffects (MAEs) have been studied extensively. Less is known about MAEs in depth. We investigated whether adaptation to stimuli moving in depth induces MAEs in depth. Methods. The adaptation stimulus consisted of two frontoparallel planes, depicted by random-element stereograms, one above and one below the fixation point. The two planes repeatedly moved in depth in opposite directions for 2 minutes. The motion-in-depth was specified by interocular velocity differences and/or changing disparity by using random elements which were spatially and temporally correlated in the two eyes (RDS), those which were spatially uncorrelated but temporally correlated (URDS), or those which were spatially correlated but temporally uncorrelated (DRDS). The test stimulus consisted of an RDS, URDS, DRDS or monocularly viewed random elements that did not move in depth. The subject pressed a key when any apparent motion in depth of the test stimulus ceased. Results and discussion. Under some conditions the test stimulus appeared to move in depth in the direction opposite to that of the adaptation stimulus (negative MAE). Specifically, adaptation to motion-in-depth of RDS and URDS produced MAEs in many test stimuli, while adaptation to DRDS produced little or no MAE in most test stimuli. While further experimentation is required, this finding suggests that adaptation to interocular velocity differences produces substantial MAEs in depth, but that adaptation to changing disparity produces little or no MAE. Also, a monocular test stimulus showed an MAE in a diagonal direction in depth. The depth component of the MAE under monocular test conditions indicates that binocular processes are involved in generating MAEs in depth.
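The three stereogram classes named above differ only in how the random elements are correlated between the eyes and across frames. The sketch below is a minimal illustration of that correlation structure, under assumed sizes and binary elements; actual disparity or motion-in-depth would be applied by shifting these textures.

import numpy as np

def stereogram_pair(kind, n_frames=10, size=64, rng=None):
    """Left/right random-element frames for the RDS, URDS and DRDS classes.

    RDS : elements correlated between the eyes and across frames.
    URDS: uncorrelated between the eyes, correlated across frames
          (supports interocular velocity differences but no coherent disparity).
    DRDS: correlated between the eyes, refreshed every frame
          (supports changing disparity but no coherent monocular motion).
    """
    rng = np.random.default_rng(rng)
    base_l = rng.integers(0, 2, (size, size))
    base_r = rng.integers(0, 2, (size, size))
    left, right = [], []
    for t in range(n_frames):
        if kind == "RDS":
            l = r = base_l                    # same texture, both eyes, all frames
        elif kind == "URDS":
            l, r = base_l, base_r             # different textures per eye, static
        elif kind == "DRDS":
            l = r = rng.integers(0, 2, (size, size))  # new texture each frame
        left.append(l); right.append(r)
    return np.stack(left), np.stack(right)

L, R = stereogram_pair("DRDS", rng=0)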

inproceedings (4)

Physical modeling and characterization of the halo phenomenon in night vision goggles.
Thomas, P. J., Allison, R., Carr, P., Shen, E., Jennings, S., Macuda, T., Craig, G., & Hornsey, R.
In Rash, C. E., editor, Helmet- and Head-Mounted Displays X: Technologies and Applications, volume 5800 of Proceedings of the Society of Photo-Optical Instrumentation Engineers (SPIE), pages 21-31, Orlando, FL, 2005. SPIE-Int Soc Optical Engineering. doi: 10.1117/12.602736

@inproceedings{allison200521-31,
  address = {Orlando, FL},
  author = {Thomas, P. J. and Allison, R.S. and Carr, P. and Shen, E. and Jennings, S. and Macuda, T. and Craig, G. and Hornsey, R.},
  booktitle = {Helmet- and Head-Mounted Displays X: Technologies and Applications},
  date-modified = {2012-07-02 22:24:47 -0400},
  doi = {10.1117/12.602736},
  editor = {Rash, C. E.},
  keywords = {Night Vision},
  pages = {21-31},
  publisher = {SPIE-Int Soc Optical Engineering},
  series = {Proceedings of the Society of Photo-Optical Instrumentation Engineers (SPIE)},
  title = {Physical modeling and characterization of the halo phenomenon in night vision goggles},
  url-1 = {https://doi.org/10.1117/12.602736},
  volume = {5800},
  year = {2005}}

Abstract: When a bright light source is viewed through Night Vision Goggles (NVG), the image of the source can appear enveloped in a "halo" that is much larger than the "weak-signal" point spread function of the NVG. The halo phenomenon was investigated in order to produce an accurate model of NVG performance for use in psychophysical experiments. Halos were created and measured under controlled laboratory conditions using representative Generation III NVGs. To quantitatively measure halo characteristics, the NVG eyepiece was replaced by a CMOS imager. Halo size and intensity were determined from camera images as functions of point-source intensity and ambient scene illumination. Halo images were captured over a wide range of source radiances (7 orders of magnitude) and then processed with standard analysis tools to yield spot characteristics. The spot characteristics were analyzed to verify our proposed parametric model of NVG halo event formation. The model considered the potential effects of many subsystems of the NVG in the generation of halo: objective lens, photocathode, image intensifier, fluorescent screen and image guide. A description of the halo effects and the model parameters are contained in this work, along with a qualitative rationale for some of the parameter choices.
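The abstract mentions processing camera images "with standard analysis tools to yield spot characteristics." A generic stand-in for that step, not the authors' code, is to threshold the image at a fraction of its peak and report the equivalent diameter of the resulting blob; the threshold fraction here is an assumption.

import numpy as np

def spot_diameter(img, thresh_frac=0.1):
    """Equivalent diameter (pixels) of a halo/spot in a camera image.

    Pixels above thresh_frac of the peak count as part of the spot; the
    diameter of a circle of equal area is returned.
    """
    mask = img > thresh_frac * img.max()
    area = mask.sum()
    return 2.0 * np.sqrt(area / np.pi)

# Synthetic check: a Gaussian 'halo' of known scale (sigma = 12 px).
y, x = np.mgrid[-64:64, -64:64]
halo = np.exp(-(x**2 + y**2) / (2 * 12.0**2))
print(spot_diameter(halo))   # about 51 px at a 10% threshold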
Detection of motion-defined form using night vision goggles.
Macuda, T., Craig, G., Allison, R., Guterman, P., Thomas, P., & Jennings, S.
In volume 5800 of Proc. SPIE - Int. Soc. Opt. Eng. (USA), pages 1-8, Orlando, FL, USA, 2005. SPIE-Int. Soc. Opt. Eng. doi: 10.1117/12.602590

@inproceedings{allison20051-8,
  address = {Orlando, FL, USA},
  author = {Macuda, T. and Craig, G. and Allison, R.S. and Guterman, P. and Thomas, P. and Jennings, S.},
  date-modified = {2012-07-02 22:26:09 -0400},
  doi = {10.1117/12.602590},
  keywords = {Night Vision},
  pages = {1-8},
  publisher = {SPIE-Int. Soc. Opt. Eng},
  series = {Proc. SPIE - Int. Soc. Opt. Eng. (USA)},
  title = {Detection of motion-defined form using night vision goggles},
  url-1 = {https://doi.org/10.1117/12.602590},
  volume = {5800},
  year = {2005}}

Abstract: Perception of motion-defined form is important in operational tasks such as search and rescue and camouflage breaking. Previously, we used synthetic Aviator Night Vision Imaging System (ANVIS-9) imagery to demonstrate that the capacity to detect motion-defined form was degraded at low levels of illumination (see Macuda et al., 2004; Thomas et al., 2004). To validate our simulated NVG results, the current study evaluated observers' ability to detect motion-defined form through a real ANVIS-9 system. The image sequences consisted of a target (square) that moved at a different speed than the background, or only depicted the moving background. For each trial, subjects were shown a pair of image sequences and required to indicate which sequence contained the target stimulus. Mean illumination, and hence image noise level, was varied by means of Neutral Density (ND) filters placed in front of the NVG objectives. At each noise level, we tested subjects at a series of target speeds. With both real and simulated NVG imagery, subjects had increased difficulty detecting the target with increased noise levels, at both slower and higher target speeds. These degradations in performance should be considered in operational planning. Further research is necessary to expand our understanding of the impact of NVG-produced noise on visual mechanisms.
Comparison of three night vision intensification tube technologies on resolution acuity: results from Grating and Hoffman ANV-126 tasks.
Macuda, T., Allison, R., Thomas, P., Truong, L., Tang, D., Craig, G., & Jennings, S.
In volume 5800 of Proc. SPIE - Int. Soc. Opt. Eng. (USA), pages 32-9, Orlando, FL, USA, 2005. SPIE-Int. Soc. Opt. Eng. doi: 10.1117/12.602598

@inproceedings{allison200532-9,
  address = {Orlando, FL, USA},
  author = {Macuda, T. and Allison, R.S. and Thomas, P. and Truong, L. and Tang, D. and Craig, G. and Jennings, S.},
  date-modified = {2012-07-02 22:25:27 -0400},
  doi = {10.1117/12.602598},
  keywords = {Night Vision},
  pages = {32-9},
  publisher = {SPIE-Int. Soc. Opt. Eng},
  series = {Proc. SPIE - Int. Soc. Opt. Eng. (USA)},
  title = {Comparison of three night vision intensification tube technologies on resolution acuity: results from Grating and Hoffman ANV-126 tasks},
  url-1 = {https://doi.org/10.1117/12.602598},
  volume = {5800},
  year = {2005}}

Abstract: Several methodologies have been used to determine resolution acuity through Night Vision Goggles. The present study compared NVG acuity estimates derived from the Hoffman ANV-126 and a standard psychophysical grating acuity task. For the grating acuity task, observers were required to discriminate between horizontal and vertical gratings according to a method of constant stimuli. Psychometric functions were generated from the performance data, and acuity thresholds were interpolated at a performance level of 70% correct. Acuity estimates were established at three different illumination levels (0.06 to 5×10^-4 lux) for both procedures. These estimates were then converted to an equivalent Snellen value. The data indicate that grating acuity estimates were consistently better (i.e. lower scores) than acuity measures obtained from the Hoffman ANV-126. Furthermore, significant differences in estimated acuity were observed using different tube technologies. In keeping with previous acuity investigations, although the Hoffman ANV-126 provides a rapid operational assessment of tube acuity, it is suggested that more rigorous psychophysical procedures such as the grating task described here be used to assess the real behavioural resolution of tube technologies.
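The threshold-and-conversion step described above can be illustrated compactly. This Python sketch interpolates the 70%-correct grating threshold and converts it to an equivalent Snellen value; linear interpolation is an assumption (the paper fit psychometric functions), and 30 cycles/deg is taken as the conventional 20/20 equivalent.

import numpy as np

def acuity_snellen(freqs_cpd, p_correct, criterion=0.70):
    """Interpolate the grating acuity threshold and convert it to Snellen.

    freqs_cpd: spatial frequencies tested (cycles/deg, ascending)
    p_correct: proportion correct at each frequency (falls with frequency)
    """
    # np.interp needs ascending x, so reverse the descending performance data.
    thresh_cpd = np.interp(criterion, p_correct[::-1], freqs_cpd[::-1])
    denom = 20.0 * 30.0 / thresh_cpd       # decimal acuity = cpd / 30
    return thresh_cpd, f"20/{denom:.0f}"

freqs = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
pc    = np.array([0.98, 0.95, 0.80, 0.60, 0.50])
print(acuity_snellen(freqs, pc))   # threshold 12 cpd -> 20/50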
Light source halos in Night Vision Goggles: Psychophysical assessments.
Craig, G., Macuda, T., Thomas, P., Allison, R., & Jennings, S.
In volume 5800 of Proceedings of SPIE - The International Society for Optical Engineering, pages 40-44, Orlando, FL, United States, 2005. International Society for Optical Engineering. doi: 10.1117/12.602543

@inproceedings{allison200540-44,
  address = {Orlando, FL, United States},
  author = {Craig, Greg and Macuda, Todd and Thomas, Paul and Allison, Rob and Jennings, Sion},
  date-modified = {2011-09-12 22:10:58 -0400},
  doi = {10.1117/12.602543},
  keywords = {Night Vision},
  pages = {40-44},
  publisher = {International Society for Optical Engineering, Bellingham WA, WA 98227-0010},
  series = {Proceedings of SPIE - The International Society for Optical Engineering},
  title = {Light source halos in Night Vision Goggles: Psychophysical assessments},
  url-1 = {https://doi.org/10.1117/12.602543},
  volume = {5800},
  year = {2005}}

Abstract: Anecdotal reports by pilots flying with Night Vision Goggles (NVGs) in urban environments suggest that halos produced by bright light sources impact flight performance. The current study developed a methodology to examine the impact of viewing distance on perceived halo size. This was a first step in characterizing the subtle phenomenon of halo. Observers provided absolute size estimates of halos generated by a red LED at several viewing distances. Physical measurements of these halos were also recorded. The results indicated that the perceived halo linear size decreased as viewing distance was decreased. Further, the data showed that halos subtended a constant visual angle on the goggles (1°48', ±7') irrespective of distance up to 75'. This invariance with distance may impact pilot visual performance. For example, the counterintuitive apparent contraction of halo size with decreasing viewing distance may impact estimates of closure rates and of the spatial layout of light sources in the scene. Preliminary results suggest that halo is a dynamic phenomenon that requires further research to characterize the specific perceptual effects that it might have on pilot performance.
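The geometric consequence of a constant visual angle is worth spelling out: the linear extent implied by a fixed angle grows with distance, so as the observer approaches the source the implied linear size shrinks, matching the counterintuitive report above. A short worked example, using 1.8 degrees as an approximation of the reported 1°48':

import math

def linear_size(theta_deg, distance):
    """Linear extent subtended by a fixed visual angle at a given distance."""
    return 2 * distance * math.tan(math.radians(theta_deg) / 2)

theta = 1.8  # degrees, approximately the reported 1 deg 48 min
for d_ft in (25, 50, 75):
    print(d_ft, "ft ->", round(linear_size(theta, d_ft), 2), "ft")
# 25 ft -> 0.79 ft; 75 ft -> 2.36 ft: constant angle implies a linear size
# that decreases as viewing distance decreases.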

2004 (3)

article (1)

The stimulus integration area for horizontal vergence.
Allison, R., Howard, I. P., & Fang, X. P.
Experimental Brain Research, 156(3): 305-313. 2004. doi: 10.1007/s00221-003-1790-0

@article{allison2004305-313,
  author = {Allison, R.S. and Howard, I. P. and Fang, X. P.},
  date-modified = {2012-07-02 19:10:51 -0400},
  doi = {10.1007/s00221-003-1790-0},
  journal = {Experimental Brain Research},
  keywords = {Stereopsis},
  number = {3},
  pages = {305-313},
  title = {The stimulus integration area for horizontal vergence},
  url-1 = {https://doi.org/10.1007/s00221-003-1790-0},
  volume = {156},
  year = {2004}}

Abstract: Over what region of space are horizontal disparities integrated to form the stimulus for vergence? The vergence system might be expected to respond to disparities within a small area of interest to bring them into the range of precise stereoscopic processing. However, the literature suggests that disparities are integrated over a fairly large parafoveal area. We report the results of six experiments designed to explore the spatial characteristics of the stimulus for vergence. Binocular eye movements were recorded using magnetic search coils. Each dichoptic display consisted of a central target stimulus that the subject attempted to fuse, and a competing stimulus with conflicting disparity. In some conditions the target was stationary, providing a fixation stimulus. In other conditions, the disparity of the target changed to provide a vergence-tracking stimulus. The target and competing stimulus were combined in a variety of conditions including those in which (1) a transparent textured-disc target was superimposed on a competing textured background, (2) a textured-disc target filled the centre of a competing annular background, and (3) a small target was presented within the centre of a competing annular background of various inner diameters. In some conditions the target and competing stimulus were separated in stereoscopic depth. The results are consistent with a disparity integration area with a diameter of about 5 degrees. Stimuli beyond this integration area can drive vergence in their own right, but they do not appear to be summed or averaged with a central stimulus to form a combined disparity signal. A competing stimulus had less effect on vergence when separated from the target by a disparity pedestal. As a result, we propose that it may be more useful to think in terms of an integration volume for vergence rather than a two-dimensional retinal integration area.

incollection (4)

The contribution of image blur to depth perception.
Nguyen, V. A., Howard, I. P., & Allison, R.
In Vision Sciences Society Annual Meeting, Journal of Vision, volume 4, pages 461a. 2004. doi: 10.1167/4.8.461

@incollection{Nguyen:2004qw,
  author = {Nguyen, V. A. and Howard, I. P. and Allison, R.S.},
  booktitle = {Vision Sciences Society Annual Meeting, Journal of Vision},
  date-added = {2011-05-06 17:00:03 -0400},
  date-modified = {2012-07-02 18:00:35 -0400},
  doi = {10.1167/4.8.461},
  keywords = {Depth perception},
  number = {8},
  organization = {Vision Sciences Society},
  pages = {461a},
  title = {The contribution of image blur to depth perception},
  url-1 = {https://doi.org/10.1167/4.8.461},
  volume = {4},
  year = {2004}}

Abstract: Purpose: An object nearer or further than the fixation point produces a blurred image. The magnitude and direction (sign) of image blur guides accommodation but it could also provide information about relative depth. We examined the contributions of active accommodation and stationary image blur to the precision of monocular judgments of relative depth. Methods: The test stimulus consisted of two vertical sharp edges presented to the right eye. The right edge was set at each of eleven distances, in front of or behind the fixed left edge. The lateral distance between the edges was jittered about a mean value of 4 mm to minimise the parallactic depth cue. Subjects prefixated coplanar edges at the fixed edge's distance. The test was then presented in place of the fixation stimulus. In one condition, the test remained until the subject responded. In a second condition, the test appeared for only 0.2 sec, too short a period for accommodation. Subjects made forced-choice judgements of the depth order of the edges. Depth acuity thresholds were obtained by the method of constant stimuli. Results: The proportion of correct responses was plotted against the depth between the two edges. For long exposures, mean depth acuity (80% correct responses) was 4 cm at a viewing distance of 37 cm. The threshold was higher in the short-duration condition. Conclusion: Monocular depth acuity from image blur was better than previously reported. Reasonably precise relative depth judgments were obtained by actively focussing between stimuli at different distances. Judgments were less precise when based on instantaneous blur.
Object blur and monocular depth perception.
Nguyen, V., Howard, I., & Allison, R.
In Program Society for Neuroscience Annual Meeting, No. 865.17. 2004 Abstract Viewer/Itinerary Planner. Washington, DC, 2004.

@incollection{Nguyen:2004zp,
  address = {Washington, DC},
  author = {Nguyen, V. and Howard, I.P. and Allison, R.S.},
  booktitle = {Program Society for Neuroscience Annual Meeting, No. 865.17. 2004 Abstract Viewer/Itinerary Planner},
  date-added = {2011-05-06 16:58:33 -0400},
  date-modified = {2011-05-18 16:08:18 -0400},
  keywords = {Depth perception},
  organization = {Society for Neuroscience},
  title = {Object blur and monocular depth perception},
  year = {2004}}

Detection of depth order from chromatic aberration of defocused images.
Howard, I. P., Nguyen, V. A., & Allison, R. S.
In Journal of Vision, volume 4, pages 57-57. 2004. doi: 10.1167/4.11.57

@incollection{allison200457-57,
  author = {Howard, Ian P. and Nguyen, Vincent A. and Allison, Robert S.},
  booktitle = {Journal of Vision},
  date-modified = {2012-07-02 17:58:50 -0400},
  doi = {10.1167/4.11.57},
  journal = {Journal of Vision},
  keywords = {Depth perception},
  number = {11},
  pages = {57-57},
  title = {Detection of depth order from chromatic aberration of defocused images},
  url-1 = {https://doi.org/10.1167/4.11.57},
  volume = {4},
  year = {2004}}

Abstract: The sign of an accommodative response is provided by differences in chromatic aberration between under- and over-accommodated images. We asked whether these differences enable people to judge the depth order of two stimuli in the absence of other depth cues. Two vertical test edges separated laterally by an illuminated gap were presented to one eye with one edge at random distances relative to the other fixed edge. The fixed edge was at the same distance as two coplanar pre-fixation edges. In one condition, exposure duration was brief so that accommodation could not change. In other conditions exposure was prolonged and subjects either continued to fixate the fixed edge or changed their accommodation between the two test edges. The gap was illuminated with tungsten light or monochromatic light. Subjects could detect image blur of about 0.3 D with brief exposure for both types of light. However, they could detect depth order only in tungsten light with long exposure, with or without changes in accommodation.
Saccadic suppression of motion of the entire visual field.
Allison, R., Schumacher, J., & Herpers, R.
In Perception, volume 33, pages 146-146. 2004.

@incollection{allison2004146-146,
  author = {Allison, R.S. and Schumacher, J. and Herpers, R.},
  booktitle = {Perception},
  date-modified = {2011-09-12 21:58:56 -0400},
  journal = {Perception},
  keywords = {Eye Movements & Tracking},
  pages = {146-146},
  title = {Saccadic suppression of motion of the entire visual field},
  url-1 = {http://www.perceptionweb.com/abstract.cgi?id=v040564},
  volume = {33},
  year = {2004}}


inproceedings (10)

New Simple Virtual Walking Method - Walking on the Spot.
Yan, L., Allison, R., & Rushton, S.
In 8th Annual Immersive Projection Technology (IPT) Symposium, Ames, Iowa, May 13th-14th, 2004.

@inproceedings{Yan:2004ti,
  address = {Ames, Iowa},
  author = {Yan, L. and Allison, R.S. and Rushton, S.K.},
  booktitle = {8th Annual Immersive Projection Technology (IPT) Symposium},
  date-added = {2011-05-06 13:30:27 -0400},
  date-modified = {2011-05-18 16:07:33 -0400},
  keywords = {Augmented & Virtual Reality},
  month = {May 13th-14th},
  read = {1},
  title = {New Simple Virtual Walking Method - Walking on the Spot},
  url-1 = {https://percept.eecs.yorku.ca/papers/New_Simple_Virtual_Walking_Method.pdf},
  year = {2004}}

Abstract: In CAVE-like environments human locomotion is significantly restricted due to physical space and configural constraints. Interaction techniques based upon stepping in place have been suggested as a way to simulate long-range locomotion. We describe a new method for step detection and estimation of forward walking speed and direction in an immersive virtual environment. To calibrate our system and to help in the modeling of the stepping behaviour, we collected motion capture data during real locomotion down a hallway while walking at different freely selected speeds, from very slow to very fast. From this data, the empirical relation between the forward speed of real walking and the trajectory of the leg motion during stepping was established. A simple model of stepping motion was fit for individual subjects. The model was used to estimate forward walking speed and direction from step characteristics during walking in place in a six-walled virtual environment. The system provides natural and effective simulated gait for interaction and travel within the virtual environment and provides the ability to study human locomotion and navigation in a CAVE-like environment.
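A walking-on-the-spot controller of this general shape detects steps as peaks in a tracked leg trajectory and maps step characteristics to a forward speed. The Python sketch below is an illustration only: the linear speed = gain × frequency × amplitude mapping and the gain stand in for the per-subject model fit described in the paper.

import numpy as np

def stepping_speed(knee_height, dt=1/60, gain=0.5):
    """Estimate forward walking speed from stepping-in-place leg motion.

    knee_height: tracked vertical knee/foot position over time (m).
    A step is a local peak in the vertical trajectory; speed is taken as
    gain * step_frequency * step_amplitude (an assumed linear mapping).
    """
    h = np.asarray(knee_height)
    # indices of samples strictly above both neighbours = step peaks
    peaks = np.flatnonzero((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:])) + 1
    if len(peaks) < 2:
        return 0.0
    step_freq = 1.0 / (np.mean(np.diff(peaks)) * dt)   # steps per second
    step_amp  = h[peaks].mean() - h.min()              # vertical excursion (m)
    return gain * step_freq * step_amp

t = np.arange(0, 5, 1/60)
print(stepping_speed(0.25 + 0.1 * np.sin(2 * np.pi * 1.5 * t)))  # 1.5 Hz stepping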
Combined Head-Eye Tracking for Immersive Virtual Reality.
Huang, H., Allison, R. S., & Jenkin, M.
In ICAT'2004 14th International Conference on Artificial Reality and Telexistence, Seoul, Korea, November 30th-December 2nd, 2004.

@inproceedings{Huang:2004kk,
  address = {Seoul, Korea},
  author = {Huang, H. and Allison, R.S. and Jenkin, M.R.M.},
  booktitle = {ICAT'2004 14th International Conference on Artificial Reality and Telexistence},
  date-added = {2011-05-06 13:26:18 -0400},
  date-modified = {2011-05-18 15:45:14 -0400},
  keywords = {Eye Movements & Tracking},
  month = {November 30th- December 2nd},
  title = {Combined Head-Eye Tracking for Immersive Virtual Reality},
  url-1 = {https://percept.eecs.yorku.ca/papers/icat2004.pdf},
  url-2 = {http://www.vrsj.org/ic-at/https://percept.eecs.yorku.ca/papers/2004/S3-3.pdf},
  year = {2004}}

Abstract: Real-time gaze tracking is a promising interaction technique for virtual environments. Immersive projection-based virtual reality systems such as the CAVE(TM) allow users a wide range of natural movements. Unfortunately, most head and eye movement measurement techniques are of limited use during free head and body motion. An improved head-eye tracking system is proposed and developed for use in immersive applications with free head motion. The system is based upon a head-mounted video-based eye tracking system and a hybrid ultrasound-inertial head tracking system. The system can measure the point of regard in a scene in real-time during relatively large head movements. The system will serve as a flexible testbed for evaluating novel gaze-contingent interaction techniques in virtual environments. The calibration of the head-eye tracking system is one of the most important issues that needs to be addressed. In this paper, a simple view-based calibration method is proposed.
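Computing a point of regard from separate head and eye trackers amounts to composing the head pose with the calibrated eye direction and intersecting the resulting ray with a display surface. The sketch below shows that generic composition under assumed conventions (head frame, a wall in the plane z = 0, hypothetical function names); calibration details from the paper are not reproduced.

import numpy as np

def gaze_in_world(head_pos, head_R, eye_dir_head):
    """Point-of-regard ray in world coordinates.

    head_pos: 3-vector from the head tracker; head_R: 3x3 head rotation;
    eye_dir_head: unit gaze direction from the eye tracker, expressed in the
    head frame after per-user calibration.
    """
    d = head_R @ np.asarray(eye_dir_head, float)
    return np.asarray(head_pos, float), d / np.linalg.norm(d)

def hit_wall_z0(origin, direction):
    """Intersect the gaze ray with a display wall lying in the plane z = 0."""
    t = -origin[2] / direction[2]
    return origin + t * direction

o, d = gaze_in_world([0, 1.6, 2.0], np.eye(3), [0.1, -0.05, -1.0])
print(hit_wall_z0(o, d))   # world coordinates of the fixated point on the wall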
Validation of synthetic imagery for night vision devices.
Thomas, P. J., Allison, R., Jennings, S., Yip, K., Savchenko, E., Tsang, I., Macuda, T., & Hornsey, R.
In Rash, C. E., & Reese, C. E., editors, Helmet and Head-Mounted Displays IX: Technologies and Applications, volume 5442 of Proc. SPIE - Int. Soc. Opt. Eng. (USA), pages 25-35, Orlando, FL, USA, 2004. SPIE-Int. Soc. Opt. Eng. doi: 10.1117/12.542618

@inproceedings{allison200425-35,
  address = {Orlando, FL, USA},
  author = {Thomas, P. J. and Allison, R.S. and Jennings, S. and Yip, K. and Savchenko, E. and Tsang, I. and Macuda, T. and Hornsey, R.},
  booktitle = {Helmet and Head-Mounted Displays IX: Technologies and Applications},
  date-modified = {2012-07-02 22:06:44 -0400},
  doi = {10.1117/12.542618},
  editor = {Rash, C. E. and Reese, C. E.},
  keywords = {Night Vision},
  pages = {25-35},
  publisher = {SPIE-Int. Soc. Opt. Eng},
  series = {Proc. SPIE - Int. Soc. Opt. Eng. (USA)},
  title = {Validation of synthetic imagery for night vision devices},
  url-1 = {https://doi.org/10.1117/12.542618},
  volume = {5442},
  year = {2004}}

Abstract: Night vision devices are important tools that extend the operational capability of military and civilian flight operations. Although these devices enhance some aspects of night vision, they distort or degrade other aspects. Scintillation of the NVG signal at low light levels is one of the parameters that may affect pilot performance. We have developed a parametric model of NVG image scintillation. Measurements were taken of the output of a representative NVG at low light levels to validate the model and refine the values of the embedded parameters. A simple test environment was created using a photomultiplier and an oscilloscope. The model was used to create sequences of simulated NVG imagery that were characterized numerically and compared with measured NVG signals. The sequences of imagery are intended for use in laboratory experiments on depth and motion-in-depth perception.
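At low light, intensifier output is dominated by sparse, high-gain photon events smeared by phosphor persistence, which is what "scintillation" simulations reproduce. The toy model below illustrates that structure only; the parameter names, values, and first-order phosphor decay are assumptions, not the paper's fitted model.

import numpy as np

def scintillation_sequence(n_frames=100, size=64, photon_rate=0.02,
                           event_gain=50.0, decay=0.5, rng=None):
    """Toy image-intensifier scintillation: sparse photon events plus decay.

    photon_rate: expected photocathode events per pixel per frame (low light);
    event_gain: intensifier gain per event; decay: fraction of the previous
    frame's phosphor output persisting into the current frame.
    """
    rng = np.random.default_rng(rng)
    frames = np.empty((n_frames, size, size))
    screen = np.zeros((size, size))
    for t in range(n_frames):
        events = rng.poisson(photon_rate, (size, size))  # Poisson photon arrivals
        screen = decay * screen + event_gain * events    # phosphor persistence
        frames[t] = screen
    return frames

seq = scintillation_sequence(rng=0)
print(seq.mean(), seq.std())   # sparse, high-variance 'scintillating' signal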
Concept for image intensifier with CMOS imager output interface.
Thomas, P. J., Allison, R., Hornsey, R., & Wong, W.
In Armitage, J. C., Lessard, R. A., & Lampropoulos, G. A., editors, Photonics North: Applications of Photonic Technology 7b, Pts 1 and 2 - Closing the Gap between Theory, Development, and Application - Photonic Applications in Astronomy, Biomedicine, Imaging, Materials Processing, and Education, volume 5578 of Proceedings of the Society of Photo-Optical Instrumentation Engineers (SPIE), pages 353-364, Ottawa, Canada, 2004. SPIE-Int Soc Optical Engineering. doi: 10.1117/12.567712

@inproceedings{allison2004353-364,
  address = {Ottawa, Canada},
  author = {Thomas, P. J. and Allison, R.S. and Hornsey, R. and Wong, W.},
  booktitle = {Photonics North: Applications of Photonic Technology 7b, Pts 1 and 2 - Closing the Gap between Theory, Development, and Application - Photonic Applications in Astronomy, Biomedicine, Imaging, Materials Processing, and Education},
  date-modified = {2012-07-02 22:29:10 -0400},
  doi = {10.1117/12.567712},
  editor = {Armitage, J. C. and Lessard, R. A. and Lampropoulos, G. A.},
  keywords = {Night Vision},
  pages = {353-364},
  publisher = {Spie-Int Soc Optical Engineering},
  series = {Proceedings of the Society of Photo-Optical Instrumentation Engineers (Spie)},
  title = {Concept for image intensifier with CMOS imager output interface},
  url-1 = {https://doi.org/10.1117/12.567712},
  volume = {5578},
  year = {2004}}

Abstract: A concept is described for the detection and location of transient objects, in which a "pixel-binary" CMOS imager is used to give a very high effective frame rate for the imager. The sensitivity to incoming photons is enhanced by the use of an image intensifier in front of the imager. For faint signals and a high enough frame rate, a single "image" typically contains only a few photon or noise events. Only the event locations need be stored, rather than the full image. The processing of many such "fast frames" allows a composite image to be created. In the composite image, isolated noise events can be removed, photon shot noise effects can be spatially smoothed and moving objects can be de-blurred and assigned a velocity vector. Expected objects can be masked or removed by differencing methods. In this work, the concept of a combined image intensifier/CMOS imager is modeled. Sensitivity, location precision and other performance factors are assessed. Benchmark measurements are used to validate aspects of the model. Options for a custom CMOS imager design concept are identified within the context of the benefits and drawbacks of commercially available night vision devices and CMOS imagers.
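The event-list representation described above (store only per-frame event locations, then build a composite) can be sketched directly. In this Python illustration the count-threshold rule for rejecting isolated noise events is an assumption; the paper's de-blurring and differencing steps are not reproduced.

import numpy as np

def composite_from_events(event_lists, shape, min_count=2):
    """Composite image from per-frame event locations ('fast frames').

    event_lists: one (row, col) integer array per fast frame; only locations
    are stored, as in the concept above. Accumulating counts and discarding
    pixels seen fewer than min_count times suppresses isolated noise events.
    """
    acc = np.zeros(shape, dtype=int)
    for rc in event_lists:
        if len(rc):
            acc[rc[:, 0], rc[:, 1]] += 1
    return np.where(acc >= min_count, acc, 0)

rng = np.random.default_rng(3)
# 200 fast frames, each with a handful of random event locations on a 32x32 grid
frames = [rng.integers(0, 32, (rng.poisson(4), 2)) for _ in range(200)]
img = composite_from_events(frames, (32, 32))
print(img.sum(), (img > 0).sum())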
Using saccadic suppression to hide graphic updates.
Schumacher, J., Allison, R., & Herpers, R.
In Eurographic/ACM SIGGRAPH Symposium on Virtual Environments, pages 17-24, Grenoble, France, 2004.

@inproceedings{allison200417-24,
  address = {Grenoble, France},
  author = {Schumacher, J. and Allison, R.S. and Herpers, R.},
  booktitle = {Eurographic/{ACM} {SIGGRAPH} Symposium on Virtual Environments},
  date-modified = {2011-05-11 13:32:42 -0400},
  keywords = {Eye Movements & Tracking},
  pages = {17-24},
  title = {Using saccadic suppression to hide graphic updates},
  url-1 = {https://percept.eecs.yorku.ca/papers/Schumacher-Using_Saccadic_Suppression.pdf},
  url-2 = {http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.71.3306},
  year = {2004}}

Detection of motion-defined form under simulated night vision conditions.
Macuda, T., Allison, R. S., Thomas, P., Craig, G., & Jennings, S.
In volume 5442 of Proceedings of SPIE - The International Society for Optical Engineering, pages 36-44, Orlando, FL, United States, 2004. International Society for Optical Engineering. doi: 10.1117/12.542633

@inproceedings{allison200436-44,
  address = {Orlando, FL, United States},
  author = {Macuda, Todd and Allison, Robert S. and Thomas, Paul and Craig, Greg and Jennings, Sion},
  date-modified = {2012-07-02 22:28:38 -0400},
  doi = {10.1117/12.542633},
  keywords = {Night Vision},
  pages = {36-44},
  publisher = {International Society for Optical Engineering, Bellingham, WA},
  series = {Proceedings of SPIE - The International Society for Optical Engineering},
  title = {Detection of motion-defined form under simulated night vision conditions},
  url-1 = {https://doi.org/10.1117/12.542633},
  volume = {5442},
  year = {2004}}

Abstract: The influence of Night Vision Goggle-produced noise on the perception of motion-defined form was investigated using synthetic imagery and standard psychophysical procedures. Synthetic image sequences incorporating synthetic noise were generated using a software model developed by our research group. This model is based on the physical properties of the Aviator Night Vision Imaging System (ANVIS-9) image intensification tube. The image sequences either depicted a target that moved at a different speed than the background, or only depicted the background. For each trial, subjects were shown a pair of image sequences and required to indicate which sequence contained the target stimulus. We tested subjects at a series of target speeds at several realistic noise levels resulting from varying simulated illumination. The results showed that subjects had increased difficulty detecting the target with increased noise levels, particularly at slower target speeds. This study suggests that the capacity to detect motion-defined form is degraded at low levels of illumination. Our findings are consistent with anecdotal reports of impaired motion perception in NVGs. Perception of motion-defined form is important in operational tasks such as search and rescue and camouflage breaking. These degradations in performance should be considered in operational planning.
An optical-inertial tracking system for fully-enclosed VR displays.
Hogue, A., Jenkin, M. R., & Allison, R.
In 1st Canadian Conference on Computer and Robot Vision, Proceedings, pages 22-29, London, ON, 2004. IEEE Computer Soc, Los Alamitos. doi: 10.1109/CCCRV.2004.1301417

@inproceedings{allison200422-29,
  address = {London, ON},
  author = {Hogue, A. and Jenkin, M. R. and Allison, R.S.},
  booktitle = {1st Canadian Conference on Computer and Robot Vision, Proceedings},
  date-modified = {2011-05-11 13:26:13 -0400},
  doi = {10.1109/CCCRV.2004.1301417},
  keywords = {Augmented & Virtual Reality},
  pages = {22-29},
  publisher = {IEEE Computer Soc, Los Alamitos},
  title = {An optical-inertial tracking system for fully-enclosed VR displays},
  url-1 = {https://doi.org/10.1109/CCCRV.2004.1301417},
  year = {2004}}

Abstract: This paper describes a hybrid optical-inertial tracking technology for fully-immersive projective displays. In order to track the operator, the operator wears a 3DOF commercial inertial tracking system coupled with a set of laser diodes arranged in a known configuration. The projections of this laser constellation on the display walls are tracked visually to compute the 6DOF absolute head pose of the user. The absolute pose is combined with the inertial tracker data using an extended Kalman filter to maintain a robust estimate of position and orientation. This paper describes the basic tracking system including the hardware and software infrastructure.
\n\n\n
\n\n\n
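The fusion step described above, in which high-rate inertial data is corrected by slower absolute optical pose fixes, can be illustrated with a deliberately simplified sketch. The snippet below is my own one-dimensional (yaw-only) linear analogue of the paper's 6-DOF extended Kalman filter; the constants, noise levels, and the `fuse_yaw` helper are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def fuse_yaw(gyro_rates, optical_fixes, dt=0.01, q=1e-4, r=1e-2):
    """gyro_rates: rad/s per tick; optical_fixes: {tick: absolute yaw (rad)}."""
    yaw, p = 0.0, 1.0              # state estimate and its variance
    out = []
    for k, w in enumerate(gyro_rates):
        yaw += w * dt              # predict: integrate the inertial rate
        p += q                     # process noise grows the uncertainty
        if k in optical_fixes:     # correct: absolute pose from the vision system
            gain = p / (p + r)
            yaw += gain * (optical_fixes[k] - yaw)
            p *= (1.0 - gain)
        out.append(yaw)
    return np.array(out)

# Toy usage: a biased, noisy gyro during a constant 0.1 rad/s turn, with an
# absolute optical fix every 50 ticks pulling the drifting estimate back.
rng = np.random.default_rng(0)
true_yaw = 0.1 * 0.01 * np.arange(500)
rates = 0.12 + rng.normal(0.0, 0.01, 500)   # true 0.1 rad/s plus 0.02 rad/s bias
fixes = {k: true_yaw[k] + rng.normal(0.0, 0.005) for k in range(0, 500, 50)}
estimate = fuse_yaw(rates, fixes)
print(abs(estimate[-1] - true_yaw[-1]))     # stays small despite the gyro bias
```

Even in one dimension the design point survives: the inertial term keeps the estimate responsive between fixes, while each absolute fix bounds the accumulated drift.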
\n \n\n \n \n \n \n \n \n Hiding graphic updates during long saccadic suppression periods.\n \n \n \n \n\n\n \n Herpers, R., Schumacher, J., & Allison, R.\n\n\n \n\n\n\n In Ilg, U., editor(s), Dynamische Perception 2004, pages 77-82, 2004. Infix- IOS Press BV Amsterdam\n \n\n\n\n
\n
@inproceedings{allison200477-82,\n\tabstract = {Abstract. During visual exploration of a scene human beings can be insensitive\nto even large changes in the scene when the eye is executing rapid or saccadic\neye movements. In this contribution, this period of saccadic suppression was\nexploited to hide graphics updates in immersive environments. Two experiments\nwere conducted. In the first experiment the general sensitivity of observers to trans -saccadic translations of large images of complex natural scenes was studied. It was found that trans -saccadic changes of up to 1.2 degrees of visual\nangle were seldom noticed during saccades with duration of at least 66 ms. In the second experiment, the perceived magnitude of trans -saccadic translation\nwas compared to the perceived magnitude of image translation when no saccade was performed to determine the point of subjective equality. It was found that trans-saccadic displacements were perceived as approximately half as big as equivalent sized inter-saccadic displacements.},\n\tauthor = {Herpers, R. and Schumacher, J. and Allison, R.S.},\n\tbooktitle = {Dynamische Perception 2004},\n\tdate-modified = {2011-05-11 13:32:43 -0400},\n\teditor = {Ilg, U.J.},\n\tkeywords = {Eye Movements & Tracking},\n\tpages = {77-82},\n\tpublisher = {Infix- IOS Press BV Amsterdam},\n\ttitle = {Hiding graphic updates during long saccadic suppression periods},\n\turl-1 = {https://percept.eecs.yorku.ca/papers/Herpers-Hiding-graphic-ups.pdf},\n\tyear = {2004}}\n\n
\n
\n\n\n
\n During visual exploration of a scene, human beings can be insensitive to even large changes in the scene when the eye is executing rapid, or saccadic, eye movements. In this contribution, this period of saccadic suppression was exploited to hide graphics updates in immersive environments. Two experiments were conducted. In the first experiment, the general sensitivity of observers to trans-saccadic translations of large images of complex natural scenes was studied. It was found that trans-saccadic changes of up to 1.2 degrees of visual angle were seldom noticed during saccades with durations of at least 66 ms. In the second experiment, the perceived magnitude of trans-saccadic translation was compared to the perceived magnitude of image translation when no saccade was performed, to determine the point of subjective equality. It was found that trans-saccadic displacements were perceived as approximately half as big as equivalent-sized inter-saccadic displacements.\n
\n\n\n
\n\n\n
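As a rough illustration of the update-hiding logic this abstract motivates, the sketch below gates scene shifts on a simple eye-velocity threshold and caps each shift near the roughly 1.2 degree trans-saccadic displacement reported as seldom noticed. The velocity threshold, tick structure, and `step` helper are my assumptions, not the authors' implementation.

```python
# Assumed constants: only MAX_SHIFT_DEG comes from the paper's result.
SACCADE_VEL_DEG_S = 100.0   # eye-velocity threshold for calling a saccade
MAX_SHIFT_DEG     = 1.2     # per-saccade displacement budget from the paper

def step(eye_vel_deg_s, pending_shift_deg, scene_offset_deg):
    """One tracker tick: returns (updated scene offset, remaining pending shift)."""
    in_saccade = abs(eye_vel_deg_s) > SACCADE_VEL_DEG_S
    if in_saccade and pending_shift_deg != 0.0:
        applied = max(-MAX_SHIFT_DEG, min(MAX_SHIFT_DEG, pending_shift_deg))
        scene_offset_deg += applied          # hidden by saccadic suppression
        pending_shift_deg -= applied
    return scene_offset_deg, pending_shift_deg
```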
\n \n\n \n \n \n \n \n \n Effects of network delay on a collaborative motor task with telehaptic and televisual feedback.\n \n \n \n \n\n\n \n Allison, R. S., Zacher, J. E., Wang, D., & Shu, J.\n\n\n \n\n\n\n In Proceedings of VRCAI 2004 - ACM SIGGRAPH International Conference on Virtual Reality Continuum and its Applications in Industry, pages 375-381, New York, NY 10036-5701, United States, 2004. Association for Computing Machinery\n \n\n\n\n
\n
@inproceedings{allison2004375-381,\n\tabstract = {The incorporation of haptic interfaces into collaborative virtual environments is challenging when the users are geographically distributed. Reduction of latency is essential for maintaining realism, causality and the sense of co-presence in collaborative virtual environments during closely-coupled haptic tasks. In this study we consider the effects of varying amounts of simulated constant delay on the performance of a simple collaborative haptic task. The task was performed with haptic feedback alone or with visual feedback alone. Subjects were required to make a coordinated movement of their haptic displays as rapidly as possible, while maintaining a target simulated spring force between their end effector and that of their collaborator. Increasing simulated delay resulted in a decrease in performance, either in deviation from target spring force and in increased time to complete the task. At large latencies, there was evidence of dissociation between the states of the system that was observed by each of the collaborating users. This confirms earlier anecdotal evidence that users can be essentially seeing qualitatively different simulations with typical long distance network delays.},\n\taddress = {New York, NY 10036-5701, United States},\n\tauthor = {Allison, Robert S. and Zacher, James E. and Wang, David and Shu, Joseph},\n\tdate-modified = {2012-01-19 19:52:56 -0500},\n\tdoi = {10.1145/1044588.1044670},\n\tkeywords = {Augmented & Virtual Reality},\n\tpages = {375-381},\n\tpublisher = {Association for Computing Machinery},\n\tseries = {Proceedings {VRCAI} 2004 - {ACM} {SIGGRAPH} International Conference on Virtual Reality Continuum and its Applications in Industry},\n\ttitle = {Effects of network delay on a collaborative motor task with telehaptic and televisual feedback},\n\turl-1 = {http://dx.doi.org/10.1145/1044588.1044670},\n\tyear = {2004},\n\turl-1 = {https://doi.org/10.1145/1044588.1044670}}\n\n
\n
\n\n\n
\n The incorporation of haptic interfaces into collaborative virtual environments is challenging when the users are geographically distributed. Reduction of latency is essential for maintaining realism, causality and the sense of co-presence in collaborative virtual environments during closely-coupled haptic tasks. In this study we consider the effects of varying amounts of simulated constant delay on the performance of a simple collaborative haptic task. The task was performed with haptic feedback alone or with visual feedback alone. Subjects were required to make a coordinated movement of their haptic displays as rapidly as possible, while maintaining a target simulated spring force between their end effector and that of their collaborator. Increasing simulated delay resulted in a decrease in performance, both in increased deviation from the target spring force and in increased time to complete the task. At large latencies, there was evidence of dissociation between the states of the system observed by each of the collaborating users. This confirms earlier anecdotal evidence that users may effectively see qualitatively different simulations with typical long-distance network delays.\n
\n\n\n
\n\n\n
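The closely-coupled spring task lends itself to a toy simulation. The sketch below (my construction, not the study's software) couples two simulated end effectors through a spring where each side only sees the other's position after a constant delay; the dissociation between the two users' views of the shared state grows with that delay. All physical constants are arbitrary.

```python
from collections import deque

def dissociation(delay_ticks, n=2000, dt=0.001, k=50.0, m=0.5, damping=2.0):
    x, v = [0.0, 0.1], [0.0, 0.0]
    # each side's delayed view of the *other* side's end effector
    view = [deque([x[1]] * (delay_ticks + 1), maxlen=delay_ticks + 1),
            deque([x[0]] * (delay_ticks + 1), maxlen=delay_ticks + 1)]
    worst = 0.0
    for _ in range(n):
        for i in (0, 1):
            f = -k * (x[i] - view[i][0]) - damping * v[i]  # spring to delayed partner
            v[i] += f / m * dt
            x[i] += v[i] * dt
        view[0].append(x[1])
        view[1].append(x[0])
        # the two users' locally perceived spring extensions need not agree
        worst = max(worst, abs((x[0] - view[0][0]) - (view[1][0] - x[1])))
    return worst

print([round(dissociation(d), 4) for d in (0, 10, 100)])  # grows with delay
```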
\n \n\n \n \n \n \n \n \n The camera convergence problem revisited.\n \n \n \n \n\n\n \n Allison, R.\n\n\n \n\n\n\n In Woods, A. J., Merritt, J. O., Benton, S. A., & Bolas, M. T., editor(s), Stereoscopic Displays and Virtual Reality Systems XI, volume 5291 of Proceedings of the Society of Photo-Optical Instrumentation Engineers (SPIE), pages 167-178, Bellingham, 2004. SPIE-Int Soc Optical Engineering\n \n\n\n\n
\n
@inproceedings{allison2004167-178,\n\tabstract = {Convergence of the real or virtual stereoscopic cameras is an important operation in stereoscopic display systems. For example, convergence can shift the range of portrayed depth to improve visual comfort; can adjust the disparity of targets to bring them nearer to the screen and reduce accommodation-vergence conflict, or can bring objects of interest into the binocular field-of-view. Although camera convergence is acknowledged as a useful function, there has been considerable debate over the transformation required. It is well known that rotational camera convergence or 'toe-in' distorts the images in the two cameras producing patterns of horizontal and vertical disparities that can cause problems with fusion of the stereoscopic imagery. Behaviourally, similar retinal vertical disparity patterns are known to correlate with viewing distance and strongly affect perception of stereoscopic shape and depth. There has been little analysis of the implications of recent findings on vertical disparity processing for the design of stereoscopic camera and display systems. We ask how such distortions caused by camera convergence affect the ability to fuse and perceive stereoscopic images.},\n\taddress = {Bellingham},\n\tauthor = {Allison, R.S.},\n\tbooktitle = {Stereoscopic Displays and Virtual Reality Systems XI},\n\tdate-modified = {2012-07-02 22:31:57 -0400},\n\tdoi = {10.1117/12.526278},\n\teditor = {Woods, A. J. and Merritt, J. O. and Benton, S. A. and Bolas, M. T.},\n\tkeywords = {Stereopsis},\n\tpages = {167-178},\n\tpublisher = {Spie-Int Soc Optical Engineering},\n\tseries = {Proceedings of the Society of Photo-Optical Instrumentation Engineers (Spie)},\n\ttitle = {The camera convergence problem revisited},\n\turl-1 = {http://dx.doi.org/10.1117/12.526278},\n\turl-2 = {http://dx.doi.org/10.1117/12.526278},\n\tvolume = {5291},\n\tyear = {2004},\n\turl-1 = {https://doi.org/10.1117/12.526278}}\n\n
\n
\n\n\n
\n Convergence of the real or virtual stereoscopic cameras is an important operation in stereoscopic display systems. For example, convergence can shift the range of portrayed depth to improve visual comfort; can adjust the disparity of targets to bring them nearer to the screen and reduce accommodation-vergence conflict, or can bring objects of interest into the binocular field-of-view. Although camera convergence is acknowledged as a useful function, there has been considerable debate over the transformation required. It is well known that rotational camera convergence or 'toe-in' distorts the images in the two cameras producing patterns of horizontal and vertical disparities that can cause problems with fusion of the stereoscopic imagery. Behaviourally, similar retinal vertical disparity patterns are known to correlate with viewing distance and strongly affect perception of stereoscopic shape and depth. There has been little analysis of the implications of recent findings on vertical disparity processing for the design of stereoscopic camera and display systems. We ask how such distortions caused by camera convergence affect the ability to fuse and perceive stereoscopic images.\n
\n\n\n
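The toe-in distortion the paper analyses follows directly from pinhole geometry, and a short worked example makes it concrete. Below, one off-axis 3-D point is projected through two cameras converged ("toed-in") at 1 m; the difference in vertical image coordinates is the vertical disparity that the rotation introduces, whereas parallel cameras would give zero. The parameter values are mine, chosen only for illustration.

```python
import numpy as np

def project(point, cam_pos, toe_in):
    """Pinhole projection after rotating the camera about the vertical axis."""
    c, s = np.cos(toe_in), np.sin(toe_in)
    R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])  # yaw rotation
    p = R.T @ (point - cam_pos)        # world -> camera coordinates
    return p[:2] / p[2]                # normalised image coordinates

ipd, dist = 0.065, 1.0                 # 6.5 cm separation, converged at 1 m
theta = np.arctan2(ipd / 2, dist)      # toe-in angle so the optical axes meet at 1 m
pt = np.array([0.3, 0.2, 1.0])         # a point above and to the side of fixation
left = project(pt, np.array([-ipd / 2, 0.0, 0.0]), +theta)
right = project(pt, np.array([+ipd / 2, 0.0, 0.0]), -theta)
print("vertical disparity:", left[1] - right[1])   # non-zero under toe-in
```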
\n\n\n\n\n\n
\n
\n\n\n\n\n
\n
\n\n
\n
\n  \n 2003\n \n \n (3)\n \n \n
\n
\n \n \n
\n
\n  \n article\n \n \n (4)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Erratum: Coherent perspective jitter induces visual illusions of self-motion (Perception (2003) 32 (97-110)).\n \n \n \n \n\n\n \n Palmisano, S., Burke, D., & Allison, R.\n\n\n \n\n\n\n Perception, 32(8): 1028. 2003.\n \n\n\n\n
\n
@article{allison20031028,\n\tauthor = {Palmisano, S. and Burke, D. and Allison, R.S.},\n\tdate-modified = {2011-05-11 13:15:55 -0400},\n\tjournal = {Perception},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tnumber = {8},\n\tpages = {1028},\n\ttitle = {Erratum: Coherent perspective jitter induces visual illusions of self-motion (Perception (2003) 32 (97-110))},\n\turl-1 = {http://www.ncbi.nlm.nih.gov/pubmed/12613789},\n\tvolume = {32},\n\tyear = {2003}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Coherent perspective jitter induces visual illusions of self-motion.\n \n \n \n \n\n\n \n Palmisano, S., Burke, D., & Allison, R.\n\n\n \n\n\n\n Perception, 32(1): 97-110. 2003.\n \n\n\n\n
\n
@article{allison200397-110,\n\tabstract = {Palmisano et al (2000 Perception 29 57-67) found that adding coherent perspective jitter to constant-velocity radial flow improved visually induced illusions of self-motion (vection). This was a surprising finding, because unlike pure radial flow, this jittering radial flow should have generated sustained visual--vestibular conflicts--previously thought to always reduce/impair vection. We attempted to ascertain the essential stimulus features for this jitter advantage for vection by examining three novel types of jitter display. While adding incoherent jitter to radial flow was found to impair vection, adding coherent non-perspective jitter had little effect on this subjective experience (contrary to the notion that jitter improves vection by reducing adaptation to radial flow). Importantly, we found that coherent perspective jitter not only improves the vection induced by radial flow, but it also appears to induce modest vection by itself (demonstrating that vection can still occur when there is an extreme mismatch between actual and expected vestibular activity). These results suggest that the previously demonstrated advantage for coherent perspective jitter was due (in part at least) to jittering vection combining with forwards vection in depth to produce a more compelling overall vection experience.},\n\tauthor = {Palmisano, S. and Burke, D. and Allison, R.S.},\n\tdate-modified = {2012-07-02 19:13:08 -0400},\n\tdoi = {10.1068/p3468},\n\tjournal = {Perception},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tnumber = {1},\n\tpages = {97-110},\n\ttitle = {Coherent perspective jitter induces visual illusions of self-motion},\n\turl-1 = {http://dx.doi.org/10.1068/p3468},\n\tvolume = {32},\n\tyear = {2003},\n\turl-1 = {https://doi.org/10.1068/p3468}}\n\n
\n
\n\n\n
\n Palmisano et al (2000 Perception 29 57-67) found that adding coherent perspective jitter to constant-velocity radial flow improved visually induced illusions of self-motion (vection). This was a surprising finding, because unlike pure radial flow, this jittering radial flow should have generated sustained visual–vestibular conflicts, which were previously thought to always reduce or impair vection. We attempted to ascertain the essential stimulus features for this jitter advantage for vection by examining three novel types of jitter display. While adding incoherent jitter to radial flow was found to impair vection, adding coherent non-perspective jitter had little effect on this subjective experience (contrary to the notion that jitter improves vection by reducing adaptation to radial flow). Importantly, we found that coherent perspective jitter not only improves the vection induced by radial flow, but it also appears to induce modest vection by itself (demonstrating that vection can still occur when there is an extreme mismatch between actual and expected vestibular activity). These results suggest that the previously demonstrated advantage for coherent perspective jitter was due (in part at least) to jittering vection combining with forwards vection in depth to produce a more compelling overall vection experience.\n
\n\n\n
\n\n\n
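The distinction between coherent perspective jitter and coherent non-perspective jitter can be sketched numerically: jittering the simulated viewpoint displaces each dot in inverse proportion to its depth, while a non-perspective (rigid image) jitter displaces every dot equally. The snippet below is my own illustration of that contrast, not the stimulus code.

```python
import numpy as np

rng = np.random.default_rng(1)
depths = rng.uniform(1.0, 10.0, 5)     # dot depths in metres
xs = rng.uniform(-0.5, 0.5, 5)         # dot horizontal positions in metres

def image_x(x, z, cam_x=0.0):
    return (x - cam_x) / z             # pinhole projection, unit focal length

jitter = 0.02                          # 2 cm lateral viewpoint displacement
perspective = image_x(xs, depths, cam_x=jitter) - image_x(xs, depths)
non_perspective = np.full(5, -jitter / depths.mean())   # one rigid image shift

print("perspective (scales with 1/depth):", np.round(perspective, 4))
print("non-perspective (uniform):        ", np.round(non_perspective, 4))
```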
\n \n\n \n \n \n \n \n \n Effects of scleral search coil wear on visual function.\n \n \n \n \n\n\n \n Irving, E. L., Zacher, J. E., Allison, R., & Callender, M. G.\n\n\n \n\n\n\n Invest Ophthalmol Vis Sci, 44(5): 1933-8. 2003.\n \n\n\n\n
\n
@article{allison20031933-8,\n\tabstract = {PURPOSE: The scleral search coil is widely regarded as the gold standard measurement technique for eye movements. The effect of wearing scleral search coils on human vision has not been systematically studied. However, there are anecdotal reports of degraded visual acuity, mild eye irritation, and an increase rise in intraocular pressure (IOP). The current study was conducted to investigate the effect of scleral search coil use on visual acuity and ocular integrity. METHODS: Six subjects were examined; all had previously worn search coils. Two drops of topical anesthetic were administered before insertion of the coils. Coils were inserted by hand and secured by applying mild pressure. The coils were removed after 45 minutes or on request of either the subject or the clinician. Before, during (at 15-minutes intervals), and after the coil was worn, the following measurements were taken for both eyes: tonometry (noncontact), corneal topography, biomicroscopic examination, visual acuity (monocular Snellen), and an eye-discomfort rating. RESULTS: Scleral coils produced a variety of effects, including ocular discomfort, hyperemia of the bulbar conjunctiva, increased IOP, buckling of the iris, grade 2 and 3 corneal staining, and reduction in visual acuity. Effects appeared as early as 15 minutes after insertion of the coils. All observed effects seemed to be transient and dissipated after coils were removed. CONCLUSIONS: Scleral coils may not be appropriate for all subjects. The findings suggest that there is a need for thorough subject prescreening and that clinicians should consider the risk/benefit ratio. Acute reduction in visual acuity may confound search coil findings. More research is needed to determine the maximum wearing time for properly screened subjects.},\n\tauthor = {Irving, E. L. and Zacher, J. E. and Allison, R.S. and Callender, M. G.},\n\tdate-modified = {2012-07-02 19:20:49 -0400},\n\tdoi = {10.1167/iovs.01-0926},\n\tjournal = {Invest Ophthalmol Vis Sci},\n\tkeywords = {Eye Movements & Tracking},\n\tnumber = {5},\n\tpages = {1933-8},\n\ttitle = {Effects of scleral search coil wear on visual function},\n\turl-1 = {http://dx.doi.org/10.1167/iovs.01-0926},\n\tvolume = {44},\n\tyear = {2003},\n\turl-1 = {https://doi.org/10.1167/iovs.01-0926}}\n\n
\n
\n\n\n
\n PURPOSE: The scleral search coil is widely regarded as the gold standard measurement technique for eye movements. The effect of wearing scleral search coils on human vision has not been systematically studied. However, there are anecdotal reports of degraded visual acuity, mild eye irritation, and an increase in intraocular pressure (IOP). The current study was conducted to investigate the effect of scleral search coil use on visual acuity and ocular integrity. METHODS: Six subjects were examined; all had previously worn search coils. Two drops of topical anesthetic were administered before insertion of the coils. Coils were inserted by hand and secured by applying mild pressure. The coils were removed after 45 minutes or on request of either the subject or the clinician. Before, during (at 15-minute intervals), and after the coil was worn, the following measurements were taken for both eyes: tonometry (noncontact), corneal topography, biomicroscopic examination, visual acuity (monocular Snellen), and an eye-discomfort rating. RESULTS: Scleral coils produced a variety of effects, including ocular discomfort, hyperemia of the bulbar conjunctiva, increased IOP, buckling of the iris, grade 2 and 3 corneal staining, and reduction in visual acuity. Effects appeared as early as 15 minutes after insertion of the coils. All observed effects seemed to be transient and dissipated after coils were removed. CONCLUSIONS: Scleral coils may not be appropriate for all subjects. The findings suggest that there is a need for thorough subject prescreening and that clinicians should consider the risk/benefit ratio. Acute reduction in visual acuity may confound search coil findings. More research is needed to determine the maximum wearing time for properly screened subjects.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Geometric and induced effects in binocular stereopsis and motion parallax.\n \n \n \n \n\n\n \n Allison, R., Rogers, B. J., & Bradshaw, M. F.\n\n\n \n\n\n\n Vision Research, 43(17): 1879-93. 2003.\n \n\n\n\n
\n
@article{allison20031879-93,\n\tabstract = {This paper examines and contrasts motion-parallax analogues of the induced-size and induced-shear effects with the equivalent induced effects from binocular disparity. During lateral head motion or with binocular stereopsis, vertical-shear and vertical-size transformations produced 'induced effects' of apparent inclination and slant that are not predicted geometrically. With vertical head motion, horizontal-shear and horizontal-size transformations produced similar analogues of the disparity induced effects. Typically, the induced effects were opposite in direction and slightly smaller in size than the geometric effects. Local induced-shear and induced-size effects could be elicited from motion parallax, but not from disparity, and were most pronounced when the stimulus contained discontinuities in velocity gradient. The implications of these results are discussed in the context of models of depth perception from disparity and structure from motion},\n\tauthor = {Allison, R.S. and Rogers, B. J. and Bradshaw, M. F.},\n\tdate-modified = {2011-05-10 14:57:34 -0400},\n\tdoi = {10.1016/S0042-6989(03)00298-0},\n\tjournal = {Vision Research},\n\tkeywords = {Stereopsis},\n\tnumber = {17},\n\tpages = {1879-93},\n\ttitle = {Geometric and induced effects in binocular stereopsis and motion parallax},\n\turl-1 = {http://dx.doi.org/10.1016/S0042-6989(03)00298-0},\n\tvolume = {43},\n\tyear = {2003},\n\turl-1 = {https://doi.org/10.1016/S0042-6989(03)00298-0}}\n\n
\n
\n\n\n
\n This paper examines and contrasts motion-parallax analogues of the induced-size and induced-shear effects with the equivalent induced effects from binocular disparity. During lateral head motion or with binocular stereopsis, vertical-shear and vertical-size transformations produced 'induced effects' of apparent inclination and slant that are not predicted geometrically. With vertical head motion, horizontal-shear and horizontal-size transformations produced similar analogues of the disparity induced effects. Typically, the induced effects were opposite in direction and slightly smaller in size than the geometric effects. Local induced-shear and induced-size effects could be elicited from motion parallax, but not from disparity, and were most pronounced when the stimulus contained discontinuities in velocity gradient. The implications of these results are discussed in the context of models of depth perception from disparity and structure from motion.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n incollection\n \n \n (1)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n What are you looking at?.\n \n \n \n \n\n\n \n Huang, J., Schumacher, J., Allison, R., & Zacher, J.\n\n\n \n\n\n\n In CRESTech Innovation Network 2003. 2003.\n \n\n\n\n
\n
@incollection{Huang:2003zn,\n\tauthor = {Huang, J. and Schumacher, J. and Allison, R.S. and Zacher, J.E.},\n\tbooktitle = {CRESTech Innovation Network 2003},\n\tdate-added = {2011-05-09 11:22:27 -0400},\n\tdate-modified = {2011-05-18 16:31:38 -0400},\n\tkeywords = {Augmented & Virtual Reality},\n\ttitle = {What are you looking at?},\n\turl-1 = {https://percept.eecs.yorku.ca/papers/Huang-What_are_you_looking_at.pdf},\n\tyear = {2003}}\n\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n inproceedings\n \n \n (1)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n A vision-based head tracking system for fully immersive displays.\n \n \n \n \n\n\n \n Hogue, A., Robinson, M., Jenkin, M. R., & Allison, R.\n\n\n \n\n\n\n In IPT/EGVE 2003. Seventh Immersive Projection Technology Workshop. Ninth Eurographics Workshop on Virtual Environments, pages 179-87, Zurich, Switzerland, 2003. Eurographics Assoc\n \n\n\n\n
\n
@inproceedings{allison2003179-87,\n\tabstract = {Six-sided fully immersive projective displays present complex and novel problems for tracking systems. Existing tracking technologies typically require tracking equipment that is placed in locations or attached to the user in a way that is suitable for typical displays of five or less walls but which would interfere with the immersive experience within a fully enclosed display. This paper presents a vision-based tracking technology for fully-immersive projective displays. The technology relies on the operator wearing a set of laser diodes arranged in a specific configuration and then visually tracking the projection of these lasers on the external walls of the display outside of the user's view. This approach places minimal hardware on the user and no visible tracking equipment is placed within the immersive environment. This paper describes the basic visual tracking system including the hardware and software infrastructure},\n\taddress = {Zurich, Switzerland},\n\tauthor = {Hogue, A. and Robinson, M. and Jenkin, M. R. and Allison, R.S.},\n\tdate-modified = {2011-05-11 13:26:13 -0400},\n\tdoi = {10.1145/769953.769974},\n\tkeywords = {Augmented & Virtual Reality},\n\tpages = {179-87},\n\tpublisher = {Eurographics Assoc},\n\tseries = {IPT/EGVE 2003. Seventh Immersive Projection Technology Workshop. Ninth Eurographics Workshop on Virtual Environments},\n\ttitle = {A vision-based head tracking system for fully immersive displays},\n\turl-1 = {http://dx.doi.org/10.1145/769953.769974},\n\tyear = {2003},\n\turl-1 = {https://doi.org/10.1145/769953.769974}}\n\n
\n
\n\n\n
\n Six-sided fully immersive projective displays present complex and novel problems for tracking systems. Existing tracking technologies typically require tracking equipment that is placed in locations or attached to the user in a way that is suitable for typical displays of five or fewer walls, but which would interfere with the immersive experience within a fully enclosed display. This paper presents a vision-based tracking technology for fully-immersive projective displays. The technology relies on the operator wearing a set of laser diodes arranged in a specific configuration and then visually tracking the projection of these lasers on the external walls of the display outside of the user's view. This approach places minimal hardware on the user and no visible tracking equipment is placed within the immersive environment. This paper describes the basic visual tracking system including the hardware and software infrastructure.\n
\n\n\n
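One plausible formulation of the pose recovery this abstract describes (my own, not the authors' solver, and restricted to a yaw-only rotation for brevity): given known diode directions in the head frame and the observed 3-D spot positions on the walls, find the head pose whose laser rays pass through every spot.

```python
import numpy as np
from scipy.optimize import least_squares

# Assumed diode directions in the head frame (the real constellation differs).
d = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]], float)
d /= np.linalg.norm(d, axis=1, keepdims=True)

def rays_residual(params, spots):
    tx, ty, tz, yaw = params
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    t = np.array([tx, ty, tz])
    res = []
    for di, si in zip(d @ R.T, spots):
        res.extend(np.cross(di, si - t))   # zero iff the spot lies on the laser ray
    return res

# Synthesise spots from a ground-truth pose, then recover it.
true_t, true_yaw = np.array([0.2, -0.1, 1.5]), 0.3
c, s = np.cos(true_yaw), np.sin(true_yaw)
R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
spots = true_t + 2.0 * (d @ R.T)           # spots 2 m out along each rotated ray
fit = least_squares(rays_residual, x0=[0, 0, 0, 0], args=(spots,))
print(np.round(fit.x, 3))                  # approx [0.2, -0.1, 1.5, 0.3]
```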
\n\n\n\n\n\n
\n
\n\n\n\n\n
\n
\n\n
\n
\n  \n 2002\n \n \n (4)\n \n \n
\n
\n \n \n
\n
\n  \n article\n \n \n (3)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Erratum: Effects of horizontal and vertical additive disparity noise on stereoscopic corrugation detection (Vision Research (2001) 41 (3133-3143) PII: S0042698901001833).\n \n \n \n \n\n\n \n Palmisano, S., Allison, R., & Howard, I. P.\n\n\n \n\n\n\n Vision Research, 42(8): 1071. 2002.\n \n\n\n\n
\n
@article{allison20021071,\n\tauthor = {Palmisano, S. and Allison, R.S. and Howard, I. P.},\n\tdate-modified = {2012-07-02 19:29:45 -0400},\n\tdoi = {10.1016/S0042-6989(02)00046-9},\n\tjournal = {Vision Research},\n\tkeywords = {Stereopsis},\n\tnumber = {8},\n\tpages = {1071},\n\ttitle = {Erratum: Effects of horizontal and vertical additive disparity noise on stereoscopic corrugation detection (Vision Research (2001) 41 (3133-3143) PII: S0042698901001833)},\n\turl-1 = {http://dx.doi.org/10.1016/S0042-6989(02)00046-9},\n\tvolume = {42},\n\tyear = {2002},\n\turl-1 = {https://doi.org/10.1016/S0042-6989(02)00046-9}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Simulating self-motion I: cues for the perception of motion.\n \n \n \n \n\n\n \n Harris, L. R., Jenkin, M. R., Zikovitz, D., Redlick, F., Jaekl, P., Jasiobedzka, U. T., Jenkin, H. L., & Allison, R.\n\n\n \n\n\n\n Virtual Reality, 6(2): 75-85. 2002.\n \n\n\n\n
\n
@article{allison200275-85,\n\tabstract = {When people move there are many visual and non-visual cues that can inform them about their movement. Simulating self-motion in a virtual reality environment thus needs to take these non-visual cues into account in addition to the normal high-quality visual display. Here we examine the contribution of visual and non-visual cues to our perception of self-motion. The perceived distance of self-motion can be estimated from the visual flow field, physical forces or the act of moving. On its own, passive visual motion is a very effective cue to self-motion, and evokes a perception of self-motion that is related to the actual motion in a way that varies with acceleration. Passive physical motion turns out to be a particularly potent self-motion cue: not only does it evoke an exaggerated sensation of motion, but it also tends to dominate other cues},\n\tauthor = {Harris, L. R. and Jenkin, M. R. and Zikovitz, D. and Redlick, F. and Jaekl, P. and Jasiobedzka, U. T. and Jenkin, H. L. and Allison, R.S.},\n\tdate-modified = {2012-07-02 19:14:32 -0400},\n\tdoi = {10.1007/s100550200008},\n\tjournal = {Virtual Reality},\n\tkeywords = {Augmented & Virtual Reality},\n\tnumber = {2},\n\tpages = {75-85},\n\ttitle = {Simulating self-motion I: cues for the perception of motion},\n\turl-1 = {http://dx.doi.org/10.1007/s100550200008},\n\tvolume = {6},\n\tyear = {2002},\n\turl-1 = {https://doi.org/10.1007/s100550200008}}\n\n
\n
\n\n\n
\n When people move there are many visual and non-visual cues that can inform them about their movement. Simulating self-motion in a virtual reality environment thus needs to take these non-visual cues into account in addition to the normal high-quality visual display. Here we examine the contribution of visual and non-visual cues to our perception of self-motion. The perceived distance of self-motion can be estimated from the visual flow field, physical forces or the act of moving. On its own, passive visual motion is a very effective cue to self-motion, and evokes a perception of self-motion that is related to the actual motion in a way that varies with acceleration. Passive physical motion turns out to be a particularly potent self-motion cue: not only does it evoke an exaggerated sensation of motion, but it also tends to dominate other cues.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Simulating self-motion II. A virtual reality tricycle.\n \n \n \n \n\n\n \n Allison, R., Harris, L. R., Hogue, A. R., Jasiobedzka, U. T., Jenkin, H. L., Jenkin, M. R., Jaekl, P., Laurence, J. R., Pentile, G., Redlick, F., Zacher, J., & Zikovitz, D.\n\n\n \n\n\n\n Virtual Reality, 6(2): 86-95. 2002.\n \n\n\n\n
\n
@article{allison200286-95,\n\tabstract = {For pt.I see ibid., p.75-85 (2002). When simulating self-motion, virtual reality designers ignore non-visual cues at their peril. But providing non-visual cues presents significant challenges. One approach is to accompany visual displays with corresponding real physical motion to stimulate the non-visual, motion-detecting sensory systems in a natural way. However, allowing real movement requires real space. Technologies such as head mounted displays (HMDs) and CAVE&trade; can be used to provide large immersive visual displays within small physical spaces. It is difficult, however, to provide virtual environments that are as large physically as they are visually. A fundamental problem is that tracking technologies that work well in a small, enclosed environment do not function well over longer distances. Here we describe Trike-a `rideable' computer system that can be used to present large virtual spaces both visually and physically, and thus provide appropriately matched stimulation to both visual and non-visual sensory systems},\n\tauthor = {Allison, R.S. and Harris, L. R. and Hogue, A. R. and Jasiobedzka, U. T. and Jenkin, H. L. and Jenkin, M. R. and Jaekl, P. and Laurence, J. R. and Pentile, G. and Redlick, F. and Zacher, J. and Zikovitz, D.},\n\tdate-modified = {2012-07-02 19:14:08 -0400},\n\tdoi = {10.1007/s100550200009},\n\tjournal = {Virtual Reality},\n\tkeywords = {Augmented & Virtual Reality},\n\tnumber = {2},\n\tpages = {86-95},\n\ttitle = {Simulating self-motion II. A virtual reality tricycle},\n\turl-1 = {http://dx.doi.org/10.1007/s100550200009},\n\tvolume = {6},\n\tyear = {2002},\n\turl-1 = {https://doi.org/10.1007/s100550200009}}\n\n
\n
\n\n\n
\n For pt. I, see ibid., pp. 75-85 (2002). When simulating self-motion, virtual reality designers ignore non-visual cues at their peril. But providing non-visual cues presents significant challenges. One approach is to accompany visual displays with corresponding real physical motion to stimulate the non-visual, motion-detecting sensory systems in a natural way. However, allowing real movement requires real space. Technologies such as head-mounted displays (HMDs) and CAVE™ can be used to provide large immersive visual displays within small physical spaces. It is difficult, however, to provide virtual environments that are as large physically as they are visually. A fundamental problem is that tracking technologies that work well in a small, enclosed environment do not function well over longer distances. Here we describe Trike, a `rideable' computer system that can be used to present large virtual spaces both visually and physically, and thus provide appropriately matched stimulation to both visual and non-visual sensory systems.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n inbook\n \n \n (1)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Models of Disparity Detectors.\n \n \n \n \n\n\n \n Allison, R., & Howard, I.\n\n\n \n\n\n\n In Howard, I., editor(s), Seeing in Depth Volume I: Basic Mechanisms, pages 263-274. I. Porteous/Oxford, Toronto, Canada, 2002.\n \n\n\n\n
\n
@inbook{Allison:2002lc,\n\taddress = {Toronto, Canada},\n\tauthor = {Allison, R.S and Howard, IP.},\n\tbooktitle = {Seeing in Depth Volume {I}: {B}asic Mechanisms},\n\tdate-added = {2011-05-06 11:24:55 -0400},\n\tdate-modified = {2012-07-02 20:46:33 -0400},\n\tdoi = {10.1093/acprof:oso/9780195367607.001.0001},\n\teditor = {I. Howard},\n\tkeywords = {Depth perception},\n\tpages = {263-274},\n\tpublisher = {I. Porteous/Oxford},\n\ttitle = {Models of Disparity Detectors},\n\turl-1 = {http://dx.doi.org/10.1093/acprof:oso/9780195367607.001.0001},\n\tyear = {2002},\n\turl-1 = {https://doi.org/10.1093/acprof:oso/9780195367607.001.0001}}\n\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n incollection\n \n \n (4)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n MARVIN: a Mobile Automatic Realtime Visual and INertial tracking system.\n \n \n \n\n\n \n Hogue, A., Jenkin, M., Allison, R., Robinson, M., Laurence, J., & Zacher, J.\n\n\n \n\n\n\n In CRESTech Innovation Networking Conference. Toronto, Canada, 10 2002.\n \n\n\n\n
\n
@incollection{Hogue:2002xc,\n\taddress = {Toronto, Canada},\n\tauthor = {Hogue, A. and Jenkin, M. and Allison, R.S. and Robinson, M. and Laurence, J. and Zacher, J.},\n\tbooktitle = {CRESTech Innovation Networking Conference},\n\tdate-added = {2011-05-09 14:14:34 -0400},\n\tdate-modified = {2011-09-12 21:48:40 -0400},\n\tkeywords = {Augmented & Virtual Reality},\n\tmonth = {10},\n\ttitle = {MARVIN: a Mobile Automatic Realtime Visual and INertial tracking system},\n\tyear = {2002}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Judging perceptual stability during active rotation and translation in various orientations.\n \n \n \n \n\n\n \n Jaekl, P. M., Allison, R., Harris, L. R., Jenkin, H. L., Jenkin, M. R., Zacher, J. E., & Zikovitz, D. C.\n\n\n \n\n\n\n In Journal of Vision, volume 2, pages 508. 2002.\n \n\n\n\n
\n
@incollection{Jaekl:2002ia,\n\tabstract = {Translation and rotation are detected by different patterns of optic flow and by different divisions of the vestibular system. A given movement (eg. yaw rotation or up/down translation) involves different sensors depending on the orientation of the movement with respect to gravity. Here we assess the contribution of these different sense systems to the ``whole system'' response to self motion. Our subjects' task was to distinguish self produced from external visual motion during rotation around the yaw, pitch and roll axes and during translation in the x (naso-occipital), y (sideways) and z (up and down) directions. The axis or direction of motion was parallel or orthogonal to the direction of gravity.\n\nSubjects wore a helmet-mounted display whose position was monitorred by a mechanical head tracker with minimal lag. The visual display was modified in response to head movement. The ratio between head and image motion was varied randomly using the method of constant stimuli. Subjects indicated whether the display appeared earth-stationary or not.\n\nFor both rotation and translation there was a large range of ratios that was tolerated as perceptually stable. The ratio most likely to be accepted as stable corresponded to visual motion being faster than head motion. For rotation there were no consistent differences between yaw, pitch or roll axes and the orientation of the axis relative to gravity also had no effect. For translation motion in the x direction was on average matched with less visual motion than y or z motion. Although there was no consistent effect of whether motion was parallel or orthogonal to gravity, posture, relative to gravity, did have an effect. },\n\tauthor = {Jaekl, P. M. and Allison, R.S. and Harris, L. R. and Jenkin, H. L. and Jenkin, M. R. and Zacher, J. E. and Zikovitz, D. C.},\n\tbooktitle = {Journal of Vision},\n\tdate-modified = {2012-07-02 17:59:29 -0400},\n\tdoi = {10.1167/2.7.508},\n\tjournal = {Journal of Vision},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tnumber = {7},\n\tpages = {508},\n\ttitle = {Judging perceptual stability during active rotation and translation in various orientations},\n\turl-1 = {http://dx.doi.org/10.1167/2.7.508},\n\tvolume = {2},\n\tyear = {2002},\n\turl-1 = {https://doi.org/10.1167/2.7.508}}\n\n
\n
\n\n\n
\n Translation and rotation are detected by different patterns of optic flow and by different divisions of the vestibular system. A given movement (e.g., yaw rotation or up/down translation) involves different sensors depending on the orientation of the movement with respect to gravity. Here we assess the contribution of these different sense systems to the "whole system" response to self motion. Our subjects' task was to distinguish self-produced from external visual motion during rotation around the yaw, pitch and roll axes and during translation in the x (naso-occipital), y (sideways) and z (up and down) directions. The axis or direction of motion was parallel or orthogonal to the direction of gravity. Subjects wore a helmet-mounted display whose position was monitored by a mechanical head tracker with minimal lag. The visual display was modified in response to head movement. The ratio between head and image motion was varied randomly using the method of constant stimuli. Subjects indicated whether the display appeared earth-stationary or not. For both rotation and translation there was a large range of ratios that was tolerated as perceptually stable. The ratio most likely to be accepted as stable corresponded to visual motion being faster than head motion. For rotation, there were no consistent differences between yaw, pitch or roll axes, and the orientation of the axis relative to gravity also had no effect. For translation, motion in the x direction was on average matched with less visual motion than y or z motion. Although there was no consistent effect of whether motion was parallel or orthogonal to gravity, posture, relative to gravity, did have an effect.\n
\n\n\n
\n\n\n
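A sketch of how such constant-stimuli data might be analysed (the response rates below are invented): the proportion of "earth-stationary" judgements is peaked around the most-accepted image/head gain, so fitting a Gaussian bump locates that peak and the tolerated range.

```python
import numpy as np
from scipy.optimize import curve_fit

gains = np.array([0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8])        # image/head ratio
p_stable = np.array([0.05, 0.2, 0.55, 0.8, 0.9, 0.75, 0.4, 0.1])  # made-up rates

def bump(g, peak, width, height):
    return height * np.exp(-0.5 * ((g - peak) / width) ** 2)

(peak, width, height), _ = curve_fit(bump, gains, p_stable, p0=[1.2, 0.4, 0.9])
print(f"most-stable gain ~ {peak:.2f}, tolerated spread ~ {width:.2f}")
```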
\n \n\n \n \n \n \n \n \n Extracting self-created retinal motion.\n \n \n \n \n\n\n \n Harris, L. R., Allison, R., Jaekl, P. M., Jenkin, H. L., Jenkin, M. R., Zacher, J. E., & Zikovitz, D. C.\n\n\n \n\n\n\n In Journal of Vision, volume 2, pages 509. 2002.\n \n\n\n\n
\n
@incollection{Harris:2002zn,\n\tabstract = {INTRODUCTION. Self movement generates retinal movement that is perceptually distinct from other movement. There are two types of models for how this distinction might be achieved. In the first, after self motion is detected, an internal estimate of the expected retinal movement is subtracted (a linear process) from retinal image movement. Remaining movement is interpretted as indicating external movement. In the second model, subjects internally compare observed visual motion with their internal representation: a non-linear ratio judgement which depends on the magnitude of the expected movement. A discriminable difference indicates external movement. These models respectively predict linear and non-linear distributions of the probability of regarding a given retinal motion as perceptually stable. METHODS. Our subjects' task was to distinguish self-produced from external visual motion during rotation around the yaw, pitch and roll axes and during translation in the x, y and z directions. They wore a helmet-mounted display whose position was monitorred by a mechanical head tracker with minimal lag. The visual display was modified in response to head movement. The ratio between head and image motion was adjusted by the subject until the display appeared earth-stationary. RESULTS. The distribution of ratios judged to be perceptually stable were fitted with a normal and a log normal distribution. For the rotation data a better fit was found using the log normal distribution suggesting that the non-linear ratio model is a better description of the underlying neural computations involved. No clear difference was found for the translation data. },\n\tauthor = {Harris, L. R. and Allison, R.S. and Jaekl, P. M. and Jenkin, H. L. and Jenkin, M. R. and Zacher, J. E. and Zikovitz, D. C.},\n\tbooktitle = {Journal of Vision},\n\tdate-modified = {2012-07-02 17:59:00 -0400},\n\tdoi = {10.1167/2.7.509},\n\tjournal = {Journal of Vision},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tnumber = {7},\n\tpages = {509},\n\ttitle = {Extracting self-created retinal motion},\n\turl-1 = {http://dx.doi.org/10.1167/2.7.509},\n\tvolume = {2},\n\tyear = {2002},\n\turl-1 = {https://doi.org/10.1167/2.7.509}}\n\n
\n
\n\n\n
\n INTRODUCTION. Self movement generates retinal movement that is perceptually distinct from other movement. There are two types of models for how this distinction might be achieved. In the first, after self motion is detected, an internal estimate of the expected retinal movement is subtracted (a linear process) from retinal image movement. Remaining movement is interpreted as indicating external movement. In the second model, subjects internally compare observed visual motion with their internal representation: a non-linear ratio judgement which depends on the magnitude of the expected movement. A discriminable difference indicates external movement. These models respectively predict linear and non-linear distributions of the probability of regarding a given retinal motion as perceptually stable. METHODS. Our subjects' task was to distinguish self-produced from external visual motion during rotation around the yaw, pitch and roll axes and during translation in the x, y and z directions. They wore a helmet-mounted display whose position was monitored by a mechanical head tracker with minimal lag. The visual display was modified in response to head movement. The ratio between head and image motion was adjusted by the subject until the display appeared earth-stationary. RESULTS. The distribution of ratios judged to be perceptually stable was fitted with normal and log-normal distributions. For the rotation data, a better fit was found using the log-normal distribution, suggesting that the non-linear ratio model is a better description of the underlying neural computations involved. No clear difference was found for the translation data.\n
\n\n\n
\n\n\n
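The normal-versus-log-normal comparison the abstract describes can be sketched with synthetic data: fit both distributions to the gains judged stable and compare log-likelihoods, with a better log-normal fit favouring the non-linear ratio model. The data and code below are mine, for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
ratios = rng.lognormal(mean=np.log(1.2), sigma=0.25, size=200)  # synthetic data

mu, sd = stats.norm.fit(ratios)
shape, loc, scale = stats.lognorm.fit(ratios, floc=0)
ll_norm = stats.norm.logpdf(ratios, mu, sd).sum()
ll_logn = stats.lognorm.logpdf(ratios, shape, loc, scale).sum()
print(f"normal LL={ll_norm:.1f}  log-normal LL={ll_logn:.1f}")
# Both fits use two free parameters (loc fixed), so the higher likelihood wins.
```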
\n \n\n \n \n \n \n \n \n Induced effects in motion parallax.\n \n \n \n \n\n\n \n Allison, R., Rogers, B., & Bradshaw, M.\n\n\n \n\n\n\n In Journal of Vision, volume 2, pages 661-661. 2002.\n \n\n\n\n
\n
@incollection{allison2002661-661,\n\tabstract = {Ogle's induced-size effect refers to the percept of slant elicited by a difference in vertical size between the left and right half images of a stereoscopic display. The effect is not readily predicted by the geometry of the situation and has been of considerable interest in the stereoscopic literature. Rogers and Koenderink (Nature, 322: 62-63) demonstrated that modulation of the vertical size of a monocular image during lateral head motion produces the impression of a surface slanted in depth - a motion-parallax analogue of the induced-size effect. We investigated motion parallax analogues of the induced-size and induced-shear effects further and compared them with the corresponding stereoscopic versions. During lateral head motion or with binocular stereopsis, vertical-shear and vertical-size transformations produced 'induced effects' of apparent inclination and slant that are not predicted geometrically. With vertical head motion, horizontal-shear and horizontal-size transformations produced similar analogues of the disparity induced effects. Typically, the induced effects were opposite in direction and slightly smaller than the geometric effects. For both stereopsis and motion parallax, relative slant and inclination were more pronounced when the stimulus contained discontinuities in disparity/velocity gradient than for continuous disparity/flow fields. The results have important implications for the processing of disparity and optic flow fields. The support of the McDonnell-Pew Centre for Cognitive Neuroscience is greatly appreciated.},\n\tauthor = {Allison, R.S. and Rogers, B.J. and Bradshaw, M.F.},\n\tbooktitle = {Journal of Vision},\n\tdate-modified = {2012-07-02 17:39:17 -0400},\n\tdoi = {10.1167/2.7.661},\n\tjournal = {Journal of Vision},\n\tkeywords = {Stereopsis},\n\tnumber = {7},\n\tpages = {661-661},\n\ttitle = {Induced effects in motion parallax},\n\turl-1 = {http://dx.doi.org/10.1167/2.7.661},\n\tvolume = {2},\n\tyear = {2002},\n\turl-1 = {https://doi.org/10.1167/2.7.661}}\n
\n
\n\n\n
\n Ogle's induced-size effect refers to the percept of slant elicited by a difference in vertical size between the left and right half images of a stereoscopic display. The effect is not readily predicted by the geometry of the situation and has been of considerable interest in the stereoscopic literature. Rogers and Koenderink (Nature, 322: 62-63) demonstrated that modulation of the vertical size of a monocular image during lateral head motion produces the impression of a surface slanted in depth - a motion-parallax analogue of the induced-size effect. We investigated motion parallax analogues of the induced-size and induced-shear effects further and compared them with the corresponding stereoscopic versions. During lateral head motion or with binocular stereopsis, vertical-shear and vertical-size transformations produced 'induced effects' of apparent inclination and slant that are not predicted geometrically. With vertical head motion, horizontal-shear and horizontal-size transformations produced similar analogues of the disparity induced effects. Typically, the induced effects were opposite in direction and slightly smaller than the geometric effects. For both stereopsis and motion parallax, relative slant and inclination were more pronounced when the stimulus contained discontinuities in disparity/velocity gradient than for continuous disparity/flow fields. The results have important implications for the processing of disparity and optic flow fields. The support of the McDonnell-Pew Centre for Cognitive Neuroscience is greatly appreciated.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n inproceedings\n \n \n (3)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Egocentric direction and the visual guidance of robot locomotion: background, theory and implementation.\n \n \n \n \n\n\n \n Rushton, S. K., Wen, J., & Allison, R.\n\n\n \n\n\n\n In Bulthoff, H. H., Lee, S. W., Poggio, T. A., & Wallraven, C., editor(s), Biologically Motivated Computer Vision, Proceedings, volume 2525 of Lecture Notes in Computer Science, pages 576-591, Berlin, 2002. Springer-Verlag Berlin\n \n\n\n\n
\n
@inproceedings{allison2002576-591,\n\tabstract = {In this paper we describe the motivation, design and implementation of a system to visually guide a locomoting robot towards a target and around obstacles. The work was inspired by a recent suggestion that walking humans rely on perceived egocentric direction rather than optic flow to guide locomotion to a target. We briefly summarise the human experimental work and then illustrate how direction based heuristics can be used in the visual guidance of locomotion. We also identify perceptual variables that could be used in the detection of obstacles and a control law for the regulation of obstacle avoidance. We describe simulations that demonstrate the utility of the approach and the implementation of these control laws on a Nomad mobile robot. We conclude that our simple biologically inspired solution produces robust behaviour and proves a very promising approach.},\n\taddress = {Berlin},\n\tauthor = {Rushton, S. K. and Wen, J. and Allison, R.S.},\n\tbooktitle = {Biologically Motivated Computer Vision, Proceedings},\n\tdate-modified = {2011-05-11 13:11:27 -0400},\n\teditor = {Bulthoff, H. H. and Lee, S. W. and Poggio, T. A. and Wallraven, C.},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tpages = {576-591},\n\tpublisher = {Springer-Verlag Berlin},\n\tseries = {Lecture Notes in Computer Science},\n\ttitle = {Egocentric direction and the visual guidance of robot locomotion background, theory and implementation},\n\turl-1 = {http://portal.acm.org/citation.cfm?id=751732},\n\tvolume = {2525},\n\tyear = {2002}}\n\n
\n
\n\n\n
\n In this paper we describe the motivation, design and implementation of a system to visually guide a locomoting robot towards a target and around obstacles. The work was inspired by a recent suggestion that walking humans rely on perceived egocentric direction rather than optic flow to guide locomotion to a target. We briefly summarise the human experimental work and then illustrate how direction based heuristics can be used in the visual guidance of locomotion. We also identify perceptual variables that could be used in the detection of obstacles and a control law for the regulation of obstacle avoidance. We describe simulations that demonstrate the utility of the approach and the implementation of these control laws on a Nomad mobile robot. We conclude that our simple biologically inspired solution produces robust behaviour and proves a very promising approach.\n
\n\n\n
\n\n\n
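In the spirit of the direction-based heuristics the paper describes, a minimal steering law turns at a rate proportional to the target's perceived egocentric direction, nulling that angle without reference to optic flow. The gain, time step, and `steer` helper below are my assumptions, not the implemented Nomad controller.

```python
import math

def steer(robot_x, robot_y, heading, target_x, target_y, k=1.5):
    """Return a turn rate (rad/s) that rotates the target toward straight ahead."""
    bearing = math.atan2(target_y - robot_y, target_x - robot_x)
    ego_dir = math.atan2(math.sin(bearing - heading), math.cos(bearing - heading))
    return k * ego_dir         # proportional control on egocentric direction

# Tiny closed-loop check: the heading converges onto the target direction.
x = y = heading = 0.0
for _ in range(50):
    heading += steer(x, y, heading, 5.0, 5.0) * 0.1
print(round(heading, 3), round(math.atan2(5, 5), 3))   # both ~ 0.785
```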
\n \n\n \n \n \n \n \n \n IVY: The Immersive Virtual environment at York.\n \n \n \n \n\n\n \n Robinson, M., Laurence, J., Zacher, J., Hogue, A., Allison, R., Jenkin, M., Harris, L. R., & Stuerzlinger, W.\n\n\n \n\n\n\n In 7th Annual Immersive Projection Technology (IPT) Symposium, electronic proceedings, Orlando, FL USA, 2002. \n \n\n\n\n
\n
@inproceedings{Robinson:2002xa,\n\taddress = {Orlando, FL USA},\n\tauthor = {Robinson, M. and Laurence, J. and Zacher, J. and Hogue, A. and Allison, R.S. and Jenkin, M. and Harris, L. R. and Stuerzlinger, W.},\n\tbooktitle = {7th Annual Immersive Projection Technology (IPT) Symposium},\n\tdate-modified = {2011-05-11 13:23:58 -0400},\n\tkeywords = {Augmented & Virtual Reality},\n\tpages = {electronic proceedings},\n\ttitle = {IVY: The Immersive Virtual environment at York},\n\turl-1 = {https://percept.eecs.yorku.ca/papers/Robinson-IVY-Immersive_Visual_Environment_at_York.pdf},\n\tyear = {2002}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Perceptual stability during head movement in virtual reality.\n \n \n \n \n\n\n \n Jaekl, P. M., Allison, R., Harris, L. R., Jasiobedzka, U. T., Jenkin, H. L., Jenkin, M. R., Zacher, J. E., & Zikovitz, D. C.\n\n\n \n\n\n\n In Loftin, B., Chen, J. X., Rizzo, S., Goebel, M., & Hirose, M., editor(s), IEEE Virtual Reality 2002, Proceedings (Proceedings of the IEEE Virtual Reality Annual International Symposium), pages 149-155, Los Alamitos, 2002. IEEE Computer Society\n \n\n\n\n
\n
@inproceedings{allison2002149-155,\n\tabstract = {Virtual reality displays introduce spatial distortions that are very hard to correct because of the difficulty of precisely modelling the camera from the nodal point of each eye. How significant are these distortions for spatial perception in virtual reality? In this study we used a helmet mounted display and a mechanical head tracker to investigate the tolerance to errors between head motions and the resulting visual display. The relationship between the head movement and the associated updating of the visual display was adjusted by subjects until the image was judged as stable relative to the world. Both rotational and translational movements were tested and the relationship between the movements and the direction of gravity was varied systematically. Typically, for the display to be judged as stable, subjects needed the visual world to be moved in the opposite direction of the head movement by an amount greater than the head movement itself, during both rotational and translational head movements, although a large range of movement was tolerated and judged as appearing stable. These results suggest that it not necessary to model the visual geometry accurately and suggest circumstances when tracker drift can be corrected by jumps in the display which will pass unnoticed by the user.},\n\taddress = {Los Alamitos},\n\tauthor = {Jaekl, P. M. and Allison, R.S. and Harris, L. R. and Jasiobedzka, U. T. and Jenkin, H. L. and Jenkin, M. R. and Zacher, J. E. and Zikovitz, D. C.},\n\tbooktitle = {Ieee Virtual Reality 2002, Proceedings},\n\tdate-modified = {2011-05-11 13:23:59 -0400},\n\tdoi = {10.1109/VR.2002.996517},\n\teditor = {Loftin, B. and Chen, J. X. and Rizzo, S. and Goebel, M. and Hirose, M.},\n\tkeywords = {Augmented & Virtual Reality},\n\tpages = {149-155},\n\tpublisher = {Ieee Computer Soc},\n\tseries = {Proceedings of the Ieee Virtual Reality Annual International Symposium},\n\ttitle = {Perceptual stability during head movement in virtual reality},\n\turl-1 = {http://dx.doi.org/10.1109/VR.2002.996517},\n\tyear = {2002},\n\turl-1 = {https://doi.org/10.1109/VR.2002.996517}}\n\n
\n
\n\n\n
\n Virtual reality displays introduce spatial distortions that are very hard to correct because of the difficulty of precisely modelling the camera from the nodal point of each eye. How significant are these distortions for spatial perception in virtual reality? In this study we used a helmet mounted display and a mechanical head tracker to investigate the tolerance to errors between head motions and the resulting visual display. The relationship between the head movement and the associated updating of the visual display was adjusted by subjects until the image was judged as stable relative to the world. Both rotational and translational movements were tested and the relationship between the movements and the direction of gravity was varied systematically. Typically, for the display to be judged as stable, subjects needed the visual world to be moved in the opposite direction of the head movement by an amount greater than the head movement itself, during both rotational and translational head movements, although a large range of movement was tolerated and judged as appearing stable. These results suggest that it is not necessary to model the visual geometry accurately and suggest circumstances when tracker drift can be corrected by jumps in the display which will pass unnoticed by the user.\n
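To make the gain manipulation described above concrete, here is a minimal Python sketch, assuming a simple yaw-only case (the function names and the example gain of 1.2 are illustrative assumptions, not the authors' code):

import numpy as np

def scene_rotation(head_yaw_deg, visual_gain):
    # Counter-rotate the rendered world against the head. With visual_gain
    # equal to 1.0 the scene is geometrically earth-stable; the study found
    # subjective stability typically required gains somewhat above 1.0.
    return -visual_gain * head_yaw_deg

def world_relative_slip(head_yaw_deg, visual_gain):
    # Net scene motion in world coordinates: zero at gain 1.0, opposite the
    # head for gains above 1.0 (the gain used below is hypothetical).
    return (1.0 - visual_gain) * head_yaw_deg

head = np.array([0.0, 5.0, 10.0])       # head yaw samples in degrees
print(scene_rotation(head, 1.2))        # display command, deg
print(world_relative_slip(head, 1.2))   # scene slip in world coordinates, deg

The point of the sketch is that a gain of exactly 1.0 corresponds to perfect geometric stability, so the reported preference for gains above 1.0 implies a tolerance for, and even a bias toward, over-compensation.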
\n\n\n
\n\n\n\n\n\n
\n
\n\n\n\n\n
\n
\n\n
\n
\n  \n 2001\n \n \n (4)\n \n \n
\n
\n \n \n
\n
\n  \n article\n \n \n (1)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Effects of horizontal and vertical additive disparity noise on stereoscopic corrugation detection.\n \n \n \n \n\n\n \n Palmisano, S., Allison, R., & Howard, I. P.\n\n\n \n\n\n\n Vision Research, 41(24): 3133-43. 2001.\n \n\n\n\n
\n
@article{allison20013133-43,\n\tabstract = {Stereoscopic corrugation detection in the presence of horizontal- and vertical- additive disparity noise was examined using a signal detection paradigm. Random-dot stereograms either represented a 3-D square-wave surface with various amounts of Gaussian-distributed additive disparity noise or had the same disparity values randomly redistributed. Stereoscopic detection of 2 arcmin peak amplitude corrugations was found to tolerate significantly greater amplitudes of vertical-disparity noise than horizontal-disparity noise irrespective of whether the corrugations were horizontally or vertically oriented. However, this directional difference in tolerance to disparity noise was found to reverse when the corrugation and noise amplitudes were increased (so as to produce equivalent signal-to-noise ratios). These results suggest that horizontal- and vertical-disparity noise pose different problems for dot-matching and post-matching surface reconstruction as corrugation and noise amplitudes increase},\n\tauthor = {Palmisano, S. and Allison, R.S. and Howard, I. P.},\n\tdate-modified = {2012-07-02 19:16:14 -0400},\n\tdoi = {10.1016/S0042-6989(01)00183-3},\n\tjournal = {Vision Research},\n\tkeywords = {Stereopsis},\n\tnumber = {24},\n\tpages = {3133-43},\n\ttitle = {Effects of horizontal and vertical additive disparity noise on stereoscopic corrugation detection},\n\turl-1 = {http://dx.doi.org/10.1016/S0042-6989(01)00183-3},\n\tvolume = {41},\n\tyear = {2001},\n\turl-1 = {https://doi.org/10.1016/S0042-6989(01)00183-3}}\n\n
\n
\n\n\n
\n Stereoscopic corrugation detection in the presence of horizontal- and vertical- additive disparity noise was examined using a signal detection paradigm. Random-dot stereograms either represented a 3-D square-wave surface with various amounts of Gaussian-distributed additive disparity noise or had the same disparity values randomly redistributed. Stereoscopic detection of 2 arcmin peak amplitude corrugations was found to tolerate significantly greater amplitudes of vertical-disparity noise than horizontal-disparity noise irrespective of whether the corrugations were horizontally or vertically oriented. However, this directional difference in tolerance to disparity noise was found to reverse when the corrugation and noise amplitudes were increased (so as to produce equivalent signal-to-noise ratios). These results suggest that horizontal- and vertical-disparity noise pose different problems for dot-matching and post-matching surface reconstruction as corrugation and noise amplitudes increase\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n incollection\n \n \n (7)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n A Stereo-Vision based system for localization of the VR Trike.\n \n \n \n\n\n \n Hogue, A., Jenkin, M., & Allison, R.\n\n\n \n\n\n\n In CITO Knowledge Network Conference: Beyond the Edge: Road Mapping Innovation. Nepean, Canada, 10 2001.\n \n\n\n\n
\n
@incollection{Hogue:2001ce,\n\taddress = {Nepean, Canada},\n\tauthor = {Hogue, A. and Jenkin, M. and Allison, R.},\n\tbooktitle = {CITO Knowledge Network Conference: Beyond the Edge: Road Mapping Innovation},\n\tdate-added = {2011-05-09 14:22:17 -0400},\n\tdate-modified = {2011-09-12 22:18:19 -0400},\n\tkeywords = {Augmented & Virtual Reality},\n\tmonth = {10},\n\ttitle = {A Stereo-Vision based system for localization of the VR Trike},\n\tyear = {2001}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Perceptual Stability in Virtual Environments II: Stability during head translation.\n \n \n \n\n\n \n Zikovitz, D., Jenkin, M., Harris, L., Allison, R., Jaekl, P., Jasiobedzka, U., & Zacher, J.\n\n\n \n\n\n\n In Proceedings of the IRIS-PRECARN 11th Annual Conference. 2001.\n \n\n\n\n
\n
@incollection{Zikovitz:2001vo,\n\tauthor = {Zikovitz, D.C. and Jenkin, M. and Harris, L.R. and Allison, R.S. and Jaekl, P. and Jasiobedzka, U. and Zacher, J.E.},\n\tbooktitle = {Proceedings of the IRIS-PRECARN 11th Annual Conference},\n\tdate-added = {2011-05-09 11:28:26 -0400},\n\tdate-modified = {2012-10-12 15:11:14 +0000},\n\tkeywords = {Augmented & Virtual Reality},\n\ttitle = {Perceptual Stability in Virtual Environments II: Stability during head translation},\n\tyear = {2001}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Are there temporal limits for post-matching stereoscopic processing?.\n \n \n \n\n\n \n Palmisano, S., Allison, R., & Howard, I.\n\n\n \n\n\n\n In International Conference on Levels of Perception. Toronto, Canada, 2001.\n \n\n\n\n
\n
@incollection{Palmisano:2001fm,\n\taddress = {Toronto, Canada},\n\tauthor = {Palmisano, S.A. and Allison, R.S. and Howard, I.P.},\n\tbooktitle = {International Conference on Levels of Perception},\n\tdate-added = {2011-05-09 11:27:11 -0400},\n\tdate-modified = {2011-05-22 13:47:58 -0400},\n\tkeywords = {Stereopsis},\n\ttitle = {Are there temporal limits for post-matching stereoscopic processing?},\n\tyear = {2001}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Perceptual Stability in Virtual Environments III: Psychophysics in Microgravity.\n \n \n \n\n\n \n Jasiobedzka, U., Jenkin, M., Harris, L., Allison, R., Jaekl, P., Zacher, J., Jenkin, H., & Zikovitz, D.\n\n\n \n\n\n\n In Proceedings of the IRIS-PRECARN 11th Annual Conference. 2001.\n \n\n\n\n
\n
@incollection{Jasiobedzka:2001nz,\n\tauthor = {Jasiobedzka, U. and Jenkin, M. and Harris, L.R. and Allison, R.S. and Jaekl, P. and Zacher, J.E. and Jenkin, H. and Zikovitz, D.C.},\n\tbooktitle = {Proceedings of the IRIS-PRECARN 11th Annual Conference},\n\tdate-added = {2011-05-09 11:26:10 -0400},\n\tdate-modified = {2011-11-06 16:56:58 -0500},\n\tkeywords = {Augmented & Virtual Reality},\n\ttitle = {Perceptual Stability in Virtual Environments III: Psychophysics in Microgravity},\n\tyear = {2001}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Perceptual Stability in Virtual Environments I: Stability during rotation.\n \n \n \n\n\n \n Jaekl, P., Allison, R., Harris, L., Jasiobedzka, U., Jenkin, H., Jenkin, M., Zacher, J., & Zikovitz, D.\n\n\n \n\n\n\n In Proceedings of the IRIS-PRECARN 11th Annual Conference. 2001.\n \n\n\n\n
\n
@incollection{Jaekl:2001jf,\n\tauthor = {Jaekl, P.M. and Allison, R.S. and Harris, L.R. and Jasiobedzka, U.T. and Jenkin, H.L. and Jenkin, M.R. and Zacher, J.E. and Zikovitz, D.C.},\n\tbooktitle = {Proceedings of the IRIS-PRECARN 11th Annual Conference},\n\tdate-added = {2011-05-09 11:24:45 -0400},\n\tdate-modified = {2011-05-18 16:09:32 -0400},\n\tkeywords = {Augmented & Virtual Reality},\n\ttitle = {Perceptual Stability in Virtual Environments I: Stability during rotation},\n\tyear = {2001}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Untethered, Wireless Pose Tracking for Virtual Reality.\n \n \n \n\n\n \n Hogue, A, Robinson, M, Jenkin, M, & Allison, R.\n\n\n \n\n\n\n In Proceedings of the IRIS-PRECARN 11th Annual Conference. 2001.\n \n\n\n\n
\n
@incollection{Hogue:2001ea,\n\tauthor = {Hogue, A and Robinson, M and Jenkin, M and Allison, R.},\n\tbooktitle = {Proceedings of the IRIS-PRECARN 11th Annual Conference},\n\tdate-added = {2011-05-09 11:23:51 -0400},\n\tdate-modified = {2011-05-18 16:25:06 -0400},\n\tkeywords = {Augmented & Virtual Reality},\n\ttitle = {Untethered, Wireless Pose Tracking for Virtual Reality},\n\tyear = {2001}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Effects of Decorrelation on Stereoscopic Surface Perception with Static and Dynamic Random-dot Stereograms.\n \n \n \n\n\n \n Palmisano, S., Allison, R., & Howard, I.\n\n\n \n\n\n\n In Australian Journal of Psychology. 2001.\n \n\n\n\n
\n
@incollection{Palmisano:2001we,\n\tauthor = {Palmisano, S.A. and Allison, R.S. and Howard, I.P.},\n\tbooktitle = {Australian Journal of Psychology},\n\tdate-added = {2011-05-06 17:02:20 -0400},\n\tdate-modified = {2011-05-18 15:49:46 -0400},\n\tkeywords = {Stereopsis},\n\ttitle = {Effects of Decorrelation on Stereoscopic Surface Perception with Static and Dynamic Random-dot Stereograms},\n\tyear = {2001}}\n\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n inproceedings\n \n \n (2)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Growing IVY: Building the Immersive Virtual environment at York.\n \n \n \n \n\n\n \n Robinson, M., Laurence, J., Zacher, J., Hogue, A., Allison, R., Jenkin, M., Harris, L. R., & Stuerzlinger, W.\n\n\n \n\n\n\n In ICAT'2002 - 11th International Conference on Artificial Reality and Telexistence, Tokyo, Dec 5th-7th 2001. \n \n\n\n\n
\n
@inproceedings{Robinson:2001or,\n\tabstract = {When we move about within our environment, we are presented with a range of cues to our motion. Virtual reality systems attempt to simulate these various sensory cues. IVY -- the Immersive Visual environment at York -- is a virtual environment being constructed at York University to investigate aspects of human perception and to examine the relative importance of various visual and\nnon-visual cues to the generation of an effective virtual environment. This paper describes the motivation behind the design of IVY, and describes the design of the essential hardware and software components.},\n\taddress = {Tokyo},\n\tauthor = {Robinson, M. and Laurence, J. and Zacher, J. and Hogue, A. and Allison, R.S. and Jenkin, M. and Harris, L. R. and Stuerzlinger, W.},\n\tbooktitle = {ICAT'2002 - 11th International Conference on Artificial Reality and Telexistence},\n\tdate-added = {2011-05-06 14:15:19 -0400},\n\tdate-modified = {2011-05-18 15:58:19 -0400},\n\tkeywords = {Augmented & Virtual Reality},\n\tmonth = {Dec 5th-7th},\n\ttitle = {Growing IVY: Building the Immersive Virtual environment at York},\n\turl-1 = {https://percept.eecs.yorku.ca/papers/Robinson-Growing_IVY.pdf},\n\tyear = {2001}}\n\n
\n
\n\n\n
\n When we move about within our environment, we are presented with a range of cues to our motion. Virtual reality systems attempt to simulate these various sensory cues. IVY – the Immersive Visual environment at York – is a virtual environment being constructed at York University to investigate aspects of human perception and to examine the relative importance of various visual and non-visual cues to the generation of an effective virtual environment. This paper describes the motivation behind the design of IVY, and describes the design of the essential hardware and software components.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Tolerance of temporal delay in virtual environments.\n \n \n \n \n\n\n \n Allison, R., Harris, L. R., Jenkin, M., Jasiobedzka, U., & Zacher, J. E.\n\n\n \n\n\n\n In Takemura, H., & Kiyokawa, K., editor(s), Ieee Virtual Reality 2001, Proceedings, of Proceedings of the Ieee Virtual Reality Annual International Symposium, pages 247-254, Yokohama, Japan, 2001. Ieee Computer Soc,Los Alamitos\n \n\n\n\n
\n
@inproceedings{allison2001247-254,\n\tabstract = {To enhance presence, facilitate sensory motor performance, and avoid disorientation or nausea, virtual-reality applications require the percept of a stable environment. End-end tracking latency (display lag) degrades this illusion of stability and has been identified as a major fault of existing virtual-environment systems. Oscillopsia refers to the perception that the visual world appears to swim about or oscillate in space and is a manifestation of this loss of perceptual stability of the environment. In this paper the effects of end-end latency and head velocity on perceptual stability in a virtual environment were investigated psychophysically. Subjects became significantly more likely to report oscillopsia during head movements when end-end latency or head velocity were increased. It is concluded that perceptual instability of the world arises with increased head motion and increased display lag. Oscillopsia is expected to be more apparent in tasks requiring real locomotion or rapid head movement.},\n\taddress = {Yokohama, Japan},\n\tauthor = {Allison, R.S. and Harris, L. R. and Jenkin, M. and Jasiobedzka, U. and Zacher, J. E.},\n\tbooktitle = {Ieee Virtual Reality 2001, Proceedings},\n\tdate-modified = {2011-05-11 13:24:02 -0400},\n\tdoi = {10.1109/VR.2001.913793},\n\teditor = {Takemura, H. and Kiyokawa, K.},\n\tkeywords = {Augmented & Virtual Reality},\n\tpages = {247-254},\n\tpublisher = {Ieee Computer Soc,Los Alamitos},\n\tseries = {Proceedings of the Ieee Virtual Reality Annual International Symposium},\n\ttitle = {Tolerance of temporal delay in virtual environments},\n\turl-1 = {http://dx.doi.org/10.1109/VR.2001.913793},\n\tyear = {2001},\n\turl-1 = {https://doi.org/10.1109/VR.2001.913793}}\n\n
\n
\n\n\n
\n To enhance presence, facilitate sensory motor performance, and avoid disorientation or nausea, virtual-reality applications require the percept of a stable environment. End-end tracking latency (display lag) degrades this illusion of stability and has been identified as a major fault of existing virtual-environment systems. Oscillopsia refers to the perception that the visual world appears to swim about or oscillate in space and is a manifestation of this loss of perceptual stability of the environment. In this paper the effects of end-end latency and head velocity on perceptual stability in a virtual environment were investigated psychophysically. Subjects became significantly more likely to report oscillopsia during head movements when end-end latency or head velocity were increased. It is concluded that perceptual instability of the world arises with increased head motion and increased display lag. Oscillopsia is expected to be more apparent in tasks requiring real locomotion or rapid head movement.\n
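The joint effect of latency and head velocity reported above follows from simple geometry: during a head turn, the rendered scene lags the head by roughly the head's angular velocity multiplied by the end-to-end latency. A back-of-envelope Python sketch of this relation (my illustration, not the paper's model; all values below are hypothetical):

def scene_slip_deg(head_velocity_deg_s, latency_s):
    # Approximate angular error of the displayed scene during a
    # constant-velocity head rotation (hypothetical first-order model).
    return head_velocity_deg_s * latency_s

for v in (25.0, 50.0, 100.0):        # head velocity in deg/s
    for lag in (0.05, 0.10, 0.20):   # end-to-end latency in seconds
        print(f"{v:5.0f} deg/s, {lag * 1000:3.0f} ms lag -> "
              f"{scene_slip_deg(v, lag):4.1f} deg of scene slip")

Doubling either the latency or the head velocity doubles the slip, which is consistent with the finding that oscillopsia reports rose with both factors.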
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n misc\n \n \n (1)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n Optical Alignment and Vergence Issues in Helmet-Mounted Displays.\n \n \n \n\n\n \n Allison, R.\n\n\n \n\n\n\n 11 2001.\n \n\n\n\n
\n
@misc{Allison:2001zi,\n\taddress = {Toronto, Canada},\n\tauthor = {Allison, R.S.},\n\tbooktitle = {CRESTECH/GEOIDE Enhanced/Synthetic Vision Systems Workshop},\n\tdate-added = {2011-05-09 14:19:39 -0400},\n\tdate-modified = {2012-02-20 21:10:37 -0500},\n\tkeywords = {Vergence},\n\tmonth = {11},\n\ttitle = {Optical Alignment and Vergence Issues in Helmet-Mounted Displays},\n\tyear = {2001}}\n\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n\n\n\n
\n
\n\n
\n
\n  \n 2000\n \n \n (4)\n \n \n
\n
\n \n \n
\n
\n  \n article\n \n \n (4)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Effects of stimulus size and eccentricity on horizontal and vertical vergence.\n \n \n \n \n\n\n \n Howard, I. P., Fang, X. P., Allison, R., & Zacher, J. E.\n\n\n \n\n\n\n Experimental Brain Research, 130(2): 124-132. 2000.\n \n\n\n\n
\n
@article{allison2000124-132,\n\tabstract = {We measured the gain and phase of horizontal and vertical vergences of five subjects: as a function of stimulus area and position. Vergence eye movements were recorded by the scleral search coil method as subjects observed dichoptic displays oscillating in antiphase either from side to side or up and down with a peak-to-peak magnitude of 0.5 degrees at either 0.1 Hz or 1.0 Hz. The stimulus was a central textured disc with diameter ranging from 0.75 degrees to 65 degrees, or a peripheral annulus with outer diameter 65 degrees and inner diameter ranging from 5 degrees to 45 degrees. The remaining field was black. For horizontal vergence at both stimulus frequencies, gain and the phase lag were about the same for a 0.75 degrees stimulus as for a 65 degrees central stimulus. For vertical vergence, mean gain increased and mean phase lag decreased with increasing diameter of the central stimulus up to approximately 20 degrees. Thus, the stimulus integration area is much smaller for horizontal vergence than for vertical vergence. The integration area for vertical vergence is similar to that for cyclovergence, as revealed in a previous study. For both types of vergence, response gains were higher and phase lags smaller at 0.1 Hz than at 1.0 Hz. Also, gain decreased and phase lag increased with increasing occlusion of the central region of the stimulus. Vergence gain was significantly higher for a 45 degrees central disc than for a peripheral annulus with the same area. Thus, the central retina has more power to evoke horizontal or vertical vergence than the same area in the periphery. We compare the results with similar data for cyclovergence and discuss their ecological implications.},\n\tauthor = {Howard, I. P. and Fang, X. P. and Allison, R.S. and Zacher, J. E.},\n\tdate-modified = {2012-07-02 19:19:49 -0400},\n\tdoi = {10.1007/s002210050014},\n\tjournal = {Experimental Brain Research},\n\tkeywords = {Vergence},\n\tnumber = {2},\n\tpages = {124-132},\n\ttitle = {Effects of stimulus size and eccentricity on horizontal and vertical vergence},\n\turl-1 = {http://dx.doi.org/10.1007/s002210050014},\n\tvolume = {130},\n\tyear = {2000},\n\turl-1 = {https://doi.org/10.1007/s002210050014}}\n\n
\n
\n\n\n
\n We measured the gain and phase of horizontal and vertical vergences of five subjects as a function of stimulus area and position. Vergence eye movements were recorded by the scleral search coil method as subjects observed dichoptic displays oscillating in antiphase either from side to side or up and down with a peak-to-peak magnitude of 0.5 degrees at either 0.1 Hz or 1.0 Hz. The stimulus was a central textured disc with diameter ranging from 0.75 degrees to 65 degrees, or a peripheral annulus with outer diameter 65 degrees and inner diameter ranging from 5 degrees to 45 degrees. The remaining field was black. For horizontal vergence at both stimulus frequencies, gain and the phase lag were about the same for a 0.75 degrees stimulus as for a 65 degrees central stimulus. For vertical vergence, mean gain increased and mean phase lag decreased with increasing diameter of the central stimulus up to approximately 20 degrees. Thus, the stimulus integration area is much smaller for horizontal vergence than for vertical vergence. The integration area for vertical vergence is similar to that for cyclovergence, as revealed in a previous study. For both types of vergence, response gains were higher and phase lags smaller at 0.1 Hz than at 1.0 Hz. Also, gain decreased and phase lag increased with increasing occlusion of the central region of the stimulus. Vergence gain was significantly higher for a 45 degrees central disc than for a peripheral annulus with the same area. Thus, the central retina has more power to evoke horizontal or vertical vergence than the same area in the periphery. We compare the results with similar data for cyclovergence and discuss their ecological implications.\n
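Gain and phase here are the amplitude ratio and temporal offset of the vergence response relative to the 0.1 Hz or 1.0 Hz stimulus. One standard way to estimate them is to project both signals onto the drive frequency; a minimal Python sketch (not the authors' analysis code; the sampling rate and the synthetic gain and lag are assumptions for illustration):

import numpy as np

def gain_and_phase(stimulus, response, freq_hz, fs_hz):
    # Complex demodulation at the stimulus frequency: the ratio of the
    # response and stimulus Fourier components gives gain and phase lag.
    t = np.arange(len(stimulus)) / fs_hz
    basis = np.exp(-2j * np.pi * freq_hz * t)
    s = np.mean(stimulus * basis)
    r = np.mean(response * basis)
    gain = np.abs(r) / np.abs(s)
    phase_lag_deg = -np.degrees(np.angle(r / s))  # positive = response lags
    return gain, phase_lag_deg

# Synthetic check: a 0.5 deg peak-to-peak demand at 0.1 Hz and a response
# attenuated to gain 0.8 with a 20 deg phase lag (hypothetical values).
fs, f = 200.0, 0.1
t = np.arange(0, 60, 1 / fs)                 # 60 s = 6 full cycles
stim = 0.25 * np.sin(2 * np.pi * f * t)
resp = 0.8 * 0.25 * np.sin(2 * np.pi * f * t - np.radians(20))
print(gain_and_phase(stim, resp, f, fs))     # approximately (0.8, 20.0)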
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Depth selectivity of vertical fusional mechanisms.\n \n \n \n \n\n\n \n Allison, R., Howard, I. P., & Fang, X.\n\n\n \n\n\n\n Vision Research, 40(21): 2985-98. 2000.\n \n\n\n\n
\n
@article{allison20002985-98,\n\tabstract = {We measured the ability to fuse dichoptic images of a horizontal line alone or in the presence of a textured background with different vertical disparity. Nonius-line measurements of vertical vergence were also obtained. Diplopia thresholds and vertical vergence gains were much higher in response to an isolated vertically disparate line than to one with a zero vertical-disparity background. The effect of the background was maximum when it was coplanar with the target and decreased with increasing relative horizontal disparity. We conclude that vertical disparities are integrated over a restricted range of horizontal disparities to drive vertical vergence},\n\tauthor = {Allison, R.S. and Howard, I. P. and Fang, X.},\n\tdate-modified = {2011-05-11 13:24:56 -0400},\n\tdoi = {10.1016/S0042-6989(00)00150-4},\n\tjournal = {Vision Research},\n\tkeywords = {Vergence},\n\tnumber = {21},\n\tpages = {2985-98},\n\ttitle = {Depth selectivity of vertical fusional mechanisms},\n\turl-1 = {http://dx.doi.org/10.1016/S0042-6989(00)00150-4},\n\tvolume = {40},\n\tyear = {2000},\n\turl-1 = {https://doi.org/10.1016/S0042-6989(00)00150-4}}\n\n
\n
\n\n\n
\n We measured the ability to fuse dichoptic images of a horizontal line alone or in the presence of a textured background with different vertical disparity. Nonius-line measurements of vertical vergence were also obtained. Diplopia thresholds and vertical vergence gains were much higher in response to an isolated vertically disparate line than to one with a zero vertical-disparity background. The effect of the background was maximum when it was coplanar with the target and decreased with increasing relative horizontal disparity. We conclude that vertical disparities are integrated over a restricted range of horizontal disparities to drive vertical vergence\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Temporal dependencies in resolving monocular and binocular cue conflict in slant perception.\n \n \n \n \n\n\n \n Allison, R., & Howard, I. P.\n\n\n \n\n\n\n Vision Research, 40(14): 1869-86. 2000.\n \n\n\n\n
\n
@article{allison20001869-86,\n\tabstract = {Observers viewed large dichoptic patterns undergoing smooth temporal modulations or step changes in simulated slant or inclination under various conditions of disparity-perspective cue conflict and concordance. After presentation of each test surface, subjects adjusted a comparison surface to match the perceived slant or inclination of the test surface. Addition of conflicting perspective to disparity affected slant and inclination perception more for brief than for long presentations. Perspective had more influence for smooth temporal changes than for step changes in slant or inclination and for surfaces presented in isolation rather than with a zero disparity frame. These results indicate that conflicting perspective information plays a dominant role in determining the temporal properties of perceived slant and inclination},\n\tauthor = {Allison, R.S. and Howard, I. P.},\n\tdate-modified = {2011-05-11 13:19:35 -0400},\n\tdoi = {10.1016/S0042-6989(00)00034-1},\n\tjournal = {Vision Research},\n\tkeywords = {Stereopsis},\n\tnumber = {14},\n\tpages = {1869-86},\n\ttitle = {Temporal dependencies in resolving monocular and binocular cue conflict in slant perception},\n\turl-1 = {http://dx.doi.org/10.1016/S0042-6989(00)00034-1},\n\tvolume = {40},\n\tyear = {2000},\n\turl-1 = {https://doi.org/10.1016/S0042-6989(00)00034-1}}\n\n
\n
\n\n\n
\n Observers viewed large dichoptic patterns undergoing smooth temporal modulations or step changes in simulated slant or inclination under various conditions of disparity-perspective cue conflict and concordance. After presentation of each test surface, subjects adjusted a comparison surface to match the perceived slant or inclination of the test surface. Addition of conflicting perspective to disparity affected slant and inclination perception more for brief than for long presentations. Perspective had more influence for smooth temporal changes than for step changes in slant or inclination and for surfaces presented in isolation rather than with a zero disparity frame. These results indicate that conflicting perspective information plays a dominant role in determining the temporal properties of perceived slant and inclination\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Stereopsis with persisting and dynamic textures.\n \n \n \n \n\n\n \n Allison, R., & Howard, I. P.\n\n\n \n\n\n\n Vision Research, 40(28): 3823-7. 2000.\n \n\n\n\n
\n
@article{allison20003823-7,\n\tabstract = {The authors measured the percept of changing depth from changing disparity in stereograms composed of random-dot textures that were either persistent or dynamically changed on every frame (a dynamic random-dot stereogram). Disparity was changed between frames to depict a surface undergoing smooth temporal changes in simulated slant. Matched depth was greater with dynamic random-dot stereograms than with persistent random-dot stereograms. These results confirm and extend earlier observations at depth threshold. The authors posit an explanation based on cue conflict between stereopsis and monocular depth cues},\n\tauthor = {Allison, R.S. and Howard, I. P.},\n\tdate-modified = {2012-07-02 19:18:19 -0400},\n\tdoi = {10.1016/S0042-6989(00)00223-6},\n\tjournal = {Vision Research},\n\tkeywords = {Stereopsis},\n\tnumber = {28},\n\tpages = {3823-7},\n\ttitle = {Stereopsis with persisting and dynamic textures},\n\turl-1 = {http://dx.doi.org/10.1016/S0042-6989(00)00223-6},\n\tvolume = {40},\n\tyear = {2000},\n\turl-1 = {https://doi.org/10.1016/S0042-6989(00)00223-6}}\n\n
\n
\n\n\n
\n The authors measured the percept of changing depth from changing disparity in stereograms composed of random-dot textures that were either persistent or dynamically changed on every frame (a dynamic random-dot stereogram). Disparity was changed between frames to depict a surface undergoing smooth temporal changes in simulated slant. Matched depth was greater with dynamic random-dot stereograms than with persistent random-dot stereograms. These results confirm and extend earlier observations at depth threshold. The authors posit an explanation based on cue conflict between stereopsis and monocular depth cues\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n incollection\n \n \n (2)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n What is the problem with binocular correspondence?.\n \n \n \n \n\n\n \n Palmisano, S., Allison, R., & Howard, I.\n\n\n \n\n\n\n In Australian Journal of Psychology. 2000.\n \n\n\n\n
\n
@incollection{Palmisano:2000km,\n\tauthor = {Palmisano, S.A. and Allison, R.S. and Howard, I.P.},\n\tbooktitle = {Australian Journal of Psychology},\n\tdate-added = {2011-05-06 17:03:14 -0400},\n\tdate-modified = {2011-05-18 16:31:46 -0400},\n\tkeywords = {Stereopsis},\n\ttitle = {What is the problem with binocular correspondence?},\n\turl-1 = {http://onlinelibrary.wiley.com/doi/10.1080/00049530008255108/pdf},\n\tyear = {2000}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Vertical fusional range increases with depth separation from competing stimuli.\n \n \n \n\n\n \n Allison, R., Howard, I. P., & Fang, X.\n\n\n \n\n\n\n In Investigative Ophthalmology and Visual Science, volume 41, pages 3766B864. 2000.\n \n\n\n\n
\n
@incollection{allison20003766B864,\n\tauthor = {Allison, R.S. and Howard, I. P. and Fang, X.},\n\tbooktitle = {Investigative Ophthalmology and Visual Science},\n\tdate-modified = {2011-09-12 21:55:52 -0400},\n\tjournal = {Investigative Ophthalmology and Visual Science},\n\tkeywords = {Vergence},\n\tnumber = {4},\n\tpages = {3766B864},\n\ttitle = {Vertical fusional range increases with depth separation from competing stimuli},\n\tvolume = {41},\n\tyear = {2000}}\n\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n inproceedings\n \n \n (2)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Effect of noise on stereoscopic surface perception in humans and ideal observers.\n \n \n \n \n\n\n \n Palmisano, S., Allison, R., & Howard, I.\n\n\n \n\n\n\n In Proceedings of the ICSC Symposia on Intelligent Systems and Applications, volume 1, pages 1006-1012, 2000. \n \n\n\n\n
\n
@inproceedings{Palmisano:2000wl,\n\tabstract = {Abstract: Stereoscopic surface detection of human and ideal observers was assessed using a signal detection paradigm. Signal displays were disparity defined sinusoidal or square wave corrugations in depth containing various amounts of additive disparity noise. Distracter displays were created by scrambling pure signal stimuli along the vertical dimension - destroying surface representation while leaving the depth range intact. Additive disparity noise was found to interfere with stereoscopic surface detection for both human and ideal observers. Efficiencies found for stereoscopic surface detection were similar to those found previously for detection of a single step edge in depth (a supposedly\neasier task).},\n\tauthor = {Palmisano, S. and Allison, R.S. and Howard, I.P.},\n\tbooktitle = {Proceedings of the ICSC Symposia on Intelligent Systems and Applications},\n\tdate-added = {2011-05-06 14:19:04 -0400},\n\tdate-modified = {2011-05-22 18:01:55 -0400},\n\tkeywords = {Stereopsis},\n\tpages = {1006-1012},\n\ttitle = {Effect of noise on stereoscopic surface perception in humans and ideal observers},\n\turl-1 = {https://percept.eecs.yorku.ca/papers/Palmisano-Effect_of_Disparity_Noise.pdf},\n\tvolume = {1},\n\tyear = {2000}}\n\n
\n
\n\n\n
\n Abstract: Stereoscopic surface detection of human and ideal observers was assessed using a signal detection paradigm. Signal displays were disparity defined sinusoidal or square wave corrugations in depth containing various amounts of additive disparity noise. Distracter displays were created by scrambling pure signal stimuli along the vertical dimension - destroying surface representation while leaving the depth range intact. Additive disparity noise was found to interfere with stereoscopic surface detection for both human and ideal observers. Efficiencies found for stereoscopic surface detection were similar to those found previously for detection of a single step edge in depth (a supposedly easier task).\n
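In such human-versus-ideal comparisons, efficiency is conventionally defined as the squared ratio of human to ideal sensitivity (d') on the same displays. A minimal Python sketch of that definition (the hit and false-alarm rates below are hypothetical, not values from the paper):

from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    # Sensitivity from a yes/no signal-detection experiment.
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

def efficiency(d_human, d_ideal):
    # Fraction of the available stimulus information the observer uses.
    return (d_human / d_ideal) ** 2

d_h = d_prime(0.80, 0.20)   # hypothetical human performance
d_i = d_prime(0.99, 0.01)   # hypothetical ideal observer, same displays
print(round(d_h, 2), round(d_i, 2), round(efficiency(d_h, d_i), 3))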
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n First steps with a rideable computer.\n \n \n \n \n\n\n \n Allison, R., Harris, L. R., Jenkin, M., Pintilie, G., Redlick, F., & Zikovitz, D. C.\n\n\n \n\n\n\n In Proceedings IEEE Virtual Reality 2000, of Proceedings IEEE Virtual Reality 2000 (Cat. No.00CB37048), pages 169-75, New Brunswick, NJ, USA, 2000. IEEE Comput. Soc\n \n\n\n\n
\n
@inproceedings{allison2000169-75,\n\tabstract = {Although technologies such as head-mounted displays and CAVEs can be used to provide large immersive visual displays within small physical spaces, it is difficult to provide virtual environments which are as large physically as they are visually. A fundamental problem is that tracking technologies which work well in a small enclosed environment do not function well over longer distances. In this paper, we describe Trike-a `rideable' computer system which can be used to generate and explore large virtual spaces both visually and physically. This paper describes the hardware and software components of the system and a set of experiments which have been performed to investigate how the different perceptual cues that can be provided with Trike interact within an immersive environment},\n\taddress = {New Brunswick, NJ, USA},\n\tauthor = {Allison, R.S. and Harris, L. R. and Jenkin, M. and Pintilie, G. and Redlick, F. and Zikovitz, D. C.},\n\tbooktitle = {Proceedings IEEE Virtual Reality 2000},\n\tdate-modified = {2011-05-11 13:24:02 -0400},\n\tdoi = {10.1109/VR.2000.840495},\n\tkeywords = {Augmented & Virtual Reality},\n\tpages = {169-75},\n\tpublisher = {IEEE Comput. Soc},\n\tseries = {Proceedings IEEE Virtual Reality 2000 (Cat. No.00CB37048)},\n\ttitle = {First steps with a rideable computer},\n\turl-1 = {http://dx.doi.org/10.1109/VR.2000.840495},\n\tyear = {2000},\n\turl-1 = {https://doi.org/10.1109/VR.2000.840495}}\n\n
\n
\n\n\n
\n Although technologies such as head-mounted displays and CAVEs can be used to provide large immersive visual displays within small physical spaces, it is difficult to provide virtual environments which are as large physically as they are visually. A fundamental problem is that tracking technologies which work well in a small enclosed environment do not function well over longer distances. In this paper, we describe Trike-a `rideable' computer system which can be used to generate and explore large virtual spaces both visually and physically. This paper describes the hardware and software components of the system and a set of experiments which have been performed to investigate how the different perceptual cues that can be provided with Trike interact within an immersive environment\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n misc\n \n \n (2)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n Vertical fusional range increases with depth separation from competing stimuli.\n \n \n \n\n\n \n Allison, R., Fang, X., & Howard, I. P.\n\n\n \n\n\n\n Poster presented at the annual conference for the Association for Research in Vision and Ophthalmology, Fort Lauderdale, Florida, 05 2000.\n \n\n\n\n
\n
@misc{Allison:2000fi,\n\tauthor = {Allison, R.S. and Fang, X. and Howard, I. P.},\n\tdate-added = {2011-05-09 16:38:33 -0400},\n\tdate-modified = {2011-05-18 16:29:14 -0400},\n\thowpublished = {Poster presented at the annual conference for the Association for Research in Vision and Ophthalmology, Fort Lauderdale, Florida},\n\tkeywords = {Vergence},\n\tmonth = {05},\n\ttitle = {Vertical fusional range increases with depth separation from competing stimuli},\n\tyear = {2000}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Stereopsis with the TRISH 2 robot head.\n \n \n \n\n\n \n Allison, R., & Jenkin, M.\n\n\n \n\n\n\n Proceedings of the IRIS-PRECARN 10th Annual Conference, 6-7, 2000.\n \n\n\n\n
\n
@misc{Allison:2000vh,\n\tauthor = {Allison, R.S. and Jenkin, M.},\n\tdate-added = {2011-05-09 11:29:23 -0400},\n\tdate-modified = {2012-05-25 21:08:01 +0000},\n\thowpublished = {Proceedings of the IRIS-PRECARN 10th Annual Conference, 6-7},\n\tkeywords = {Stereopsis},\n\ttitle = {Stereopsis with the TRISH 2 robot head},\n\tyear = {2000}}\n\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n\n\n\n
\n
\n\n
\n
\n  \n 1999\n \n \n (4)\n \n \n
\n
\n \n \n
\n
\n  \n article\n \n \n (1)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Effect of field size, head motion, and rotational velocity on roll vection and illusory self-tilt in a tumbling room.\n \n \n \n \n\n\n \n Allison, R., Howard, I. P., & Zacher, J. E.\n\n\n \n\n\n\n Perception, 28(3): 299-306. 1999.\n \n\n\n\n
\n
@article{allison1999299-306,\n\tabstract = {The effect of field size, velocity, and visual fixation upon the perception of self-body rotation and tilt was examined in a rotating furnished room. Subjects sat in a stationary chair in the furnished room which could be rotated about the body roll axis. For full-field conditions, complete 360 degrees body rotation (tumbling) was the most common sensation (felt by 80\\% of subjects). Constant tilt or partial tumbling (less than 360 degrees rotation) occurred more frequently with a small field of view (20 deg). The number of subjects who experienced complete tumbling increased with increases in field of view and room velocity (for velocities between 15 and 30 degrees s\\textsuperscript{-1}). The speed of perceived self-rotation relative to room rotation also increased with increasing field of view.},\n\tauthor = {Allison, R.S. and Howard, I. P. and Zacher, J. E.},\n\tdate-modified = {2012-07-02 19:22:01 -0400},\n\tdoi = {10.1068/p2891},\n\tjournal = {Perception},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tnumber = {3},\n\tpages = {299-306},\n\ttitle = {Effect of field size, head motion, and rotational velocity on roll vection and illusory self-tilt in a tumbling room},\n\turl-1 = {http://dx.doi.org/10.1068/p2891},\n\tvolume = {28},\n\tyear = {1999},\n\turl-1 = {https://doi.org/10.1068/p2891}}\n\n
\n
\n\n\n
\n The effect of field size, velocity, and visual fixation upon the perception of self-body rotation and tilt was examined in a rotating furnished room. Subjects sat in a stationary chair in the furnished room which could be rotated about the body roll axis. For full-field conditions, complete 360 degrees body rotation (tumbling) was the most common sensation (felt by 80% of subjects). Constant tilt or partial tumbling (less than 360 degrees rotation) occurred more frequently with a small field of view (20 deg). The number of subjects who experienced complete tumbling increased with increases in field of view and room velocity (for velocities between 15 and 30 degrees s⁻¹). The speed of perceived self-rotation relative to room rotation also increased with increasing field of view.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n incollection\n \n \n (5)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n When do we use optic flow and when do we use perceived direction to control locomotion?.\n \n \n \n \n\n\n \n Rogers, B., & Allison, R.\n\n\n \n\n\n\n In Perception, volume 28. 1999.\n \n\n\n\n
\n
@incollection{Rogers:1999dg,\n\tabstract = {When do we use optic flow and when do we use perceived direction to control locomotion?\n\nB J Rogers, R S Allison\n\nOptic-flow-field analyses have revealed that there are several sources of information to indicate the point of impact in a visual scene for a moving observer. Visual information alone, however, cannot indicate the heading direction, since heading is defined with respect to the observer. The importance of the distinction between the point of impact and the heading direction was brought out by Rushton et al [1998 Investigative Ophthalmology & Visual Science 39(4) S191] who showed that walking paths are determined primarily by the perceived direction of the target point. In contrast, many studies have shown that the point of impact can be detected with considerable precision by using a variety of different flow-field characteristics, so why doesn't optic flow play a more important role in controlling locomotion? Our results suggest that several factors are important. Locomotor paths are straighter when (i) there is local motion parallax between the intended target and objects at different distances; (ii) there is ground-plane texture and/or path markings; (iii) attention is directed towards the optic-flow cues. In addition, the extent to which we use flow-field information depends on the type of locomotion and the way in which heading direction is controlled by the observer. },\n\tauthor = {Rogers, B.J. and Allison, R.S.},\n\tbooktitle = {Perception},\n\tdate-added = {2011-05-06 17:10:07 -0400},\n\tdate-modified = {2011-05-11 13:07:58 -0400},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\ttitle = {When do we use optic flow and when do we use perceived direction to control locomotion?},\n\turl-1 = {http://www.perceptionweb.com/ecvp99/0589.html},\n\tvolume = {28},\n\tyear = {1999}}\n\n
\n
\n\n\n
\n When do we use optic flow and when do we use perceived direction to control locomotion? B J Rogers, R S Allison Optic-flow-field analyses have revealed that there are several sources of information to indicate the point of impact in a visual scene for a moving observer. Visual information alone, however, cannot indicate the heading direction, since heading is defined with respect to the observer. The importance of the distinction between the point of impact and the heading direction was brought out by Rushton et al [1998 Investigative Ophthalmology & Visual Science 39(4) S191] who showed that walking paths are determined primarily by the perceived direction of the target point. In contrast, many studies have shown that the point of impact can be detected with considerable precision by using a variety of different flow-field characteristics, so why doesn't optic flow play a more important role in controlling locomotion? Our results suggest that several factors are important. Locomotor paths are straighter when (i) there is local motion parallax between the intended target and objects at different distances; (ii) there is ground-plane texture and/or path markings; (iii) attention is directed towards the optic-flow cues. In addition, the extent to which we use flow-field information depends on the type of locomotion and the way in which heading direction is controlled by the observer. \n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Effects of decorrelation and disparity noise on stereoscopic surface.\n \n \n \n\n\n \n Palmisano, S., Allison, R., & Howard, I.\n\n\n \n\n\n\n In Investigative Ophthalmology and Visual Science, volume 40. 1999.\n \n\n\n\n
\n
@incollection{Palmisano:1999eh,\n\tauthor = {Palmisano, S.A. and Allison, R.S. and Howard, I.P.},\n\tbooktitle = {Investigative Ophthalmology and Visual Science},\n\tdate-added = {2011-05-06 17:08:55 -0400},\n\tdate-modified = {2011-05-18 15:49:38 -0400},\n\tkeywords = {Stereopsis},\n\ttitle = {Effects of decorrelation and disparity noise on stereoscopic surface},\n\tvolume = {40},\n\tyear = {1999}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Stereoscopic detection and segregation of noisy transparent surfaces.\n \n \n \n \n\n\n \n Palmisano, S., Allison, R., & Howard, I.\n\n\n \n\n\n\n In Perception, volume 28. 1999.\n \n\n\n\n
\n
@incollection{Palmisano:1999br,\n\tabstract = {Stereoscopic detection and segregation of noisy transparent surfaces\n\nS Palmisano, R S Allison, I P Howard\n\nRandom-dot stereograms depicting multiple transparent surfaces, lying at different depths, produce complex problems for the visual system. We investigated the perception of stereoscopic transparency with and without horizontal disparity noise. Stereoscopic displays depicted a surface with vertically oriented sinusoidal depth corrugations lying in front of, coplanar with, or behind a frontal plane surface. Gaussian-distributed disparity noise (standard deviations of 0, 2, 4, or 8 min of arc) was added to dots representing the sinusoid. In different conditions, subjects reported: (1) whether they saw the sinusoid or not (surface detection); (2) whether they saw both the plane and the sinusoid or not (surface segregation). While detection of the sinusoid was quite robust in the presence of substantial disparity noise (eg up to 2 - 4 min of arc), surface segregation degraded quickly. The depth order of the two transparent surfaces was important for surface segregation, which was achieved more readily when the plane was located in front of the sinusoid than when it was beyond or bisecting the sinusoid. The processes involved in segregating transparent surfaces would appear to be particularly susceptible to disparity noise--presumably owing to difficulties in distinguishing disparity discontinuities produced by transparency from those produced by noise.},\n\tauthor = {Palmisano, S.A. and Allison, R.S. and Howard, I.P.},\n\tbooktitle = {Perception},\n\tdate-added = {2011-05-06 17:06:55 -0400},\n\tdate-modified = {2011-05-18 16:17:10 -0400},\n\tkeywords = {Stereopsis},\n\ttitle = {Stereoscopic detection and segregation of noisy transparent surfaces},\n\turl-1 = {http://www.perceptionweb.com/ecvp99/0156.html},\n\tvolume = {28},\n\tyear = {1999}}\n\n
\n
\n\n\n
\n Stereoscopic detection and segregation of noisy transparent surfaces S Palmisano, R S Allison, I P Howard Random-dot stereograms depicting multiple transparent surfaces, lying at different depths, produce complex problems for the visual system. We investigated the perception of stereoscopic transparency with and without horizontal disparity noise. Stereoscopic displays depicted a surface with vertically oriented sinusoidal depth corrugations lying in front of, coplanar with, or behind a frontal plane surface. Gaussian-distributed disparity noise (standard deviations of 0, 2, 4, or 8 min of arc) was added to dots representing the sinusoid. In different conditions, subjects reported: (1) whether they saw the sinusoid or not (surface detection); (2) whether they saw both the plane and the sinusoid or not (surface segregation). While detection of the sinusoid was quite robust in the presence of substantial disparity noise (eg up to 2 - 4 min of arc), surface segregation degraded quickly. The depth order of the two transparent surfaces was important for surface segregation, which was achieved more readily when the plane was located in front of the sinusoid than when it was beyond or bisecting the sinusoid. The processes involved in segregating transparent surfaces would appear to be particularly susceptible to disparity noise–presumably owing to difficulties in distinguishing disparity discontinuities produced by transparency from those produced by noise.\n
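For concreteness, a minimal Python sketch of a display of the kind described above, a random-dot surface with sinusoidal depth corrugations plus additive Gaussian disparity noise (all names and parameter values are illustrative assumptions, not the authors' stimulus code):

import numpy as np

rng = np.random.default_rng(0)

def noisy_rds(n_dots=2000, amp_arcmin=4.0, freq_cpd=0.1,
              noise_sd_arcmin=2.0, field_deg=20.0):
    # Dot positions in degrees; disparity varies sinusoidally with azimuth
    # (vertically oriented corrugations) and carries added Gaussian noise.
    x = rng.uniform(-field_deg / 2, field_deg / 2, n_dots)
    y = rng.uniform(-field_deg / 2, field_deg / 2, n_dots)
    disparity_arcmin = amp_arcmin * np.sin(2 * np.pi * freq_cpd * x)
    disparity_arcmin += rng.normal(0.0, noise_sd_arcmin, n_dots)
    half_deg = disparity_arcmin / 60.0 / 2.0   # split disparity between eyes
    left = np.column_stack([x - half_deg, y])
    right = np.column_stack([x + half_deg, y])
    return left, right

left_dots, right_dots = noisy_rds()
print(left_dots.shape, right_dots.shape)     # (2000, 2) each

Raising noise_sd_arcmin toward the corrugation amplitude approaches the regime in which, per the abstract, detection of the sinusoid survives but segregation of the two surfaces breaks down.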
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Depth perception with persisting and dynamic textures.\n \n \n \n \n\n\n \n Allison, R., & Howard, I. P.\n\n\n \n\n\n\n In Investigative Ophthalmology and Visual Science. 1999.\n \n\n\n\n
\n
@incollection{Allison:1999nq,\n\tauthor = {Allison, R.S. and Howard, I. P.},\n\tbooktitle = {Investigative Ophthalmology and Visual Science},\n\tdate-added = {2011-05-06 17:05:54 -0400},\n\tdate-modified = {2011-05-22 13:48:40 -0400},\n\tdoi = {10.1016/S0042-6989(00)00223-6},\n\tkeywords = {Depth perception},\n\ttitle = {Depth perception with persisting and dynamic textures},\n\turl-1 = {http://dx.doi.org/10.1016/S0042-6989(00)00223-6},\n\tyear = {1999},\n\turl-1 = {https://doi.org/10.1016/S0042-6989(00)00223-6}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Depth perception with a stereoscopic robot head.\n \n \n \n \n\n\n \n Allison, R., & Jenkin, M.\n\n\n \n\n\n\n In Perception, volume 28, pages ECVP Abstract Supplement. 1999.\n \n\n\n\n
\n
@incollection{Allison:1999ce,\n\tabstract = {Depth perception with a stereoscopic robot head\n\nR Allison, M Jenkin\n\nTRISH-2 is a stereoscopic robot head that arose from the TRISH-1 platform. The robot consists of two computer-controlled CCD cameras acting as eyes. The cameras are mounted on motorised bases and have two extrinsic degrees of freedom. They can be independently panned (azimuth) under computer control. Torsion about the optic axis of each eye is achieved in software. The entire head can also be panned (azimuth) or tilted (elevation). Each camera provides additional optical degrees of freedom under computer control, with independent settings for focus, zoom, aperture, exposure, shutter speed, and video gain. Using TRISH-2, we investigated optimising the optical and rotational parameters for specific stereoscopic visual tasks. These techniques are often analogous to mechanisms proposed for biological vision. For example, Howard and Kaneko (1994 Vision Research 34 2505 - 2517) proposed a modified version of the deformation theory of inclination perception. Vertical shear disparity is averaged over the binocular field and used as the vertical disparity term in computing inclination from deformation disparity. To implement this theory, we use global cyclodisparity to set the torsional position of the eyes and then use deformation disparity to compute inclination. Torsional control of the stereo head improved efficiency of stereoscopic processing and enhanced performance for computing surface structure on inclined surfaces. Other analogies to biological stereoscopic mechanisms were considered as well as algorithms with no biological counterparts. },\n\tauthor = {Allison, R.S. and Jenkin, M.},\n\tbooktitle = {Perception},\n\tdate-added = {2011-05-06 17:03:58 -0400},\n\tdate-modified = {2011-09-12 21:51:56 -0400},\n\tkeywords = {Stereopsis},\n\tpages = {ECVP Abstract Supplement},\n\ttitle = {Depth perception with a stereoscopic robot head},\n\turl-1 = {http://www.perceptionweb.com/ecvp99/0264.html},\n\tvolume = {28},\n\tyear = {1999}}\n\n
\n
\n\n\n
\n Depth perception with a stereoscopic robot head R Allison, M Jenkin TRISH-2 is a stereoscopic robot head that arose from the TRISH-1 platform. The robot consists of two computer-controlled CCD cameras acting as eyes. The cameras are mounted on motorised bases and have two extrinsic degrees of freedom. They can be independently panned (azimuth) under computer control. Torsion about the optic axis of each eye is achieved in software. The entire head can also be panned (azimuth) or tilted (elevation). Each camera provides additional optical degrees of freedom under computer control, with independent settings for focus, zoom, aperture, exposure, shutter speed, and video gain. Using TRISH-2, we investigated optimising the optical and rotational parameters for specific stereoscopic visual tasks. These techniques are often analogous to mechanisms proposed for biological vision. For example, Howard and Kaneko (1994 Vision Research 34 2505 - 2517) proposed a modified version of the deformation theory of inclination perception. Vertical shear disparity is averaged over the binocular field and used as the vertical disparity term in computing inclination from deformation disparity. To implement this theory, we use global cyclodisparity to set the torsional position of the eyes and then use deformation disparity to compute inclination. Torsional control of the stereo head improved efficiency of stereoscopic processing and enhanced performance for computing surface structure on inclined surfaces. Other analogies to biological stereoscopic mechanisms were considered as well as algorithms with no biological counterparts. \n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n misc\n \n \n (5)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n Effects of decorrelation and disparity noise on stereoscopic surface.\n \n \n \n\n\n \n Palmisano, S., Allison, R., & Howard, I.\n\n\n \n\n\n\n Poster presented at the annual conference of the Association for Research in Vision and Ophthalmology, (ARVO), Fort Lauderdale, Florida, 05 1999.\n \n\n\n\n
\n
@misc{Palmisano:1999oh,\n\tauthor = {Palmisano, S.A. and Allison, R.S. and Howard, I.P.},\n\tdate-added = {2011-05-09 17:06:28 -0400},\n\tdate-modified = {2011-05-18 15:49:27 -0400},\n\thowpublished = {Poster presented at the annual conference of the Association for Research in Vision and Ophthalmology, (ARVO), Fort Lauderdale, Florida},\n\tkeywords = {Stereopsis},\n\tmonth = {05},\n\ttitle = {Effects of decorrelation and disparity noise on stereoscopic surface},\n\tyear = {1999}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Vertical fusional range increases with depth separation from competing stimuli.\n \n \n \n\n\n \n Fang, X., Allison, R., & Howard, I.\n\n\n \n\n\n\n Poster presented at the International Conference on Vision and Attention, York University, North York, Ontario, 06 1999.\n \n\n\n\n
\n
@misc{Fang:1999sy,\n\tauthor = {Fang, X. and Allison, R.S. and Howard, I.P.},\n\tdate-added = {2011-05-09 16:56:19 -0400},\n\tdate-modified = {2011-05-18 16:29:04 -0400},\n\thowpublished = {Poster presented at the International Conference on Vision and Attention, York University, North York, Ontario},\n\tkeywords = {Vergence},\n\tmonth = {06},\n\ttitle = {Vertical fusional range increases with depth separation from competing stimuli},\n\tyear = {1999}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Depth perception with persisting and dynamic textures.\n \n \n \n\n\n \n Allison, R., & Howard, I. P.\n\n\n \n\n\n\n Poster presented at the annual conference for the Association for Research in Vision and Ophthalmology, 05 1999.\n \n\n\n\n
\n
@misc{Allison:1999rw,\n\tauthor = {Allison, R.S. and Howard, I. P.},\n\tdate-added = {2011-05-09 16:41:56 -0400},\n\tdate-modified = {2011-05-22 18:04:22 -0400},\n\thowpublished = {Poster presented at the annual conference for the Association for Research in Vision and Ophthalmology},\n\tkeywords = {Depth perception},\n\tmonth = {05},\n\ttitle = {Depth perception with persisting and dynamic textures},\n\tyear = {1999}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Depth perception with a stereoscopic robot head.\n \n \n \n\n\n \n Allison, R., & Jenkin, M.\n\n\n \n\n\n\n Poster presented at the European Conference on Visual Perception, Trieste, Italy, 08 1999.\n \n\n\n\n
\n
@misc{Allison:1999ez,\n\tauthor = {Allison, R.S. and Jenkin, M.},\n\tdate-added = {2011-05-09 16:40:44 -0400},\n\tdate-modified = {2011-05-22 13:50:48 -0400},\n\thowpublished = {Poster presented at the European Conference on Visual Perception, Trieste, Italy},\n\tkeywords = {Stereopsis},\n\tmonth = {08},\n\ttitle = {Depth perception with a stereoscopic robot head},\n\tyear = {1999}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Stereo surface detection robust in presence of substantial disparity jitter.\n \n \n \n\n\n \n Palmisano, S., Howard, I., & Allison, R.\n\n\n \n\n\n\n International Conference on Vision and Attention, North York, Ontario, 1999.\n \n\n\n\n
\n
@misc{Palmisano:1999ye,\n\taddress = {North York, Ontario},\n\tauthor = {Palmisano, S.A. and Howard, I.P. and Allison, R.S.},\n\tbooktitle = {International Conference on Vision and Attention},\n\tdate-added = {2011-05-09 11:30:17 -0400},\n\tdate-modified = {2012-05-25 21:10:32 +0000},\n\thowpublished = {International Conference on Vision and Attention, North York, Ontario},\n\tkeywords = {Stereopsis},\n\ttitle = {Stereo surface detection robust in presence of substantial disparity jitter},\n\tyear = {1999}}\n\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n techreport\n \n \n (1)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n Effects of noisy binocular disparity in stereoscopic virtual-reality systems.\n \n \n \n\n\n \n Howard, I. P., Palmisano, S., Allison, R., & Fang, X.\n\n\n \n\n\n\n Technical Report 99:W7711-7-7393, PWGSC Report, 1999.\n \n\n\n\n
\n
@techreport{howardpwgsc,\n\tauthor = {Howard, I. P. and Palmisano, S.A. and Allison, R.S. and Fang, X.},\n\tdate-added = {2019-02-03 10:26:47 -0500},\n\tdate-modified = {2019-02-03 10:26:47 -0500},\n\tinstitution = {PWGSC Report},\n\tkeywords = {Stereopsis},\n\tnumber = {99:W7711-7-7393},\n\ttitle = {Effects of noisy binocular disparity in stereoscopic virtual-reality systems},\n\tyear = {1999}}\n\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n\n\n\n
\n
\n\n
\n
\n  \n 1998\n \n \n (3)\n \n \n
\n
\n \n \n
\n
\n  \n article\n \n \n (2)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Post-rotatory nystagmus and turning sensations after active and passive turning.\n \n \n \n \n\n\n \n Howard, I. P., Zacher, J. E., & Allison, R.\n\n\n \n\n\n\n Journal of Vestibular Research: Equilibrium and Orientation, 8(4): 299-312. 1998.\n \n\n\n\n
\n
@article{allison1998299-312,\n\tabstract = {The authors measured post-rotatory nystagmus and sensations of body rotation in standing subjects brought to rest in the dark after 3 minutes of each of the following conditions: (1) passive turning about the mid-body axis, involving only vestibular stimulation, (2) active turning about the mid-body axis, involving both vestibular stimulation and motor-proprioceptive activity in the legs, and (3) stepping round while remaining facing in the same direction on the center of a rotating platform with the head held in a stationary holder (apparent turning), involving only motor-proprioceptive activity. The same acceleration-velocity profile was used in all conditions. Post-rotatory nystagmus (slow phase) occurred in the same direction as passive body turning and was reduced in velocity after active body turning. After apparent turning, nystagmus was in the opposite direction to attempted body turning. The authors' theoretical analysis suggests that nystagmus after active turning should conform to the mean of the responses after passive and apparent turning rather than to their sum. The results conform more closely to the mean than to the sum, but with greater weight given to vestibular inputs than to motor-proprioceptive inputs. Post-rotatory sensations of self-rotation were in the expected opposite direction after passive turning and were lower in magnitude after active turning. After apparent turning, sensations of self-rotation were in the same direction as those after attempted turning, an effect known as the antisomatogyral illusion.},\n\tauthor = {Howard, I. P. and Zacher, J. E. and Allison, R.S.},\n\tdate-modified = {2012-07-02 19:25:58 -0400},\n\tdoi = {10.1016/S0957-4271(97)00079-7},\n\tjournal = {Journal of Vestibular Research: Equilibrium and Orientation},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tnumber = {4},\n\tpages = {299-312},\n\ttitle = {Post-rotatory nystagmus and turning sensations after active and passive turning},\n\tvolume = {8},\n\tyear = {1998},\n\turl-1 = {https://doi.org/10.1016/S0957-4271(97)00079-7}}\n\n
\n
\n\n\n
\n The authors measured post-rotatory nystagmus and sensations of body rotation in standing subjects brought to rest in the dark after 3 minutes of each of the following conditions: (1) passive turning about the mid-body axis, involving only vestibular stimulation, (2) active turning about the mid-body axis, involving both vestibular stimulation and motor-proprioceptive activity in the legs, and (3) stepping round while remaining facing in the same direction on the center of a rotating platform with the head held in a stationary holder (apparent turning), involving only motor-proprioceptive activity. The same acceleration-velocity profile was used in all conditions. Post-rotatory nystagmus (slow phase) occurred in the same direction as passive body turning and was reduced in velocity after active body turning. After apparent turning, nystagmus was in the opposite direction to attempted body turning. The authors' theoretical analysis suggests that nystagmus after active turning should conform to the mean of the responses after passive and apparent turning rather than to their sum. The results conform more closely to the mean than to the sum, but with greater weight given to vestibular inputs than to motor-proprioceptive inputs. Post-rotatory sensations of self-rotation were in the expected opposite direction after passive turning and were lower in magnitude after active turning. After apparent turning, sensations of self-rotation were in the same direction as those after attempted turning, an effect known as the antisomatogyral illusion.\n
\n\n\n
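The combination rule tested above (post-rotatory nystagmus after active turning as a weighted mean of the passive and apparent responses, weighted toward the vestibular input, rather than their sum) is compact enough to state in code. A minimal Python sketch; the weight and the slow-phase velocities are illustrative values, not the paper's data.

def predicted_active_nystagmus(passive, apparent, w_vestibular=0.7):
    # Weighted mean of the vestibular-only (passive) and
    # proprioceptive-only (apparent) post-rotatory responses;
    # w_vestibular > 0.5 encodes the reported vestibular bias.
    return w_vestibular * passive + (1.0 - w_vestibular) * apparent

passive = 40.0    # slow-phase velocity after passive turning (deg/s)
apparent = -15.0  # opposite-signed response after apparent turning
print(predicted_active_nystagmus(passive, apparent))  # 23.5: reduced, as observed
print(passive + apparent)                             # 25.0: the simple-sum prediction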
\n\n\n
\n \n\n \n \n \n \n \n \n Temporal aspects of slant and inclination perception.\n \n \n \n \n\n\n \n Allison, R., Howard, I. P., Rogers, B. J., & Bridge, H.\n\n\n \n\n\n\n Perception, 27(11): 1287-304. 1998.\n \n\n\n\n
\n
@article{allison19981287-304,\n\tabstract = {Linear transformations (shear or scale transformations) of either horizontal or vertical disparity give rise to the percept of slant or inclination. It has been proposed that the percept of slant induced by vertical size disparity, known as Ogle's induced-size effect, and the analogous induced-shear effect, compensate for scale and shear distortions arising from aniseikonia, eccentric viewing, and cyclodisparity. We hypothesised that these linear transformations of vertical disparity are processed more slowly than equivalent transformations of horizontal disparity (horizontal shear and size disparity). We studied the temporal properties of the stereoscopic slant and inclination percepts that arose when subjects viewed stereograms with various combinations of horizontal and vertical size or shear disparities. We found no evidence to support our hypothesis. There were no clear differences in the build-up of percepts of slant or inclination induced by step changes in horizontal size or shear disparity and those induced by step changes in vertical size or shear disparity. Perceived slant and inclination decreased in a similar manner with increasing temporal frequency for modulations of transformations of both horizontal and vertical disparity. Considerable individual differences were found and several subjects experienced slant reversal, particularly with oscillating stimuli. An interesting finding was that perceived slant induced by modulations of dilation disparity was in the direction of the vertical component. This suggests the vertical size disparity mechanism has a higher temporal bandwidth than the horizontal size disparity mechanism. However, conflicting perspective information may play a dominant role in determining the temporal properties of perceived slant and inclination.},\n\tauthor = {Allison, R.S. and Howard, I. P. and Rogers, B. J. and Bridge, H.},\n\tdate-modified = {2012-07-02 19:22:46 -0400},\n\tdoi = {10.1068/p271287},\n\tjournal = {Perception},\n\tkeywords = {Stereopsis},\n\tnumber = {11},\n\tpages = {1287-304},\n\ttitle = {Temporal aspects of slant and inclination perception},\n\tvolume = {27},\n\tyear = {1998},\n\turl-1 = {https://doi.org/10.1068/p271287}}\n\n
\n
\n\n\n
\n Linear transformations (shear or scale transformations) of either horizontal or vertical disparity give rise to the percept of slant or inclination. It has been proposed that the percept of slant induced by vertical size disparity, known as Ogle's induced-size effect, and the analogous induced-shear effect, compensate for scale and shear distortions arising from aniseikonia, eccentric viewing, and cyclodisparity. We hypothesised that these linear transformations of vertical disparity are processed more slowly than equivalent transformations of horizontal disparity (horizontal shear and size disparity). We studied the temporal properties of the stereoscopic slant and inclination percepts that arose when subjects viewed stereograms with various combinations of horizontal and vertical size or shear disparities. We found no evidence to support our hypothesis. There were no clear differences in the build-up of percepts of slant or inclination induced by step changes in horizontal size or shear disparity and those induced by step changes in vertical size or shear disparity. Perceived slant and inclination decreased in a similar manner with increasing temporal frequency for modulations of transformations of both horizontal and vertical disparity. Considerable individual differences were found and several subjects experienced slant reversal, particularly with oscillating stimuli. An interesting finding was that perceived slant induced by modulations of dilation disparity was in the direction of the vertical component. This suggests the vertical size disparity mechanism has a higher temporal bandwidth than the horizontal size disparity mechanism. However, conflicting perspective information may play a dominant role in determining the temporal properties of perceived slant and inclination.\n
\n\n\n
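Ogle's induced-size effect, invoked in this abstract, has a standard first-order description: perceived slant follows the difference between the horizontal and vertical magnification of one eye's image, so equal-and-opposite percepts arise from horizontal versus vertical size disparity. A hedged Python sketch of that approximation; the log-ratio form and the vergence value are the usual small-angle idealisation, not the paper's model.

import numpy as np

def perceived_slant(hsr, vsr, vergence):
    # First-order slant estimate from the horizontal size ratio (HSR)
    # and vertical size ratio (VSR) of one eye's image relative to the
    # other's; exact constants depend on viewing geometry.
    return np.arctan(np.log(hsr / vsr) / vergence)

mu = np.deg2rad(3.0)  # illustrative vergence angle
print(np.rad2deg(perceived_slant(1.02, 1.00, mu)))  # 2% horizontal magnification: ~ +21 deg
print(np.rad2deg(perceived_slant(1.00, 1.02, mu)))  # 2% vertical magnification: ~ -21 deg (induced effect)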
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n incollection\n \n \n (4)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n Depth from moving uncorrelated random-dot displays.\n \n \n \n\n\n \n Howard, I., Allison, R., & Howard, A.\n\n\n \n\n\n\n In Investigative Ophthalmology and Visual Science, volume 39. 1998.\n \n\n\n\n
\n
@incollection{Howard:1998wq,\n\tauthor = {Howard, I.P. and Allison, R.S. and Howard, A.},\n\tbooktitle = {Investigative Ophthalmology and Visual Science},\n\tdate-added = {2011-05-06 17:17:20 -0400},\n\tdate-modified = {2011-05-18 15:46:16 -0400},\n\tkeywords = {Depth perception},\n\tnumber = {4},\n\tread = {1},\n\ttitle = {Depth from moving uncorrelated random-dot displays},\n\tvolume = {39},\n\tyear = {1998}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Effects of stimulus size and eccentricity on horizontal vergence.\n \n \n \n\n\n \n Fang, X. P., Howard, I. P., Allison, R., & Zacher, J. E.\n\n\n \n\n\n\n In Investigative Ophthalmology and Visual Science, volume 39. 1998.\n \n\n\n\n
\n
@incollection{Fang:1998iq,\n\tauthor = {Fang, X. P. and Howard, I. P. and Allison, R.S. and Zacher, J. E.},\n\tbooktitle = {Investigative Ophthalmology and Visual Science},\n\tdate-added = {2011-05-06 17:16:11 -0400},\n\tdate-modified = {2011-05-18 15:51:37 -0400},\n\tkeywords = {Vergence},\n\tnumber = {4},\n\ttitle = {Effects of stimulus size and eccentricity on horizontal vergence},\n\tvolume = {39},\n\tyear = {1998}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Disparity-perspective interactions in slant perception.\n \n \n \n\n\n \n Allison, R., & Howard, I. P.\n\n\n \n\n\n\n In Investigative Ophthalmology and Visual Science, volume 39, pages 4. 1998.\n \n\n\n\n
\n
@incollection{Allison:1998mm,\n\tauthor = {Allison, R.S. and Howard, I. P.},\n\tbooktitle = {Investigative Ophthalmology and Visual Science},\n\tdate-added = {2011-05-06 17:15:03 -0400},\n\tdate-modified = {2011-05-22 13:54:43 -0400},\n\tkeywords = {Stereopsis},\n\tpages = {4},\n\ttitle = {Disparity-perspective interactions in slant perception},\n\tvolume = {39},\n\tyear = {1998}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Motion in depth can be elicited by dichoptically uncorrelated textures.\n \n \n \n \n\n\n \n Allison, R., Howard, I. P., & Howard, A.\n\n\n \n\n\n\n In Perception, volume 27. 1998.\n \n\n\n\n
\n
@incollection{Allison:1998yu,\n\tabstract = {Motion in depth can be elicited by dichoptically uncorrelated textures\n\nR S Allison, I P Howard, A Howard\n\nOpposed motion of the stereoscopic half-images of an object evokes a compelling percept of motion in depth. This percept could arise from positional disparity or from interocular differences in motion signals. Correlated dynamic random-dot stereograms have been used to dissociate position and motion disparity. We have taken a different approach using uncorrelated random-dot displays. The stimulus consisted of two random-dot displays, one just above a central fixation point and a second just below the fixation point. One of these served as the test image and the other as the comparison image. The test image was typically binocularly uncorrelated; the comparison image was correlated. The half-images of both displays oscillated horizontally in counterphase. The boundaries of each image were stationary, so that there were no moving deletion - accretion boundaries. Subjects adjusted the oscillation of the comparison display until its perceived velocity matched that of the test display. The effects of variation of dot density, dot lifetime, stimulus velocity, and oscillation frequency were studied. All subjects perceived strong apparent motion in depth in the uncorrelated display. Motion in depth was often accompanied by the appearance of sideways motion. No consistent impression of depth was obtained if the motion was stopped. Thus, dynamic depth can be created by changing disparity in a display with zero mean instantaneous disparity. We propose that the impression of motion in depth arises because of the consistent sign of changing disparity between randomly paired dots.\n\n},\n\tauthor = {Allison, R.S. and Howard, I. P. and Howard, A.},\n\tbooktitle = {Perception},\n\tdate-added = {2011-05-06 17:11:50 -0400},\n\tdate-modified = {2014-09-26 00:17:41 +0000},\n\tkeywords = {Motion in depth},\n\ttitle = {Motion in depth can be elicited by dichoptically uncorrelated textures},\n\turl = {http://percept.eecs.yorku.ca/papers/ecvp 1998.pdf},\n\tvolume = {27},\n\tyear = {1998},\n\turl-1 = {http://percept.eecs.yorku.ca/papers/ecvp%201998.pdf}}\n\n
\n
\n\n\n
\n Motion in depth can be elicited by dichoptically uncorrelated textures. R S Allison, I P Howard, A Howard. Opposed motion of the stereoscopic half-images of an object evokes a compelling percept of motion in depth. This percept could arise from positional disparity or from interocular differences in motion signals. Correlated dynamic random-dot stereograms have been used to dissociate position and motion disparity. We have taken a different approach using uncorrelated random-dot displays. The stimulus consisted of two random-dot displays, one just above a central fixation point and a second just below the fixation point. One of these served as the test image and the other as the comparison image. The test image was typically binocularly uncorrelated; the comparison image was correlated. The half-images of both displays oscillated horizontally in counterphase. The boundaries of each image were stationary, so that there were no moving deletion-accretion boundaries. Subjects adjusted the oscillation of the comparison display until its perceived velocity matched that of the test display. The effects of variation of dot density, dot lifetime, stimulus velocity, and oscillation frequency were studied. All subjects perceived strong apparent motion in depth in the uncorrelated display. Motion in depth was often accompanied by the appearance of sideways motion. No consistent impression of depth was obtained if the motion was stopped. Thus, dynamic depth can be created by changing disparity in a display with zero mean instantaneous disparity. We propose that the impression of motion in depth arises because of the consistent sign of changing disparity between randomly paired dots. \n
\n\n\n
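The stimulus logic in this abstract can be reproduced in a few lines: the two half-images are statistically uncorrelated, yet their counterphase oscillation gives the change of disparity a consistent sign even though the instantaneous disparities of random dot pairings are centred on zero. A small numpy sketch; field size, amplitude, and frame rate are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
n_dots, amp, freq, fps = 200, 0.25, 1.0, 60   # deg, Hz, frames/s

left = rng.uniform(-5, 5, n_dots)    # horizontal dot positions, left eye (deg)
right = rng.uniform(-5, 5, n_dots)   # independent positions: uncorrelated half-images

t = np.arange(fps) / fps             # one second of frames
shift = amp * np.sin(2 * np.pi * freq * t)

# Counterphase motion: the left half-image shifts by +s, the right by -s,
# so the disparity of ANY left/right pairing changes by -2*ds/dt per frame.
inst = (right - shift[:, None]) - (left + shift[:, None])  # frames x pairings
print(inst.mean(), inst.std())        # mean ~0 against a several-deg spread: no coherent static depth
print(np.diff(inst, axis=0)[0][:3])   # frame-to-frame change identical for every pairing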
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n misc\n \n \n (4)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n Effects of scleral search coil wear on visual function and acuity.\n \n \n \n\n\n \n Zacher, J. E., Irving, B., Allison, R., & Callander, M.\n\n\n \n\n\n\n Poster presented at the annual meeting of the Association for Research in Vision and Ophthalmology, (ARVO), Fort Lauderdale, Florida, 05 1998.\n \n\n\n\n
\n
@misc{zacher1998,\n\tauthor = {Zacher, J. E. and Irving, B.I. and Allison, R.S. and Callander, M.E},\n\tdate-added = {2011-05-09 17:08:07 -0400},\n\tdate-modified = {2011-05-18 15:50:44 -0400},\n\thowpublished = {Poster presented at the annual meeting of the Association for Research in Vision and Ophthalmology, (ARVO), Fort Lauderdale, Florida},\n\tkeywords = {Eye Movements & Tracking},\n\tmonth = {05},\n\ttitle = {Effects of scleral search coil wear on visual function and acuity},\n\tyear = {1998}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Effects of stimulus size and eccentricity on horizontal vergence.\n \n \n \n\n\n \n Fang, X., Howard, I. P., Allison, R., & Zacher, J.\n\n\n \n\n\n\n Poster presented at the annual conference for the Association for Research in Vision and Ophthalmology, Fort Lauderdale, Florida, 04 1998.\n \n\n\n\n
\n
@misc{Fang:1998ft,\n\tauthor = {Fang, X. and Howard, I. P. and Allison, R.S. and Zacher, J.E.},\n\tdate-added = {2011-05-09 16:57:11 -0400},\n\tdate-modified = {2011-05-18 15:51:18 -0400},\n\thowpublished = {Poster presented at the annual conference for the Association for Research in Vision and Ophthalmology, Fort Lauderdale, Florida},\n\tkeywords = {Vergence},\n\tmonth = {04},\n\ttitle = {Effects of stimulus size and eccentricity on horizontal vergence},\n\tyear = {1998}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Disparity-Perspective Interactions in Slant Perception.\n \n \n \n\n\n \n Allison, R., & Howard, I. P.\n\n\n \n\n\n\n Poster presented at the annual conference for the Association for Research in Vision and Ophthalmology, Fort Lauderdale, Florida, 05 1998.\n \n\n\n\n
\n
@misc{Allison:1998lt,\n\tauthor = {Allison, R.S. and Howard, I. P.},\n\tdate-added = {2011-05-09 16:44:51 -0400},\n\tdate-modified = {2011-05-22 18:04:58 -0400},\n\thowpublished = {Poster presented at the annual conference for the Association for Research in Vision and Ophthalmology, Fort Lauderdale, Florida},\n\tkeywords = {Stereopsis},\n\tmonth = {05},\n\ttitle = {Disparity-Perspective Interactions in Slant Perception},\n\tyear = {1998}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Motion in depth can be elicited by dichoptically uncorrelated textures.\n \n \n \n \n\n\n \n Allison, R., Howard, I. P., & Howard, A.\n\n\n \n\n\n\n Paper presented at the European Conference on Visual Perception, Oxford, UK, 08 1998.\n \n\n\n\n
\n
@misc{Allison:1998kc,\n\tauthor = {Allison, R.S. and Howard, I. P. and Howard, A.},\n\tdate-added = {2011-05-09 16:43:38 -0400},\n\tdate-modified = {2014-09-26 00:16:46 +0000},\n\thowpublished = {Paper presented at the European Conference on Visual Perception, Oxford, UK},\n\tkeywords = {Motion in depth},\n\tmonth = {08},\n\ttitle = {Motion in depth can be elicited by dichoptically uncorrelated textures},\n\turl = {http://percept.eecs.yorku.ca/papers/ecvp 1998.pdf},\n\tyear = {1998},\n\turl-1 = {http://percept.eecs.yorku.ca/papers/ecvp%201998.pdf}}\n\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n\n\n\n
\n
\n\n
\n
\n  \n 1997\n \n \n (4)\n \n \n
\n
\n \n \n
\n
\n  \n article\n \n \n (2)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n The dynamics of vertical vergence.\n \n \n \n \n\n\n \n Howard, I. P., Allison, R., & Zacher, J. E.\n\n\n \n\n\n\n Exp Brain Res, 116(1): 153-9. 1997.\n \n\n\n\n
\n
@article{allison1997153-9,\n\tabstract = {We measured the gain and phase of vertical vergence in response to disjunctive vertical oscillations of dichoptic textured displays. The texture elements were m-scaled to equate visibility over the area of the display and were aperiodic and varied in shape so as to avoid spurious binocular matches. The display subtended 65 degrees and oscillated through peak-to-peak amplitudes from 18 arc min to 4 degrees at frequencies from 0.05 to 2 Hz - larger ranges than used in previous investigations. The gain of vergence was near 1 when the stimulus oscillated at 18 arc min at a frequency of 0.1 Hz or less. As the amplitude of stimulus oscillation increased from 18 arc min to 4 degrees, vergence gain decreased at all frequencies, which is evidence of a nonlinearity. Gain declined with increasing stimulus frequency but was still about 0.5 at 2 Hz for an amplitude of 18 arc min. Phase lag increased from less than 10 degrees at a stimulus frequency of 0.05 Hz to between 100 degrees and 145 degrees at 2 Hz. Overall, the dynamics of vertical vergence resemble the dynamics of horizontal vergence and cyclovergence.},\n\tauthor = {Howard, I. P. and Allison, R.S. and Zacher, J. E.},\n\tdate-modified = {2012-07-02 19:27:24 -0400},\n\tdoi = {10.1007/PL00005735},\n\tjournal = {Exp Brain Res},\n\tkeywords = {Vergence},\n\tnumber = {1},\n\tpages = {153-9},\n\ttitle = {The dynamics of vertical vergence},\n\tvolume = {116},\n\tyear = {1997},\n\turl-1 = {https://doi.org/10.1007/PL00005735}}\n\n
\n
\n\n\n
\n We measured the gain and phase of vertical vergence in response to disjunctive vertical oscillations of dichoptic textured displays. The texture elements were m-scaled to equate visibility over the area of the display and were aperiodic and varied in shape so as to avoid spurious binocular matches. The display subtended 65 degrees and oscillated through peak-to-peak amplitudes from 18 arc min to 4 degrees at frequencies from 0.05 to 2 Hz - larger ranges than used in previous investigations. The gain of vergence was near 1 when the stimulus oscillated at 18 arc min at a frequency of 0.1 Hz or less. As the amplitude of stimulus oscillation increased from 18 arc min to 4 degrees, vergence gain decreased at all frequencies, which is evidence of a nonlinearity. Gain declined with increasing stimulus frequency but was still about 0.5 at 2 Hz for an amplitude of 18 arc min. Phase lag increased from less than 10 degrees at a stimulus frequency of 0.05 Hz to between 100 degrees and 145 degrees at 2 Hz. Overall, the dynamics of vertical vergence resemble the dynamics of horizontal vergence and cyclovergence.\n
\n\n\n
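Gain and phase numbers of the kind reported above are conventionally obtained by fitting a sinusoid at the stimulus frequency to the stimulus and response traces. A generic Python sketch of that frequency-response analysis (not the authors' actual pipeline), checked on synthetic data with a known gain of 0.5 and a 30 deg lag.

import numpy as np

def gain_phase(stim, resp, freq, fs):
    # Least-squares fit of sin/cos components at the stimulus frequency;
    # returns response gain and phase lag (deg) relative to the stimulus.
    t = np.arange(len(stim)) / fs
    X = np.column_stack([np.sin(2 * np.pi * freq * t), np.cos(2 * np.pi * freq * t)])
    (a_s, b_s), *_ = np.linalg.lstsq(X, stim, rcond=None)
    (a_r, b_r), *_ = np.linalg.lstsq(X, resp, rcond=None)
    gain = np.hypot(a_r, b_r) / np.hypot(a_s, b_s)
    lag = np.arctan2(b_s, a_s) - np.arctan2(b_r, a_r)
    return gain, np.rad2deg((lag + np.pi) % (2 * np.pi) - np.pi)

fs, f = 250.0, 1.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)
stim = 0.3 * np.sin(2 * np.pi * f * t)  # 18 arc min = 0.3 deg amplitude
resp = 0.15 * np.sin(2 * np.pi * f * t - np.deg2rad(30)) + 0.01 * rng.standard_normal(t.size)
print(gain_phase(stim, resp, f, fs))    # ~ (0.5, 30.0)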
\n\n\n
\n \n\n \n \n \n \n \n \n Vestibulo-ocular reflex deficits to rapid head turns following intratympanic gentamicin instillation.\n \n \n \n \n\n\n \n Allison, R., Eizenman, M., Tomlinson, R. D., Nedzelski, J., & Sharpe, J. A.\n\n\n \n\n\n\n J Vestib Res, 7(5): 369–80. 1997.\n \n\n\n\n
\n
@article{allison1997369-80,\n\tabstract = {The response of the vestibulo-ocular reflex following unilateral vestibular deafferentation by gentamicin ablation was studied using transient stimuli. The response to these rapid passive head turns showed a strong asymmetry with permanent, reduced gains toward the side of lesion. These gain reductions have large variation (gains of 0.26 to 0.83), which may result from preferential sparing of regularly firing afferent fibers following gentamicin ablation. Based on the size and nature of the nonlinearity, an explanation based on Ewald's second law was discounted.},\n\tauthor = {Allison, R.S. and Eizenman, M. and Tomlinson, R. D. and Nedzelski, J. and Sharpe, J. A.},\n\tdate-modified = {2019-02-03 09:17:35 -0500},\n\tdoi = {10.1016/S0957-4271(96)00162-0},\n\tjournal = {J Vestib Res},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tnumber = {5},\n\tpages = {369--80},\n\ttitle = {Vestibulo-ocular reflex deficits to rapid head turns following intratympanic gentamicin instillation},\n\tvolume = {7},\n\tyear = {1997},\n\turl-1 = {https://doi.org/10.1016/S0957-4271(96)00162-0}}\n\n
\n
\n\n\n
\n The response of the vestibulo-ocular reflex following unilateral vestibular deafferentation by gentamicin ablation was studied using transient stimuli. The response to these rapid passive head turns showed a strong asymmetry with permanent, reduced gains toward the side of lesion. These gain reductions have large variation (gains of 0.26 to 0.83), which may result from preferential sparing of regularly firing afferent fibers following gentamicin ablation. Based on the size and nature of the nonlinearity, an explanation based on Ewald's second law was discounted.\n
\n\n\n
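The directional deficit described above is typically quantified as a VOR gain (compensatory eye velocity over head velocity) computed separately for head turns toward and away from the lesioned side. A minimal Python sketch with synthetic velocity traces standing in for recordings; the signs, threshold, and values are illustrative.

import numpy as np

def directional_vor_gains(head_vel, eye_vel, thresh=50.0):
    # Pool samples from rapid turns in each direction (deg/s threshold)
    # and compute gain = -eye/head for each; by the convention assumed
    # here, positive head velocity is a turn toward the lesioned side.
    toward = head_vel > thresh
    away = head_vel < -thresh
    g_toward = -eye_vel[toward].sum() / head_vel[toward].sum()
    g_away = -eye_vel[away].sum() / head_vel[away].sum()
    return g_toward, g_away

t = np.linspace(0, 1, 1000)
head_vel = 200 * np.sin(2 * np.pi * 2 * t)              # alternating rapid head turns
eye_vel = np.where(head_vel > 0, -0.4, -0.9) * head_vel  # deficit toward the lesion
print(directional_vor_gains(head_vel, eye_vel))          # ~ (0.4, 0.9)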
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n incollection\n \n \n (5)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n Visually induced self inversion and levitation.\n \n \n \n\n\n \n Howard, I., Allison, R., Groen, E., Jenkin, H., & Zacher, J.\n\n\n \n\n\n\n In International Conference on Vision and Action. North York, Ontario, 1997.\n \n\n\n\n
\n
@incollection{Howard:1997bl,\n\taddress = {North York, Ontario},\n\tauthor = {Howard, I.P. and Allison, R.S. and Groen, E. and Jenkin, H. and Zacher, J.E.},\n\tbooktitle = {International Conference on Vision and Action},\n\tdate-added = {2011-05-09 11:31:30 -0400},\n\tdate-modified = {2012-07-02 22:45:39 -0400},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\ttitle = {Visually induced self inversion and levitation},\n\tyear = {1997}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Effects of field size on vertical vergence.\n \n \n \n\n\n \n Fang, X. P., Howard, I. P., Allison, R., & Zacher, J. E.\n\n\n \n\n\n\n In Investigative Ophthalmology and Visual Science, volume 38(4), pages S986. 1997.\n \n\n\n\n
\n
@incollection{Fang:1997sr,\n\tauthor = {Fang, X. P. and Howard, I. P. and Allison, R.S. and Zacher, J. E.},\n\tbooktitle = {Investigative Ophthalmology and Visual Science, 38(4): S986},\n\tdate-added = {2011-05-09 10:52:02 -0400},\n\tdate-modified = {2011-05-22 13:51:30 -0400},\n\tkeywords = {Vergence},\n\tnumber = {4},\n\ttitle = {Effects of field size on vertical vergence},\n\tvolume = {38},\n\tyear = {1997}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Effects of stimulus size and eccentricity on vertical vergence.\n \n \n \n\n\n \n Fang, X., Howard, I. P., Allison, R., & Zacher, J. E.\n\n\n \n\n\n\n In Investigative Ophthalmology and Visual Science, volume 38, pages 4574-4574. 1997.\n \n\n\n\n
\n
@incollection{allison19974574-4574,\n\tauthor = {Fang, X. and Howard, I. P. and Allison, R.S. and Zacher, J. E.},\n\tbooktitle = {Investigative Ophthalmology and Visual Science},\n\tdate-modified = {2011-09-12 21:55:15 -0400},\n\tjournal = {Investigative Ophthalmology and Visual Science},\n\tkeywords = {Vergence},\n\tnumber = {4},\n\tpages = {4574-4574},\n\ttitle = {Effects of stimulus size and eccentricity on vertical vergence},\n\tvolume = {38},\n\tyear = {1997}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Efficiency of slant and inclination perception as a function of temporal frequency.\n \n \n \n\n\n \n Allison, R., Howard, I. P., Rogers, B. J., & Bridge, H.\n\n\n \n\n\n\n In Investigative Ophthalmology and Visual Science, volume 38, pages 4252-4252. 1997.\n \n\n\n\n
\n
@incollection{allison19974252-4252,\n\tauthor = {Allison, R.S. and Howard, I. P. and Rogers, B. J. and Bridge, H.},\n\tbooktitle = {Investigative Ophthalmology and Visual Science},\n\tdate-modified = {2011-09-12 21:49:57 -0400},\n\tjournal = {Investigative Ophthalmology and Visual Science},\n\tkeywords = {Stereopsis},\n\tnumber = {4},\n\tpages = {4252-4252},\n\ttitle = {Efficiency of slant and inclination perception as a function of temporal frequency},\n\tvolume = {38},\n\tyear = {1997}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Temporal characteristics of stereoscopic slant perception.\n \n \n \n \n\n\n \n Allison, R., Howard, I. P., Rogers, B. J., & Bridge, H.\n\n\n \n\n\n\n In Perception, volume 26, pages 1. 1997.\n \n\n\n\n
\n
@incollection{allison19971,\n\tauthor = {Allison, R.S. and Howard, I. P. and Rogers, B. J. and Bridge, H.},\n\tbooktitle = {Perception},\n\tdate-modified = {2011-09-12 22:05:43 -0400},\n\tjournal = {Perception},\n\tkeywords = {Stereopsis},\n\tnumber = {10},\n\tpages = {1},\n\ttitle = {Temporal characteristics of stereoscopic slant perception},\n\turl-1 = {https://percept.eecs.yorku.ca/papers/ava 1997 conference.pdf},\n\tvolume = {26},\n\tyear = {1997}}\n\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n misc\n \n \n (6)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n Tumbling and Levitation.\n \n \n \n\n\n \n Howard, I., Allison, R., Groen, E., Jenkin, H., & Zacher, J.\n\n\n \n\n\n\n Paper presented at International Conference on Vision and Action, York University, North York, Ontario, 06 1997.\n \n\n\n\n
\n
@misc{Howard:1997fv,\n\tauthor = {Howard, I.P. and Allison, R.S. and Groen, E. and Jenkin, H. and Zacher, J.E.},\n\tdate-added = {2011-05-09 17:01:03 -0400},\n\tdate-modified = {2011-05-18 16:24:38 -0400},\n\thowpublished = {Paper presented at International Conference on Vision and Action, York University, North York, Ontario},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tmonth = {06},\n\ttitle = {Tumbling and Levitation},\n\tyear = {1997}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Effects of stimulus size and eccentricity on vertical vergence.\n \n \n \n\n\n \n Fang, X., Howard, I. P., Allison, R., & Zacher, J.\n\n\n \n\n\n\n Poster presented at the annual conference for the Association for Research in Vision and Ophthalmology, Fort Lauderdale, Florida, 04 1997.\n \n\n\n\n
\n
@misc{Fang:1997tf,\n\tauthor = {Fang, X. and Howard, I. P. and Allison, R.S. and Zacher, J.E.},\n\tdate-added = {2011-05-09 16:59:18 -0400},\n\tdate-modified = {2011-05-18 15:52:10 -0400},\n\thowpublished = {Poster presented at the annual conference for the Association for Research in Vision and Ophthalmology, Fort Lauderdale, Florida},\n\tkeywords = {Vergence},\n\tmonth = {04},\n\ttitle = {Effects of stimulus size and eccentricity on vertical vergence},\n\tyear = {1997}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Effects of stimulus size and eccentricity on vertical vergence.\n \n \n \n\n\n \n Fang, X., Howard, I., Allison, R., & Zacher, J.\n\n\n \n\n\n\n Poster presented at International Conference on Vision and Action, York University, North York, Ontario, 06 1997.\n \n\n\n\n
\n
@misc{Fang:1997tn,\n\tauthor = {Fang, X. and Howard, I.P. and Allison, R.S. and Zacher, J.E.},\n\tdate-added = {2011-05-09 16:58:06 -0400},\n\tdate-modified = {2011-05-18 15:52:22 -0400},\n\thowpublished = {Poster presented at International Conference on Vision and Action, York University, North York, Ontario},\n\tkeywords = {Vergence},\n\tmonth = {06},\n\ttitle = {Effects of stimulus size and eccentricity on vertical vergence},\n\tyear = {1997}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Efficiency of Slant and Inclination Perception as a Function of Temporal Frequency.\n \n \n \n\n\n \n Allison, R., Howard, I. P., Rogers, B., & Bridge, H.\n\n\n \n\n\n\n Poster presented at the annual conference for the Association for Research in Vision and Ophthalmology, Fort Lauderdale, Florida, 04 1997.\n \n\n\n\n
\n
@misc{Allison:1997hy,\n\tauthor = {Allison, R.S. and Howard, I. P. and Rogers, B.J. and Bridge, H.},\n\tdate-added = {2011-05-09 16:48:04 -0400},\n\tdate-modified = {2011-05-18 15:53:12 -0400},\n\thowpublished = {Poster presented at the annual conference for the Association for Research in Vision and Ophthalmology, Fort Lauderdale, Florida},\n\tkeywords = {Stereopsis},\n\tmonth = {04},\n\ttitle = {Efficiency of Slant and Inclination Perception as a Function of Temporal Frequency},\n\tyear = {1997}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Vertical Disparity, Cue Conflict and Virtual Reality Displays.\n \n \n \n\n\n \n Allison, R.\n\n\n \n\n\n\n Poster presented at the Institute for Space and Terrestrial Science AGM, Toronto, Ontario, 06 1997.\n \n\n\n\n
\n
@misc{Allison:1997cg,\n\tauthor = {Allison, R.S.},\n\tdate-added = {2011-05-09 16:46:52 -0400},\n\tdate-modified = {2011-05-11 13:24:03 -0400},\n\thowpublished = {Poster presented at the Institute for Space and Terrestrial Science AGM, Toronto, Ontario},\n\tkeywords = {Augmented & Virtual Reality},\n\tmonth = {06},\n\ttitle = {Vertical Disparity, Cue Conflict and Virtual Reality Displays},\n\tyear = {1997}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Temporal Aspects of Stereoscopic Slant Perception.\n \n \n \n \n\n\n \n Allison, R., Howard, I. P., Rogers, B., & Bridge, H\n\n\n \n\n\n\n Poster presented at the AVA Conference on Depth Perception, Surrey, UK, 09 1997.\n \n\n\n\n
\n
@misc{Allison:1997wj,\n\tauthor = {Allison, R.S. and Howard, I. P. and Rogers, B.J. and Bridge, H},\n\tdate-added = {2011-05-09 16:45:59 -0400},\n\tdate-modified = {2011-05-18 16:17:37 -0400},\n\thowpublished = {Poster presented at the AVA Conference on Depth Perception, Surrey, UK},\n\tkeywords = {Stereopsis},\n\tmonth = {09},\n\ttitle = {Temporal Aspects of Stereoscopic Slant Perception},\n\turl-1 = {https://percept.eecs.yorku.ca/papers/ava 1997 conference.pdf},\n\tyear = {1997}}\n\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n techreport\n \n \n (1)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n Judgements of the inclination and slant of surfaces in stereoscopic display systems.\n \n \n \n\n\n \n Howard, I. P., Kaneko, H., Pierce, B., Allison, R., & Zacher, J. E.\n\n\n \n\n\n\n Technical Report 97:W7711-4-7217, PWGSC Report, 1997.\n \n\n\n\n
\n
@techreport{allison1997,\n\tauthor = {Howard, I. P. and Kaneko, H. and Pierce, B. and Allison, R.S. and Zacher, J. E.},\n\tdate-added = {2011-05-09 16:09:20 -0400},\n\tdate-modified = {2011-05-18 16:04:36 -0400},\n\tinstitution = {PWGSC Report},\n\tkeywords = {Stereopsis},\n\tnumber = {97:W7711-4-7217},\n\ttitle = {Judgements of the inclination and slant of surfaces in stereoscopic display systems},\n\tyear = {1997}}\n\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n\n\n\n
\n
\n\n
\n
\n  \n 1996\n \n \n (3)\n \n \n
\n
\n \n \n
\n
\n  \n article\n \n \n (1)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Combined head and eye tracking system for dynamic testing of the vestibular system.\n \n \n \n \n\n\n \n Allison, R., Eizenman, M., & Cheung, B. S. K.\n\n\n \n\n\n\n IEEE Transactions on Biomedical Engineering, 43(11): 1073-1082. 1996.\n \n\n\n\n
\n
@article{allison19961073-1082,\n\tabstract = {We present a combined head-eye tracking system suitable for use with free head movement during natural activities. This system provides an integrated head and eye position measurement while allowing for a large range of head movement (approx 1.8 m of head translation is tolerated). Six degrees of freedom of head motion and two degrees of freedom of eye motion are measured by the system. The system was designed to be useful for the evaluation of the vestibulo-ocular reflex (VOR). The VOR generates compensatory eye movements in order to stabilize gaze during linear or rotational motion of the head. Current clinical and basic research evaluation of the VOR has used a restricted range of head motion, mainly low-frequency yaw rotation. An integrated eye-head tracking system such as the one presented here allows the VOR response to linear and angular head motion to be studied in a more physiologically relevant manner. Two examples of the utility of the integrated head and eye tracking system in evaluating the vestibular response to linear and angular motion are presented.},\n\tauthor = {Allison, R.S. and Eizenman, M. and Cheung, B. S. K.},\n\tdate-modified = {2012-07-02 19:28:16 -0400},\n\tdoi = {10.1109/10.541249},\n\tjournal = {IEEE Transactions on Biomedical Engineering},\n\tkeywords = {Eye Movements & Tracking},\n\tnumber = {11},\n\tpages = {1073-1082},\n\ttitle = {Combined head and eye tracking system for dynamic testing of the vestibular system},\n\tvolume = {43},\n\tyear = {1996},\n\turl-1 = {https://doi.org/10.1109/10.541249}}\n\n
\n
\n\n\n
\n We present a combined head-eye tracking system suitable for use with free head movement during natural activities. This system provides an integrated head and eye position measurement while allowing for a large range of head movement (approx 1.8 m of head translation is tolerated). Six degrees of freedom of head motion and two degrees of freedom of eye motion are measured by the system. The system was designed to be useful for the evaluation of the vestibulo-ocular reflex (VOR). The VOR generates compensatory eye movements in order to stabilize gaze during linear or rotational motion of the head. Current clinical and basic research evaluation of the VOR has used a restricted range of head motion, mainly low-frequency yaw rotation. An integrated eye-head tracking system such as the one presented here allows the VOR response to linear and angular head motion to be studied in a more physiologically relevant manner. Two examples of the utility of the integrated head and eye tracking system in evaluating the vestibular response to linear and angular motion are presented.\n
\n\n\n
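The integration step such a combined tracker has to perform (composing a six-degree-of-freedom head pose with a two-degree-of-freedom eye-in-head direction to obtain gaze in world coordinates) can be written in a few lines. A hedged Python sketch; the axis conventions, angle order, and example numbers are assumptions for illustration.

import numpy as np

def gaze_in_world(head_R, head_t, azimuth, elevation):
    # Eye direction in head coordinates (x right, y up, z forward),
    # from the 2-DOF eye-in-head angles (radians).
    d_head = np.array([
        np.cos(elevation) * np.sin(azimuth),
        np.sin(elevation),
        np.cos(elevation) * np.cos(azimuth),
    ])
    # Rotate into the world frame; the gaze ray starts at the head
    # (strictly, the eye) position and points along the returned direction.
    return head_t, head_R @ d_head

# 20 deg head yaw with the eye 10 deg left in the head: the gaze ray
# ends up 10 deg right of straight ahead in world coordinates.
yaw = np.deg2rad(20)
R = np.array([[ np.cos(yaw), 0, np.sin(yaw)],
              [ 0,           1, 0          ],
              [-np.sin(yaw), 0, np.cos(yaw)]])
origin, d = gaze_in_world(R, np.zeros(3), np.deg2rad(-10), 0.0)
print(np.rad2deg(np.arctan2(d[0], d[2])))  # ~10.0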
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n incollection\n \n \n (3)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n VOR deficits during rapid head turns following intratympanic gentamicin instillation.\n \n \n \n\n\n \n Tomlinson, R., Allison, R., Eizenman, M., Nedzelski, J., & Sharpe, J.\n\n\n \n\n\n\n In Journal of Vestibular Research Supplement, volume 6(4S), pages S37. 1996.\n \n\n\n\n
\n
@incollection{Tomlinson:1996mb,\n\tabstract = {The response of the vestibulo-ocular reflex following unilateral vestibular deafferentation by gentamicin ablation was studied using transient stimuli. The response to these rapid passive head turns showed a strong asymmetry with permanent, reduced gains toward the side of lesion. These gain reductions have large variation (gains of 0.26 to 0.83), which may result from preferential sparing of regularly firing afferent fibers following gentamicin ablation. Based on the size and nature of the nonlinearity, an explanation based on Ewald's second law was discounted.},\n\tauthor = {Tomlinson, R.D. and Allison, R.S. and Eizenman, M. and Nedzelski, J. and Sharpe, J.A.},\n\tbooktitle = {Journal of Vestibular Research Supplement, 6(4S): S37},\n\tdate-added = {2011-05-09 10:54:58 -0400},\n\tdate-modified = {2011-05-18 16:31:25 -0400},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\ttitle = {VOR deficits during rapid head turns following intratympanic gentamicin instillation},\n\tvolume = {6},\n\tyear = {1996}}\n\n
\n
\n\n\n
\n The response of the vestibulo-ocular reflex following unilateral vestibular deafferentation by gentamicin ablation was studied using transient stimuli. The response to these rapid passive head turns showed a strong asymmetry with permanent, reduced gains toward the side of lesion. These gain reductions have large variation (gains of 0.26 to 0.83), which may result from preferential sparing of regularly firing afferent fibers following gentamicin ablation. Based on the size and nature of the nonlinearity, an explanation based on Ewald's second law was discounted.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Effects of visual illusions on eye movements.\n \n \n \n\n\n \n Zacher, J. E., Allison, R., & Howard, I. P.\n\n\n \n\n\n\n In Investigative Ophthalmology and Visual Science, volume 37, pages S527. 1996.\n \n\n\n\n
\n
@incollection{allison1996S527,\n\tauthor = {Zacher, J. E. and Allison, R.S. and Howard, I. P.},\n\tbooktitle = {Investigative Ophthalmology and Visual Science},\n\tdate-modified = {2011-09-12 21:59:42 -0400},\n\tjournal = {Investigative Ophthalmology and Visual Science},\n\tkeywords = {Eye Movements & Tracking},\n\tnumber = {3},\n\tpages = {S527},\n\ttitle = {Effects of visual illusions on eye movements},\n\tvolume = {37},\n\tyear = {1996}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Dynamic response of the vertical vergence system.\n \n \n \n\n\n \n Allison, R., Howard, I. P., Zacher, J. E., & Bridge, H.\n\n\n \n\n\n\n In Investigative Ophthalmology and Visual Science, volume 37, pages S165. 1996.\n \n\n\n\n
\n
@incollection{allison1996S165,\n\tauthor = {Allison, R.S. and Howard, I. P. and Zacher, J. E. and Bridge, H.},\n\tbooktitle = {Investigative Ophthalmology and Visual Science},\n\tdate-modified = {2011-09-12 22:01:01 -0400},\n\tjournal = {Investigative Ophthalmology and Visual Science},\n\tkeywords = {Vergence},\n\tnumber = {3},\n\tpages = {S165},\n\ttitle = {Dynamic response of the vertical vergence system},\n\tvolume = {37},\n\tyear = {1996}}\n\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n misc\n \n \n (6)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n Effects of Visual Illusions on Eye Movements.\n \n \n \n\n\n \n Zacher, J. E., Allison, R., & Howard, I. P.\n\n\n \n\n\n\n Poster presented at the annual meeting of the Association for Research in Vision and Ophthalmology, (ARVO), Fort Lauderdale, Florida, 04 1996.\n \n\n\n\n
\n
@misc{Zacher:1996jv,\n\tauthor = {Zacher, J. E. and Allison, R.S. and Howard, I. P.},\n\tdate-added = {2011-05-09 17:09:05 -0400},\n\tdate-modified = {2011-05-18 15:52:34 -0400},\n\thowpublished = {Poster presented at the annual meeting of the Association for Research in Vision and Ophthalmology, (ARVO), Fort Lauderdale, Florida},\n\tkeywords = {Eye Movements & Tracking},\n\tmonth = {04},\n\ttitle = {Effects of Visual Illusions on Eye Movements},\n\tyear = {1996}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n VOR Deficits During Rapid Head Turns Following Intratympanic Gentamicin Instillation.\n \n \n \n\n\n \n Tomlinson, R., Allison, R., Eizenman, M., Nedzelski, J., & Sharpe, J.\n\n\n \n\n\n\n Poster presented at the XIXth Meeting of the Barany Society, Sydney, Australia, 08 1996.\n \n\n\n\n
\n
@misc{Tomlinson:1996wl,\n\tauthor = {Tomlinson, R.D. and Allison, R.S. and Eizenman, M. and Nedzelski, J. and Sharpe, J.A.},\n\tdate-added = {2011-05-09 17:07:12 -0400},\n\tdate-modified = {2011-05-18 16:31:10 -0400},\n\thowpublished = {Poster presented at the XIXth Meeting of the Barany Society, Sydney, Australia},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tmonth = {08},\n\ttitle = {VOR Deficits During Rapid Head Turns Following Intratympanic Gentamicin Instillation},\n\tyear = {1996}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n The role of visual cues on spatial orientation in weightlessness: Neurolab E136.\n \n \n \n\n\n \n Groen, E., Jenkin, H., Allison, R., Zacher, J., & Howard, I.\n\n\n \n\n\n\n Poster presented at the Institute for Space and Terrestrial Science Annual General Meeting, Toronto, Ontario, 09 1996.\n \n\n\n\n
\n
@misc{Groen:1996vo,\n\tauthor = {Groen, E. and Jenkin, H. and Allison, R.S. and Zacher, J.E. and Howard, I.P.},\n\tdate-added = {2011-05-09 17:04:44 -0400},\n\tdate-modified = {2011-05-22 18:06:04 -0400},\n\thowpublished = {Poster presented at the Institute for Space and Terrestrial Science Annual General Meeting, Toronto, Ontario},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tmonth = {09},\n\ttitle = {The role of visual cues on spatial orientation in weightlessness: Neurolab E136},\n\tyear = {1996}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Role of visual cues in microgravity spatial orientation.\n \n \n \n\n\n \n Jenkin, H., Zacher, J., Allison, R., Groen, E., & Howard, I.\n\n\n \n\n\n\n Poster presented at the Institute for Space and Terrestrial Science AGM, Toronto, Ontario, 10 1996.\n \n\n\n\n
\n
@misc{Jenkin:1996tp,\n\tauthor = {Jenkin, H. and Zacher, J.E. and Allison, R.S. and Groen, E. and Howard, I.P.},\n\tdate-added = {2011-05-09 17:03:41 -0400},\n\tdate-modified = {2011-05-18 16:14:28 -0400},\n\thowpublished = {Poster presented at the Institute for Space and Terrestrial Science AGM, Toronto, Ontario},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tmonth = {10},\n\ttitle = {Role of visual cues in microgravity spatial orientation},\n\tyear = {1996}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Effects of stimulus size and eccentricity on vertical vergence.\n \n \n \n\n\n \n Fang, X., Allison, R., Zacher, J., & Howard, I.\n\n\n \n\n\n\n Poster presented at the Institute for Space and Terrestrial Science Annual General Meeting, Toronto, Ontario, 09 1996.\n \n\n\n\n
\n
@misc{Fang:1996pt,\n\tauthor = {Fang, X. and Allison, R.S. and Zacher, J.E. and Howard, I.P.},\n\tdate-added = {2011-05-09 17:00:17 -0400},\n\tdate-modified = {2011-05-18 15:51:55 -0400},\n\thowpublished = {Poster presented at the Institute for Space and Terrestrial Science Annual General Meeting, Toronto, Ontario},\n\tkeywords = {Vergence},\n\tmonth = {09},\n\ttitle = {Effects of stimulus size and eccentricity on vertical vergence},\n\tyear = {1996}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Dynamic Response of the Vertical Vergence System.\n \n \n \n\n\n \n Allison, R., Zacher, J. E., & Howard, I. P.\n\n\n \n\n\n\n Poster presented at the annual conference for the Association for Research in Vision and Ophthalmology, Fort Lauderdale, Florida, 04 1996.\n \n\n\n\n
\n
@misc{Allison:1996ky,\n\tauthor = {Allison, R.S. and Zacher, J. E. and Howard, I. P.},\n\tdate-added = {2011-05-09 16:50:11 -0400},\n\tdate-modified = {2011-05-18 15:48:31 -0400},\n\thowpublished = {Poster presented at the annual conference for the Association for Research in Vision and Ophthalmology, Fort Lauderdale, Florida},\n\tkeywords = {Vergence},\n\tmonth = {04},\n\ttitle = {Dynamic Response of the Vertical Vergence System},\n\tyear = {1996}}\n\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n\n\n\n
\n
\n\n
\n
\n  \n 1995\n \n \n (3)\n \n \n
\n
\n \n \n
\n
\n  \n incollection\n \n \n (4)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n Illusory self-tilt and roll-vection in a tumbling room.\n \n \n \n\n\n \n Allison, R., Zacher, J. E., & Howard, I. P.\n\n\n \n\n\n\n In ICVC Abstracts. International Conference on Visual Coding, volume 95, pages 1. 1995.\n \n\n\n\n
\n
@incollection{Allison:1995sa,\n\tauthor = {Allison, R.S. and Zacher, J. E. and Howard, I. P.},\n\tbooktitle = {ICVC Abstracts. International Conference on Visual Coding},\n\tdate-added = {2011-05-09 11:34:19 -0400},\n\tdate-modified = {2012-07-02 22:43:52 -0400},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tnumber = {2},\n\tpages = {1},\n\ttitle = {Illusory self-tilt and roll-vection in a tumbling room},\n\tvolume = {95},\n\tyear = {1995}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Stereoscopic vision in flight simulators.\n \n \n \n\n\n \n Allison, R., & Howard, I.\n\n\n \n\n\n\n In Institute for Space and Terrestrial Science Showcase 1995: Networking for Profit. Toronto, Canada, 1995.\n \n\n\n\n
\n
@incollection{Allison:1995xv,\n\taddress = {Toronto, Canada},\n\tauthor = {Allison, R.S. and Howard, I.P.},\n\tbooktitle = {Institute for Space and Terrestrial Science Showcase 1995: Networking for Profit},\n\tdate-added = {2011-05-09 11:32:59 -0400},\n\tdate-modified = {2012-07-02 22:45:34 -0400},\n\tkeywords = {Stereopsis},\n\ttitle = {Stereoscopic vision in flight simulators},\n\tyear = {1995}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n The effect of active movement on postrotary nystagmus and illusory body rotation.\n \n \n \n\n\n \n Zacher, J. E., Allison, R., & Howard, I. P.\n\n\n \n\n\n\n In Investigative Ophthalmology and Visual Science, volume 36(4), pages S685. 1995.\n \n\n\n\n
\n
@incollection{Zacher:1995sz,\n\tauthor = {Zacher, J. E. and Allison, R.S. and Howard, I. P.},\n\tbooktitle = {Investigative Ophthalmology and Visual Science, 36(4):S685},\n\tdate-added = {2011-05-09 10:49:24 -0400},\n\tdate-modified = {2011-05-18 16:18:40 -0400},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tnumber = {4},\n\ttitle = {The effect of active movement on postrotary nystagmus and illusory body rotation},\n\tvolume = {36},\n\tyear = {1995}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n The Effect of Field Size on Roll Vection in a Tumbling Room.\n \n \n \n\n\n \n Allison, R., Zacher, J. E., Howard, I. P., & Oman, C. M.\n\n\n \n\n\n\n In Investigative Ophthalmology and Visual Science, volume 36, pages S829-S829. 1995.\n \n\n\n\n
\n
@incollection{allison1995S829-S829,\n\tauthor = {Allison, R.S. and Zacher, J. E. and Howard, I. P. and Oman, C. M.},\n\tbooktitle = {Investigative Ophthalmology and Visual Science},\n\tdate-modified = {2011-09-12 21:55:01 -0400},\n\tjournal = {Investigative Ophthalmology and Visual Science},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tnumber = {4},\n\tpages = {S829-S829},\n\ttitle = {The Effect of Field Size on Roll Vection in a Tumbling Room},\n\tvolume = {36},\n\tyear = {1995}}\n\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n misc\n \n \n (3)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n The integration of postrotatory nystagmus and illusionary body rotation.\n \n \n \n\n\n \n Zacher, J. E., Allison, R., & Howard, I. P.\n\n\n \n\n\n\n Poster presented at the International Conference on Visual Coding, (ICVC), North York, Ontario, 06 1995.\n \n\n\n\n
\n
@misc{Zacher:1995zi,\n\tauthor = {Zacher, J. E. and Allison, R.S. and Howard, I. P.},\n\tdate-added = {2011-05-09 17:12:03 -0400},\n\tdate-modified = {2011-05-18 16:20:55 -0400},\n\thowpublished = {Poster presented at the International Conference on Visual Coding, (ICVC), North York, Ontario},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tmonth = {06},\n\ttitle = {The integration of postrotatory nystagmus and illusionary body rotation},\n\tyear = {1995}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n The effect of active movement on postrotary nystagmus and illusory body rotation.\n \n \n \n\n\n \n Zacher, J. E., Allison, R., & Howard, I. P.\n\n\n \n\n\n\n Poster presented at the annual meeting of the Association for Research in Vision and Ophthalmology, (ARVO), Fort Lauderdale, Florida, 05 1995.\n \n\n\n\n
\n
@misc{Zacher:1995tz,\n\tauthor = {Zacher, J. E. and Allison, R.S. and Howard, I. P.},\n\tdate-added = {2011-05-09 17:09:54 -0400},\n\tdate-modified = {2011-05-18 16:18:30 -0400},\n\thowpublished = {Poster presented at the annual meeting of the Association for Research in Vision and Ophthalmology, (ARVO), Fort Lauderdale, Florida},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tmonth = {05},\n\ttitle = {The effect of active movement on postrotary nystagmus and illusory body rotation},\n\tyear = {1995}}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n The effect of field size on roll vection in a tumbling room.\n \n \n \n\n\n \n Allison, R., Zacher, J. E., & Howard, I. P.\n\n\n \n\n\n\n Paper presented at the annual conference for the Association for Research in Vision and Ophthalmology, Fort Lauderdale, Florida, 05 1995.\n \n\n\n\n
\n
@misc{Allison:1995pc,\n\tauthor = {Allison, R.S. and Zacher, J. E. and Howard, I. P.},\n\tdate-added = {2011-05-09 16:52:10 -0400},\n\tdate-modified = {2011-05-22 18:06:40 -0400},\n\thowpublished = {Paper presented at the annual conference for the Association for Research in Vision and Ophthalmology, Fort Lauderdale, Florida},\n\tkeywords = {Optic flow & Self Motion (also Locomotion & Aviation)},\n\tmonth = {05},\n\ttitle = {The effect of field size on roll vection in a tumbling room},\n\tyear = {1995}}\n\n
techreport (1)
Report on the effect of field size, head motion, and rotational velocity on roll vection and illusory self-tilt in a tumbling room. Allison, R., Zacher, J., & Howard, I. P. Technical Report 20040141722, NASA, 1995.
@techreport{Allison:1995fl,
  author = {Allison, R.S. and Zacher, J. and Howard, I. P.},
  date-added = {2011-05-09 16:35:15 -0400},
  date-modified = {2011-05-18 16:12:38 -0400},
  institution = {NASA},
  keywords = {Optic flow & Self Motion (also Locomotion & Aviation)},
  number = {20040141722},
  title = {Report on the effect of field size, head motion, and rotational velocity on roll vection and illusory self-tilt in a tumbling room},
  year = {1995}}
1994 (2)
incollection (1)
High frequency, high acceleration testing of the VOR. Allison, R., Eizenman, M., Tomlinson, R., & Sharpe, J. In Advances in Biomedical Engineering and Biosystems Science. Toronto, Canada, 06 1994.
@incollection{Allison:1994qe,
  abstract = {Rotational testing of the vestibulo-ocular reflex (VOR) does not always correlate with patients' symptoms. One possible reason is that conventional testing is performed at low frequencies and relatively low velocities that do not correspond to the high frequency perturbations encountered during locomotion. We present a combined head-eye tracking system suitable for use with free head movement during natural activities. The system was used to study the response to rapid passive head turns in normal subjects and patients with unilateral lesions. The patients have marked, persistent VOR deficits for rotation toward the side of lesion. The implications of these results on the organization of the normal VOR and the process of VOR compensation are discussed.},
  address = {Toronto, Canada},
  author = {Allison, R.S. and Eizenman, M. and Tomlinson, R.D. and Sharpe, J.A.},
  booktitle = {Advances in Biomedical Engineering and Biosystems Science},
  date-added = {2011-05-09 11:36:10 -0400},
  date-modified = {2014-02-03 14:37:56 +0000},
  keywords = {Eye Movements & Tracking},
  month = {06},
  title = {High frequency, high acceleration testing of the VOR},
  year = {1994}}
inproceedings (1)
Head and Eye Tracking for Study of the VOR During Natural Head Motion. Allison, R., Eizenman, M., Tomlinson, R. D., Sharpe, J., Frecker, R. C., Anderson, J., & McIlmoy, L. In Sheppard, N. F., Eden, M., & Kantor, G., editors, Proceedings of the 16th Annual International Conference of the IEEE Engineering in Medicine and Biology Society - Engineering Advances: New Opportunities for Biomedical Engineers, Pts 1 and 2, pages 267-268, New York, 1994. IEEE.
@inproceedings{allison1994267-268,
  abstract = {Rotational testing of the vestibulo-ocular reflex (VOR) does not always correlate with patients' symptoms. One possible reason is that conventional testing is performed at low frequencies and relatively low velocities that do not correspond to the high frequency perturbations encountered during locomotion. The authors present a combined head-eye tracking system suitable for use with free head movement during natural activities. The system was used to study the response to rapid passive head turns in normal subjects and patients with unilateral lesions. The patients have marked, persistent VOR deficits for rotation toward the side of lesion. The implications of these results on the organization of the normal VOR and the process of VOR compensation are discussed.},
  address = {New York},
  author = {Allison, R.S. and Eizenman, M. and Tomlinson, R. D. and Sharpe, J. and Frecker, R. C. and Anderson, J. and McIlmoy, L.},
  booktitle = {Proceedings of the 16th Annual International Conference of the IEEE Engineering in Medicine and Biology Society - Engineering Advances: New Opportunities for Biomedical Engineers, Pts 1 and 2},
  date-modified = {2014-02-03 14:36:42 +0000},
  doi = {10.1109/IEMBS.1994.412052},
  editor = {Sheppard, N. F. and Eden, M. and Kantor, G.},
  keywords = {Eye Movements & Tracking},
  pages = {267-268},
  publisher = {IEEE},
  title = {Head and Eye Tracking for Study of the VOR During Natural Head Motion},
  year = {1994},
  url-1 = {https://doi.org/10.1109/IEMBS.1994.412052}}
in press (1)
article (1)
The Effects of Long-Term Exposure to Microgravity and Body Orientation Relative to Gravity on Perceived Traveled Distance. Harris, L., Jörges, B., Bury, N., McManus, M., Bensal, A., Allison, R., & Jenkin, M. NPG Microgravity. in press.
@article{Harris:2024aa,
  author = {Laurence Harris and Bj{\"o}rn J{\"o}rges and Nils Bury and Meaghan McManus and Ambika Bensal and Robert Allison and Michael Jenkin},
  date-added = {2024-03-04 22:15:18 -0500},
  date-modified = {2024-03-04 22:16:56 -0500},
  journal = {NPG Microgravity},
  keywords = {Optic flow & Self Motion (also Locomotion & Aviation)},
  title = {The Effects of Long-Term Exposure to Microgravity and Body Orientation Relative to Gravity on Perceived Traveled Distance},
  year = {in press}}
\n"}; document.write(bibbase_data.data);