Marticorena, D. C., Lu, Z., Wissmann, C., Agarwal, Y., Garrison, D., Zempel, J. M., & Barbour, D. L. (February 2025). Immersive virtual games: winners for deep cognitive assessment. arXiv:2502.10290 [cs].
@misc{marticorena_immersive_2025,
	title = {Immersive virtual games: winners for deep cognitive assessment},
	shorttitle = {Immersive virtual games},
	url = {http://arxiv.org/abs/2502.10290},
	doi = {10.48550/arXiv.2502.10290},
	abstract = {Studies of human cognition often rely on brief, highly controlled tasks that emphasize group-level effects but poorly capture the rich variability within and between individuals. Here, we present PixelDOPA, a suite of minigames designed to overcome these limitations by embedding classic cognitive task paradigms in an immersive 3D virtual environment with continuous behavior logging. Four minigames explore overlapping constructs such as processing speed, rule shifting, inhibitory control and working memory, comparing against established NIH Toolbox tasks. Across a clinical sample of 60 participants collected outside a controlled laboratory setting, we found significant, large correlations (r = 0.50-0.93) between the PixelDOPA tasks and NIH Toolbox counterparts, despite differences in stimuli and task structures. Process-informed metrics (e.g., gaze-based response times derived from continuous logging) substantially improved both task convergence and data quality. Test-retest analyses revealed high reliability (ICC = 0.50-0.92) for all minigames. Beyond endpoint metrics, movement and gaze trajectories revealed stable, idiosyncratic profiles of gameplay strategy, with unsupervised clustering distinguishing subjects by their navigational and viewing behaviors. These trajectory-based features showed lower within-person variability than between-person variability, facilitating player identification across repeated sessions. Game-based tasks can therefore retain the psychometric rigor of standard cognitive assessments while providing new insights into dynamic behaviors. By leveraging a highly engaging, fully customizable game engine, we show that comprehensive behavioral tracking boosts the power to detect individual differences--offering a path toward cognitive measures that are both robust and ecologically valid, even in less-than-ideal settings for data collection.},
	urldate = {2025-02-26},
	publisher = {arXiv},
	author = {Marticorena, Dom CP and Lu, Zeyu and Wissmann, Chris and Agarwal, Yash and Garrison, David and Zempel, John M. and Barbour, Dennis L.},
	month = feb,
	year = {2025},
	note = {arXiv:2502.10290 [cs]},
	keywords = {Computer Science - Human-Computer Interaction},
}
