Scalable Probabilistic Modeling of Working Memory Performance. Rojo, M., Maddula, P., Fu, D., Guo, M., Zheng, E., Grande, Á., Pahor, A., Jaeggi, S., Seitz, A., Goffney, I., Ramani, G., Gardner, J. R., & Barbour, D. November, 2023. Publisher: OSF
Abstract: A standard approach for evaluating a cognitive variable involves designing a test procedure targeting that variable and then validating test results in a sample population. To extend this functionality to other variables, additional tests are designed and validated in the same way. Test batteries are constructed by concatenating individual tests. This approach is convenient for the designer because it is modular. However, it is not scalable because total testing time grows proportionally with test count, limiting the practical size of a test battery. Cross-test models can inform the relationships between explicit or implicit cognitive variables but do not shorten test time and cannot readily accommodate subpopulations who exhibit different relationships than average. An alternate modeling framework using probabilistic machine learning can rectify these shortcomings, resulting in item-level prediction from individualized models while requiring fewer data points than current methods. To validate this approach, a Gaussian process probabilistic classifier was used to model young adult and simulated spatial working memory task performance as a psychometric function. This novel test instrument was evaluated for accuracy, reliability and efficiency relative to a conventional method recording the maximum spatial sequence length recalled. The novel method exhibited extremely low bias, as well as test-retest reliability 30% higher than the conventional method under standard testing conditions. Efficiency was consistent with other adaptive psychometric threshold estimation strategies, with 30–50 samples needed for consistently reliable estimates. While these results demonstrate that similar spatial working memory tasks can be effectively modeled as psychometric functions by any method, the advantage of the novel method is that it is scalable to accommodate much more complex models, such as those including additional executive functions. Further, it was designed with tremendous flexibility to incorporate informative theory, ancillary data, previous cohort performance, previous individual performance, and/or current individual performance for improved predictions. The result is a promising method for behavioral modeling that can be readily extended to capture complex individual task performance.
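The abstract describes modeling item-level (correct/incorrect) span-task responses as a psychometric function of sequence length with a Gaussian process classifier. The sketch below is only a rough illustration of that general idea, not the authors' implementation: it uses scikit-learn's GaussianProcessClassifier on simulated trial data, and the simulation parameters, threshold read-off, and all variable names are assumptions introduced here.

# Illustrative sketch (assumed setup, not the paper's code): fit a GP classifier
# to binary recall outcomes as a function of spatial sequence length, then read
# off an estimated span threshold from the fitted psychometric function.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Simulated trials: sequence length presented and whether it was recalled correctly.
rng = np.random.default_rng(0)
lengths = rng.integers(2, 10, size=40)                  # stimulus sequence lengths
true_threshold = 5.5                                    # hypothetical span threshold
p_correct = 1.0 / (1.0 + np.exp(lengths - true_threshold))
correct = rng.binomial(1, p_correct)                    # binary item-level outcomes

# Fit a GP classifier: the latent function maps length -> P(correct recall).
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=2.0))
gpc.fit(lengths.reshape(-1, 1), correct)

# Evaluate the estimated psychometric function on a grid of lengths and
# take the length at which predicted accuracy crosses 50% as the span estimate.
grid = np.linspace(2, 10, 81).reshape(-1, 1)
p_hat = gpc.predict_proba(grid)[:, 1]
threshold_est = grid[np.argmin(np.abs(p_hat - 0.5)), 0]
print(f"estimated span threshold ~= {threshold_est:.2f}")

In an adaptive variant like the one the abstract alludes to, each new trial's sequence length could be chosen near the current 50%-accuracy estimate, which is one way such methods reach stable estimates within a few dozen samples.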
@article{rojo_scalable_2023,
title = {Scalable {Probabilistic} {Modeling} of {Working} {Memory} {Performance}},
url = {https://osf.io/nq6yg},
doi = {10.31234/osf.io/nq6yg},
abstract = {A standard approach for evaluating a cognitive variable involves designing a test procedure targeting that variable and then validating test results in a sample population. To extend this functionality to other variables, additional tests are designed and validated in the same way. Test batteries are constructed by concatenating individual tests. This approach is convenient for the designer because it is modular. However, it is not scalable because total testing time grows proportionally with test count, limiting the practical size of a test battery. Cross-test models can inform the relationships between explicit or implicit cognitive variables but do not shorten test time and cannot readily accommodate subpopulations who exhibit different relationships than average. An alternate modeling framework using probabilistic machine learning can rectify these shortcomings, resulting in item-level prediction from individualized models while requiring fewer data points than current methods. To validate this approach, a Gaussian process probabilistic classifier was used to model young adult and simulated spatial working memory task performance as a psychometric function. This novel test instrument was evaluated for accuracy, reliability and efficiency relative to a conventional method recording the maximum spatial sequence length recalled. The novel method exhibited extremely low bias, as well as test-retest reliability 30\% higher than the conventional method under standard testing conditions. Efficiency was consistent with other adaptive psychometric threshold estimation strategies, with 30–50 samples needed for consistently reliable estimates. While these results demonstrate that similar spatial working memory tasks can be effectively modeled as psychometric functions by any method, the advantage of the novel method is that it is scalable to accommodate much more complex models, such as those including additional executive functions. Further, it was designed with tremendous flexibility to incorporate informative theory, ancillary data, previous cohort performance, previous individual performance, and/or current individual performance for improved predictions. The result is a promising method for behavioral modeling that can be readily extended to capture complex individual task performance.},
language = {en-us},
urldate = {2024-01-21},
author = {Rojo, Mariluz and Maddula, Pranav and Fu, Dan and Guo, Michael and Zheng, Ethan and Grande, Álvaro and Pahor, Anja and Jaeggi, Susanne and Seitz, Aaron and Goffney, Imani and Ramani, Geetha and Gardner, Jacob R. and Barbour, Dennis},
month = nov,
year = {2023},
note = {Publisher: OSF},
}