Publications (generated by bibbase.org)

2024 (1)

Marticorena, D. C. P.; Wong, Q. W.; Browning, J.; Wilbur, K.; Jayakumar, S.; Davey, P. G.; Seitz, A. R.; Gardner, J. R.; and Barbour, D. L. Contrast response function estimation with nonparametric Bayesian active learning. Journal of Vision, 24(1): 6. January 2024. doi:10.1167/jov.24.1.6

Abstract: Multidimensional psychometric functions can typically be estimated nonparametrically for greater accuracy or parametrically for greater efficiency. By recasting the estimation problem from regression to classification, however, powerful machine learning tools can be leveraged to provide an adjustable balance between accuracy and efficiency. Contrast sensitivity functions (CSFs) are behaviorally estimated curves that provide insight into both peripheral and central visual function. Because estimation can be impractically long, current clinical workflows must make compromises such as limited sampling across spatial frequency or strong assumptions on CSF shape. This article describes the development of the machine learning contrast response function (MLCRF) estimator, which quantifies the expected probability of success in performing a contrast detection or discrimination task. A machine learning CSF can then be derived from the MLCRF. Using simulated eyes created from canonical CSF curves and actual human contrast response data, the accuracy and efficiency of the machine learning contrast sensitivity function (MLCSF) was evaluated to determine its potential utility for research and clinical applications. With stimuli selected randomly, the MLCSF estimator converged slowly toward ground truth. With optimal stimulus selection via Bayesian active learning, convergence was nearly an order of magnitude faster, requiring only tens of stimuli to achieve reasonable estimates. Inclusion of an informative prior provided no consistent advantage to the estimator as configured. MLCSF achieved efficiencies on par with quickCSF, a conventional parametric estimator, but with systematically higher accuracy. Because MLCSF design allows accuracy to be traded off against efficiency, it should be explored further to uncover its full potential.
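
For readers unfamiliar with this family of methods, the sketch below illustrates the general pattern the abstract describes: treat each trial as a binary outcome, fit a probabilistic classifier over the stimulus space, and actively choose the next stimulus where the predicted outcome is most uncertain. It is a minimal illustration, not the authors' MLCRF/MLCSF implementation; the simulated observer, RBF kernel, stimulus grid, and entropy-based acquisition rule are all assumptions.

import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Candidate stimuli: (log spatial frequency, contrast in dB); values assumed
freqs = np.linspace(-1.0, 1.5, 15)
contrasts = np.linspace(-40.0, 0.0, 15)
grid = np.array([[f, c] for f in freqs for c in contrasts])

def simulated_observer(x):
    # Toy inverted-U contrast threshold; success probability rises with contrast
    threshold = -25.0 + 8.0 * (x[0] - 0.3) ** 2
    p = 1.0 / (1.0 + np.exp(-(x[1] - threshold) / 2.0))
    return bool(rng.random() < p)

# Seed with random trials until both outcome classes are present
X, y = [], []
while len(set(y)) < 2:
    x = grid[rng.integers(len(grid))]
    X.append(x)
    y.append(simulated_observer(x))

# Active-learning loop: refit, then query the most uncertain stimulus
for _ in range(30):
    gpc = GaussianProcessClassifier(kernel=1.0 * RBF([0.5, 5.0]))
    gpc.fit(np.array(X), np.array(y))
    p = gpc.predict_proba(grid)[:, 1]
    H = -(p * np.log(p + 1e-9) + (1 - p) * np.log(1 - p + 1e-9))
    x_next = grid[int(np.argmax(H))]
    X.append(x_next)
    y.append(simulated_observer(x_next))

# A CSF can then be read off as the contrast where p crosses 0.5 per frequency
P = gpc.predict_proba(grid)[:, 1].reshape(len(freqs), len(contrasts))
csf = contrasts[np.abs(P - 0.5).argmin(axis=1)]
print(np.round(csf, 1))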

2023 (4)

Feng, Y.; Pahor, A.; Seitz, A. R.; Barbour, D. L.; and Jaeggi, S. M. Unicorn, Hare, or Tortoise? Using Machine Learning to Predict Working Memory Training Performance. Journal of Cognition, 6(1): 53. 2023. doi:10.5334/joc.319

Abstract: People differ considerably in the extent to which they benefit from working memory (WM) training. Although there is increasing research focusing on individual differences associated with WM training outcomes, we still lack an understanding of which specific individual differences, and in what combination, contribute to inter-individual variations in training trajectories. In the current study, 568 undergraduates completed one of several N-back intervention variants over the course of two weeks. Participants' training trajectories were clustered into three distinct training patterns (high performers, intermediate performers, and low performers). We applied machine-learning algorithms to train a binary tree model to predict individuals' training patterns relying on several individual difference variables that have been identified as relevant in previous literature. These individual difference variables included pre-existing cognitive abilities, personality characteristics, motivational factors, video game experience, health status, bilingualism, and socioeconomic status. We found that our classification model showed good predictive power in distinguishing between high performers and relatively lower performers. Furthermore, we found openness and pre-existing WM capacity to be the two most important factors in distinguishing between high and low performers. However, among low performers, openness and video game background were the most significant predictors of their learning persistence. In conclusion, it is possible to predict individual training performance using participant characteristics before training, which could inform the development of personalized interventions.
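
As a rough illustration of the binary-tree classification approach this abstract describes (not the study's actual model or features), the following sketch fits a shallow decision tree to synthetic individual-difference data and reports feature importances; every variable and the labeling rule are fabricated for demonstration.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 568  # cohort size borrowed from the abstract; the data below are synthetic

# Hypothetical predictors: openness, baseline WM span, motivation, gaming hours
X = np.column_stack([
    rng.normal(3.5, 0.6, n),
    rng.normal(5.0, 1.2, n),
    rng.normal(0.0, 1.0, n),
    rng.gamma(2.0, 3.0, n),
])
# Toy labeling rule in which openness and baseline WM drive training success
score = 0.8 * X[:, 0] + 0.6 * X[:, 1] + rng.normal(0.0, 1.0, n)
y = (score > np.median(score)).astype(int)  # 1 = high performer

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", round(tree.score(X_te, y_te), 2))
print("feature importances:", np.round(tree.feature_importances_, 2))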

Seitzman, B. A.; Anandarajah, H.; Dworetsky, A.; McMichael, A.; Coalson, R. S.; Agamah, A. M.; Jiang, C.; Gu, H.; Barbour, D. L.; Schlaggar, B. L.; Limbrick, D. D.; Rubin, J. B.; Shimony, J. S.; and Perkins, S. M. Cognitive deficits and altered functional brain network organization in pediatric brain tumor patients. Brain Imaging and Behavior, 17(6): 689–701. December 2023. doi:10.1007/s11682-023-00798-y

Abstract: Survivors of pediatric brain tumors experience significant cognitive deficits from their diagnosis and treatment. The exact mechanisms of cognitive injury are poorly understood, and validated predictors of long-term cognitive outcome are lacking. Resting state functional magnetic resonance imaging allows for the study of the spontaneous fluctuations in bulk neural activity, providing insight into brain organization and function. Here, we evaluated cognitive performance and functional network architecture in pediatric brain tumor patients. Forty-nine patients (7-18 years old) with a primary brain tumor diagnosis underwent resting state imaging during regularly scheduled clinical visits. All patients were tested with a battery of cognitive assessments. Extant data from 139 typically developing children were used as controls. We found that obtaining high-quality imaging data during routine clinical scanning was feasible. Functional network organization was significantly altered in patients, with the largest disruptions observed in patients who received propofol sedation. Awake patients demonstrated significant decreases in association network segregation compared to controls. Interestingly, there was no difference in the segregation of sensorimotor networks. With a median follow-up of 3.1 years, patients demonstrated cognitive deficits in multiple domains of executive function. Finally, there was a weak correlation between decreased default mode network segregation and poor picture vocabulary score. Future work with longer follow-up, longitudinal analyses, and a larger cohort will provide further insight into this potential predictor.

Rojo, M.; Wong, Q. W.; Pahor, A.; Seitz, A.; Jaeggi, S.; Ramani, G.; Goffney, I.; Gardner, J. R.; and Barbour, D. Accelerating Executive Function Assessments With Group Sequential Designs. OSF Preprints, November 2023. doi:10.31234/osf.io/mbtgk. https://osf.io/mbtgk

Abstract: Inferences about executive functions (EFs) are commonly drawn via lengthy serial administration of simple independent assessments. Classical methods for EF estimation often require excessive measurements and provide little or no flexibility to dynamically adjust test length for each individual. In order to decrease test duration and mitigate respondent burden, active testing modalities that incorporate more efficient data collection strategies are indispensable. To this end, we propose sequential analysis to improve upon traditional testing methods in behavioral science. In this paper, we show that sequential testing can be used to rapidly screen for a difference in the EF of a given individual with respect to a baseline level. In cognitive tests consisting of repeated identical tasks, a sequential framework can be utilized to actively detect significant differences in cognitive performance with high confidence more rapidly than conventional non-sequential approaches. Ultimately, sequential analysis could be applied to a variety of problems in cognitive and perceptual domains to improve efficiency gains and achieve substantial test length reduction.
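
The following sketch shows one concrete form a sequential screen can take: Wald's sequential probability ratio test deciding between a baseline accuracy and a degraded alternative, stopping as soon as the evidence is strong enough. The specific accuracies and error rates are assumptions for illustration; the paper's group sequential designs differ in detail.

import numpy as np

def sprt(responses, p0=0.85, p1=0.65, alpha=0.05, beta=0.05):
    # Decide between H0 (baseline accuracy p0) and H1 (degraded accuracy p1)
    upper = np.log((1 - beta) / alpha)   # cross above: accept H1
    lower = np.log(beta / (1 - alpha))   # cross below: accept H0
    llr = 0.0
    for t, correct in enumerate(responses, start=1):
        llr += np.log(p1 / p0) if correct else np.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "H1 (degraded)", t
        if llr <= lower:
            return "H0 (baseline)", t
    return "undecided", len(responses)

rng = np.random.default_rng(2)
trials = rng.random(200) < 0.85  # simulated respondent performing at baseline
print(sprt(trials))  # typically stops well before 200 trials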

Rojo, M.; Maddula, P.; Fu, D.; Guo, M.; Zheng, E.; Grande, Á.; Pahor, A.; Jaeggi, S.; Seitz, A.; Goffney, I.; Ramani, G.; Gardner, J. R.; and Barbour, D. Scalable Probabilistic Modeling of Working Memory Performance. OSF Preprints, November 2023. doi:10.31234/osf.io/nq6yg. https://osf.io/nq6yg

Abstract: A standard approach for evaluating a cognitive variable involves designing a test procedure targeting that variable and then validating test results in a sample population. To extend this functionality to other variables, additional tests are designed and validated in the same way. Test batteries are constructed by concatenating individual tests. This approach is convenient for the designer because it is modular. However, it is not scalable because total testing time grows proportionally with test count, limiting the practical size of a test battery. Cross-test models can inform the relationships between explicit or implicit cognitive variables but do not shorten test time and cannot readily accommodate subpopulations who exhibit different relationships than average. An alternate modeling framework using probabilistic machine learning can rectify these shortcomings, resulting in item-level prediction from individualized models while requiring fewer data points than current methods. To validate this approach, a Gaussian process probabilistic classifier was used to model young adult and simulated spatial working memory task performance as a psychometric function. This novel test instrument was evaluated for accuracy, reliability and efficiency relative to a conventional method recording the maximum spatial sequence length recalled. The novel method exhibited extremely low bias, as well as test-retest reliability 30% higher than the conventional method under standard testing conditions. Efficiency was consistent with other adaptive psychometric threshold estimation strategies, with 30–50 samples needed for consistently reliable estimates. While these results demonstrate that similar spatial working memory tasks can be effectively modeled as psychometric functions by any method, the advantage of the novel method is that it is scalable to accommodate much more complex models, such as those including additional executive functions. Further, it was designed with tremendous flexibility to incorporate informative theory, ancillary data, previous cohort performance, previous individual performance, and/or current individual performance for improved predictions. The result is a promising method for behavioral modeling that can be readily extended to capture complex individual task performance.

2021 (6)

Wasmann, J.; and Barbour, D. L. Emerging Hearing Assessment Technologies For Patient Care. The Hearing Journal, 74(3): 44–45. March 2021. https://journals.lww.com/thehearingjournal/blog/OnlineFirst/pages/post.aspx?PostID=93

Barbour, D. L.; and Wasmann, J. Performance and Potential of Machine Learning Audiometry. The Hearing Journal, 74(3): 40–44. March 2021. https://journals.lww.com/thehearingjournal/blog/OnlineFirst/pages/post.aspx?PostID=92

Larsen, T.; Malkomes, G.; and Barbour, D. L. Accelerating Psychometric Screening Tests with Prior Information. In Shaban-Nejad, A.; Michalowski, M.; and Buckeridge, D. L., editors, Explainable AI in Healthcare and Medicine: Building a Culture of Transparency and Accountability, Studies in Computational Intelligence, pages 305–311. Springer International Publishing, Cham, 2021. doi:10.1007/978-3-030-53352-6_29

Abstract: Classical methods for psychometric function estimation either require excessive measurements or produce only a low-resolution approximation of the target psychometric function. In this paper, we propose solutions for rapid high-resolution approximation of the psychometric function of a patient given her or his prior exam. We develop a rapid screening algorithm for a change in the psychometric function estimation of a patient. We use Bayesian active model selection to perform an automated pure-tone audiometry test with the goal of quickly finding if the current estimation will be different from the previous one. We validate our methods using audiometric data from the National Institute for Occupational Safety and Health (NIOSH). Initial results indicate that with a few tones we can (i) detect if the patient's audiometric function has changed between the two test sessions with high confidence, and (ii) learn high-resolution approximations of the target psychometric function.
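
A minimal sketch of the screening idea, under heavy assumptions: use the previous exam's predicted detection probabilities as a "no change" model, present the tones that exam is least certain about, and accumulate a Bayes factor against a deliberately vague "changed" alternative. The thresholds, psychometric slope, and stopping rule below are stand-ins, not the Bayesian active model selection machinery of the chapter.

import numpy as np

rng = np.random.default_rng(3)
PREV_THRESHOLD = {0.25: 15, 0.5: 15, 1: 20, 2: 30, 4: 45, 8: 50}  # dB HL, assumed

def p_detect_prev(freq_khz, level_db):
    # Stand-in posterior predictive from the previous exam: a sigmoid around
    # the stored threshold at each frequency
    return 1.0 / (1.0 + np.exp(-(level_db - PREV_THRESHOLD[freq_khz]) / 3.0))

def true_response(freq_khz, level_db):
    # Simulated patient whose high-frequency hearing has worsened by 15 dB
    shift = 15 if freq_khz >= 2 else 0
    p = 1.0 / (1.0 + np.exp(-(level_db - PREV_THRESHOLD[freq_khz] - shift) / 3.0))
    return rng.random() < p

# Present tones in order of how uncertain the previous exam is about them
tones = [(f, l) for f in PREV_THRESHOLD for l in (20, 35, 50)]
tones.sort(key=lambda t: abs(p_detect_prev(*t) - 0.5))

log_bf = 0.0  # log Bayes factor: "changed" (vague p = 0.5) vs "no change"
for n, (f, l) in enumerate(tones, start=1):
    heard = true_response(f, l)
    p0 = p_detect_prev(f, l)
    p1 = 0.5
    log_bf += np.log((p1 if heard else 1 - p1) / (p0 if heard else 1 - p0))
    if abs(log_bf) > np.log(20):  # "strong evidence" cutoff, assumed
        break
print(f"log Bayes factor after {n} tones: {log_bf:+.2f}")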

Lenze, E. J.; Nicol, G. E.; Barbour, D. L.; Kannampallil, T.; Wong, A. W. K.; Piccirillo, J.; Drysdale, A. T.; Sylvester, C. M.; Haddad, R.; Miller, J. P.; Low, C. A.; Lenze, S. N.; Freedland, K. E.; and Rodebaugh, T. L. Precision Clinical Trials: A Framework for Getting to Precision Medicine for Neurobehavioural Disorders. Journal of Psychiatry & Neuroscience (JPN), 46(1): E97–E110. January 2021. doi:10.1503/jpn.200042

Abstract: The goal of precision medicine (individually tailored treatments) is not being achieved for neurobehavioural conditions such as psychiatric disorders. Traditional randomized clinical trial methods are insufficient for advancing precision medicine because of the dynamic complexity of these conditions. We present a pragmatic solution: the precision clinical trial framework, encompassing methods for individually tailored treatments. This framework includes the following: (1) treatment-targeted enrichment, which involves measuring patients' response after a brief bout of an intervention, and then randomizing patients to a full course of treatment, using the acute response to predict long-term outcomes; (2) adaptive treatments, which involve adjusting treatment parameters during the trial to individually optimize the treatment; and (3) precise measurement, which involves measuring predictor and outcome variables with high accuracy and reliability using techniques such as ecological momentary assessment. This review summarizes precision clinical trials and provides a research agenda, including new biomarkers such as precision neuroimaging, transcranial magnetic stimulation-electroencephalogram digital phenotyping and advances in statistical and machine-learning models. Validation of these approaches, and then widespread incorporation of the precision clinical trial framework, could help achieve the vision of precision medicine for neurobehavioural conditions.

Barbour, D. L.; Song, X.; Ledbetter, N.; Gardner, J.; and Weinberger, K. Fast, Continuous Psychometric Estimation System Utilizing Machine Learning and Associated Method of Use. US Patent US11037677B2, June 2021. Assignee: Washington University in St. Louis. https://patents.google.com/patent/US11037677B2

Wasmann, J. A.; Lanting, C. P.; Huinck, W. J.; Mylanus, E. A. M.; van der Laak, J. W. M.; Govaerts, P. J.; Swanepoel, D. W.; Moore, D. R.; and Barbour, D. L. Computational Audiology: New Approaches to Advance Hearing Health Care in the Digital Age. Ear and Hearing, 42(6): 1499–1507. December 2021. doi:10.1097/AUD.0000000000001041

Abstract: The global digital transformation enables computational audiology for advanced clinical applications that can reduce the global burden of hearing loss. In this article, we describe emerging hearing-related artificial intelligence applications and argue for their potential to improve access, precision, and efficiency of hearing health care services. Also, we raise awareness of risks that must be addressed to enable a safe digital transformation in audiology. We envision a future where computational audiology is implemented via interoperable systems using shared data and where health care providers adopt expanded roles within a network of distributed expertise. This effort should take place in a health care system where privacy, responsibility of each stakeholder, and patients' safety and autonomy are all guarded by design.

2020 (9)

Barbour, D. L. Hidden Hearing Loss: Mixed Effects of Compensatory Plasticity. Current Biology, 30(23): R1433–R1436. December 2020. doi:10.1016/j.cub.2020.09.053

Abstract: Hidden hearing loss manifests as speech perception difficulties with normal hearing thresholds. A new study shows that neural compensation induced by this disorder may actually improve speech perception under narrow conditions within an overall profile of degradation.

Anandarajah, H.; Seitzman, B. A.; McMichael, A.; Dworetsky, A.; Coalson, R. S.; Jiang, C.; Gu, H.; Barbour, D. L.; Schlaggar, B. L.; Limbrick, D. D.; Rubin, J. B.; Shimony, J. S.; and Perkins, S. M. Cognitive Deficits and Altered Functional Brain Network Organization in Pediatric Brain Tumor Patients. bioRxiv, 2020.04.22.055459. May 2020. doi:10.1101/2020.04.22.055459

Abstract: Pediatric brain tumor survivors experience significant cognitive sequelae from their diagnosis and treatment. The exact mechanisms of cognitive injury are poorly understood, and validated predictors of long-term cognitive outcome are lacking. Large-scale, distributed brain systems provide a window into brain organization and function that may yield insight into these mechanisms and outcomes.

Here, we evaluated functional network architecture, cognitive performance, and brain-behavior relationships in pediatric brain tumor patients. Patients ages 4-18 years old with diagnosis of a brain tumor underwent awake resting state fMRI during regularly scheduled clinical visits and were tested with the NIH Toolbox Cognition Battery. We observed that functional network organization was significantly altered in patients compared to age- and sex-matched healthy controls, with the integrity of the dorsal attention network particularly affected. Moreover, patients demonstrated significant impairments in multiple domains of cognitive performance, including attention. Finally, a significant amount of variance of age-adjusted total composite scores from the Toolbox was explained by changes in segregation between the dorsal attention and default mode networks.

Our results suggest that changes in functional network organization may provide insight into long-term changes in cognitive function in pediatric brain tumor patients.

Anandarajah, H.; Jiang, C.; McMichael, A.; Dworetsky, A.; Coalson, R. S.; Gu, H.; Barbour, D. L.; Seitzman, B. A.; Schlaggar, B. L.; and Limbrick, D. D. Atypical Resting State Functional Connectivity and Deficits in Cognition in Pediatric Brain Tumor Patients Treated with Proton Beam Radiation. International Journal of Radiation Oncology, Biology, Physics, 108(3): S127–S128. 2020. doi:10.1016/j.ijrobp.2020.07.853

Heisey, K. L.; Walker, A. M.; Xie, K.; Abrams, J. M.; and Barbour, D. L. Dynamically Masked Audiograms With Machine Learning Audiometry. Ear and Hearing, 41(6): 1692–1702. December 2020. doi:10.1097/AUD.0000000000000891

Abstract: OBJECTIVES: When one ear of an individual can hear significantly better than the other ear, evaluating the worse ear with loud probe tones may require delivering masking noise to the better ear to prevent the probe tones from inadvertently being heard by the better ear. Current masking protocols are confusing, laborious, and time consuming. Adding a standardized masking protocol to an active machine learning audiogram procedure could potentially alleviate all of these drawbacks by dynamically adapting the masking as needed for each individual. The goal of this study is to determine the accuracy and efficiency of automated machine learning masking for obtaining true hearing thresholds.

DESIGN: Dynamically masked automated audiograms were collected for 29 participants between the ages of 21 and 83 (mean 43, SD 20) with a wide range of hearing abilities. Normal-hearing listeners were given unmasked and masked machine learning audiogram tests. Listeners with hearing loss were given a standard audiogram test by an audiologist, with masking stimuli added as clinically determined, followed by a masked machine learning audiogram test. The hearing thresholds estimated for each pair of techniques were compared at standard audiogram frequencies (i.e., 0.25, 0.5, 1, 2, 4, 8 kHz).

RESULTS: Masked and unmasked machine learning audiogram threshold estimates matched each other well in normal-hearing listeners, with a mean absolute difference between threshold estimates of 3.4 dB. Masked machine learning audiogram thresholds also matched well the thresholds determined by a conventional masking procedure, with a mean absolute difference between threshold estimates for listeners with low asymmetry and high asymmetry between the ears, respectively, of 4.9 and 2.6 dB. Notably, out of 6200 masked machine learning audiogram tone deliveries for this study, no instances of tones detected by the nontest ear were documented. The machine learning methods were also generally faster than the manual methods, and for some listeners, substantially so.

CONCLUSIONS: Dynamically masked audiograms achieve accurate true threshold estimates and reduce test time compared with current clinical masking procedures. Dynamic masking is a compelling alternative to the methods currently used to evaluate individuals with highly asymmetric hearing, yet can also be used effectively and efficiently for anyone.
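
To make the masking problem concrete, here is a textbook-style sketch of the two decisions a masking protocol automates: whether a probe tone could cross over to the better ear, and how much noise to deliver there if so. The interaural attenuation and safety pad are conventional round numbers, and this simple rule is only a hint at the dynamic, per-trial masking the paper implements.

def masking_needed(test_level_db, nontest_bc_threshold_db, interaural_atten_db=40):
    # A tone can be heard by the nontest cochlea once the level reaching it
    # (test level minus interaural attenuation) meets its bone-conduction threshold
    return test_level_db - interaural_atten_db >= nontest_bc_threshold_db

def masker_level(test_level_db, nontest_ac_threshold_db,
                 interaural_atten_db=40, safety_db=10):
    # Deliver enough noise to the nontest ear to cover the crossed-over tone,
    # plus a safety pad
    crossed = test_level_db - interaural_atten_db
    return max(nontest_ac_threshold_db, crossed) + safety_db

# Example: probing a worse ear at 70 dB HL while the better ear sits at 5 dB HL
if masking_needed(70, nontest_bc_threshold_db=5):
    print("mask at", masker_level(70, nontest_ac_threshold_db=5), "dB HL")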

Chesley, B.; and Barbour, D. L. Visual Field Estimation by Probabilistic Classification. IEEE Journal of Biomedical and Health Informatics, 24(12): 3499–3506. December 2020. doi:10.1109/JBHI.2020.2999567

Abstract: The gold standard clinical tool for evaluating visual dysfunction in cases of glaucoma and other disorders of vision remains the visual field or threshold perimetry exam. Administration of this exam has evolved over the years into a sophisticated, standardized, automated algorithm that relies heavily on specifics of disease processes particular to common retinal disorders. The purpose of this study is to evaluate the utility of a novel general estimator applied to visual field testing. A multidimensional psychometric function estimation tool was applied to visual field estimation. This tool is built on semiparametric probabilistic classification rather than multiple logistic regression. It combines the flexibility of nonparametric estimators and the efficiency of parametric estimators. Simulated visual fields were generated from human patients with a variety of diagnoses, and the errors between simulated ground truth and estimated visual fields were quantified. Error rates of the estimates were low, typically within 2 dB units of ground truth on average. The greatest threshold errors appeared to be confined to the portions of the threshold function with the highest spatial frequencies. This method can accurately estimate a variety of visual field profiles with continuous threshold estimates, potentially using a relatively small number of stimuli.
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Domestic Service Corps™.\n \n \n \n \n\n\n \n Barbour, D. L.\n\n\n \n\n\n\n Technical Report OSF Preprints, May 2020.\n \n\n\n\n
\n\n\n\n \n \n \"DomesticPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n\n\n\n
\n
@techreport{barbour_d_l_domestic_2020,\n\ttitle = {Domestic {Service} {Corps}™},\n\turl = {https://osf.io/524sb/},\n\tabstract = {The Domestic Service Corps™ (DSC) is an organization built upon a technology platform to link members together dynamically in order to brainstorm potential solutions to problems. These problems are posed by government employee clients (federal, state, local) of theoretical or applied significance. Principles shown to be successful at distributing problems solvable by individuals will be adopted, such as human intelligence tests with Mechanical Turk and protein conformations with fold.it. Successful strategies used in matchmaking algorithms such as eHarmony and the medical residents' match will be used to dynamically create compatible working groups among DSC members more diverse than is typically possible. Time spent by any member on the platform will be limited in order to retain high-quality engagement, extend interactions over longer periods to engage more creative processes, and protect members from overuse. Participation is expected to aid members as well as clients, engaging Americans not typically able to serve full-time or in-person and imbuing them with a sense of purpose expected to deliver advantages consistent with known mental benefits of service to others.},\n\turldate = {2020-11-11},\n\tinstitution = {OSF Preprints},\n\tauthor = {{Barbour, D. L.}},\n\tmonth = may,\n\tyear = {2020},\n\tdoi = {10.31219/osf.io/524sb},\n\tkeywords = {Physical Sciences and Mathematics, Psychology, Social and Behavioral Sciences},\n}\n\n
\n
\n\n\n
\n The Domestic Service Corps™ (DSC) is an organization built upon a technology platform to link members together dynamically in order to brainstorm potential solutions to problems. These problems are posed by government employee clients (federal, state, local) of theoretical or applied significance. Principles shown to be successful at distributing problems solvable by individuals will be adopted, such as human intelligence tests with Mechanical Turk and protein conformations with fold.it. Successful strategies used in matchmaking algorithms such as eHarmony and the medical residents' match will be used to dynamically create compatible working groups among DSC members more diverse than is typically possible. Time spent by any member on the platform will be limited in order to retain high-quality engagement, extend interactions over longer periods to engage more creative processes, and protect members from overuse. Participation is expected to aid members as well as clients, engaging Americans not typically able to serve full-time or in-person and imbuing them with a sense of purpose expected to deliver advantages consistent with known mental benefits of service to others.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Accelerating Psychometric Screening Tests With Bayesian Active Differential Selection.\n \n \n \n \n\n\n \n Larsen, T. J.; Malkomes, G.; and Barbour, D. L.\n\n\n \n\n\n\n arXiv:2002.01547 [cs, stat]. February 2020.\n arXiv: 2002.01547\n\n\n\n
\n\n\n\n \n \n \"AcceleratingPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n\n\n\n
\n
@article{larsen_accelerating_2020,\n\ttitle = {Accelerating {Psychometric} {Screening} {Tests} {With} {Bayesian} {Active} {Differential} {Selection}},\n\turl = {http://arxiv.org/abs/2002.01547},\n\tabstract = {Classical methods for psychometric function estimation either require excessive measurements or produce only a low-resolution approximation of the target psychometric function. In this paper, we propose a novel solution for rapid screening for a change in the psychometric function estimation of a given patient. We use Bayesian active model selection to perform an automated pure-tone audiogram test with the goal of quickly finding if the current audiogram will be different from a previous audiogram. We validate our approach using audiometric data from the National Institute for Occupational Safety and Health (NIOSH). Initial results show that with a few tones we can detect if the patient's audiometric function has changed between the two test sessions with high confidence.},\n\turldate = {2020-11-11},\n\tjournal = {arXiv:2002.01547 [cs, stat]},\n\tauthor = {Larsen, Trevor J. and Malkomes, Gustavo and {Barbour, D. L.}},\n\tmonth = feb,\n\tyear = {2020},\n\tnote = {arXiv: 2002.01547},\n\tkeywords = {Computer Science - Machine Learning, Statistics - Machine Learning},\n}\n\n
\n
\n\n\n
\n Classical methods for psychometric function estimation either require excessive measurements or produce only a low-resolution approximation of the target psychometric function. In this paper, we propose a novel solution for rapid screening for a change in the psychometric function estimation of a given patient. We use Bayesian active model selection to perform an automated pure-tone audiogram test with the goal of quickly finding if the current audiogram will be different from a previous audiogram. We validate our approach using audiometric data from the National Institute for Occupational Safety and Health (NIOSH). Initial results show that with a few tones we can detect if the patient's audiometric function has changed between the two test sessions with high confidence.\n
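The screening logic can be sketched as Bayesian model comparison between an "unchanged" and a "changed" hypothesis, probing tones where the two hypotheses disagree most. This is a simplification of Bayesian active model selection, and every number below (the two threshold curves, the psychometric spread, and the 15 dB shift) is invented for illustration.

```python
# Simplified sketch of the screening idea (not the paper's exact algorithm):
# maintain a posterior over two hypotheses, "audiogram unchanged" versus a
# hypothesized shift, and probe the tone where the hypotheses disagree most.
import numpy as np

rng = np.random.default_rng(1)
freqs = np.linspace(0.25, 8, 50)                 # kHz grid
old_thresh = 20 + 3 * freqs                      # prior audiogram (invented)
new_thresh = old_thresh + 15 * (freqs > 4)       # hypothetical high-freq loss

def p_detect(level, thresh, spread=3.0):         # sigmoid psychometric curve
    return 1 / (1 + np.exp(-(level - thresh) / spread))

true_thresh = new_thresh                         # simulate an ear that changed
log_post = np.log([0.5, 0.5])                    # P(unchanged), P(changed)

# With two fixed hypotheses the most informative tone is static: probe at the
# old threshold where the models' predicted detection rates differ most.
i = np.argmax(np.abs(p_detect(old_thresh, old_thresh)
                     - p_detect(old_thresh, new_thresh)))
for _ in range(10):
    heard = rng.uniform() < p_detect(old_thresh[i], true_thresh[i])
    for m, th in enumerate((old_thresh, new_thresh)):
        p = p_detect(old_thresh[i], th[i])
        log_post[m] += np.log(p if heard else 1 - p)
    log_post -= np.logaddexp(log_post[0], log_post[1])  # renormalize

print("P(changed) after 10 tones:", np.exp(log_post[1]))
```

A handful of maximally diagnostic tones is usually enough to push the model posterior near certainty, which is the intuition behind the abstract's "few tones" claim.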
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Talking Points: A Modulating Circle Increases Listening Effort Without Improving Speech Recognition in Young Adults.\n \n \n \n\n\n \n Strand, J. F.; Brown, V. A.; and Barbour, D. L.\n\n\n \n\n\n\n Psychonomic Bulletin & Review, 27(3): 536–543. June 2020.\n \n\n\n\n
\n
@article{strand_talking_2020,\n\ttitle = {Talking {Points}: {A} {Modulating} {Circle} {Increases} {Listening} {Effort} {Without} {Improving} {Speech} {Recognition} in {Young} {Adults}},\n\tvolume = {27},\n\tissn = {1531-5320},\n\tshorttitle = {Talking {Points}},\n\tdoi = {10.3758/s13423-020-01713-y},\n\tabstract = {Speech recognition is improved when the acoustic input is accompanied by visual cues provided by a talking face (Erber in Journal of Speech and Hearing Research, 12(2), 423-425, 1969; Sumby \\& Pollack in The Journal of the Acoustical Society of America, 26(2), 212-215, 1954). One way that the visual signal facilitates speech recognition is by providing the listener with information about fine phonetic detail that complements information from the auditory signal. However, given that degraded face stimuli can still improve speech recognition accuracy (Munhall, Kroos, Jozan, \\& Vatikiotis-Bateson in Perception \\& Psychophysics, 66(4), 574-583, 2004), and static or moving shapes can improve speech detection accuracy (Bernstein, Auer, \\& Takayanagi in Speech Communication, 44(1-4), 5-18, 2004), aspects of the visual signal other than fine phonetic detail may also contribute to the perception of speech. In two experiments, we show that a modulating circle providing information about the onset, offset, and acoustic amplitude envelope of the speech does not improve recognition of spoken sentences (Experiment 1) or words (Experiment 2). Further, contrary to our hypothesis, the modulating circle increased listening effort despite subjective reports that it made the word recognition task seem easier to complete (Experiment 2). These results suggest that audiovisual speech processing, even when the visual stimulus only conveys temporal information about the acoustic signal, may be a cognitively demanding process.},\n\tlanguage = {eng},\n\tnumber = {3},\n\tjournal = {Psychonomic Bulletin \\& Review},\n\tauthor = {Strand, Julia F. and Brown, Violet A. and {Barbour, D. L.}},\n\tmonth = jun,\n\tyear = {2020},\n\tpmid = {32128719},\n\tkeywords = {cross-modal attention, speech perception, spoken word recognition},\n\tpages = {536--543},\n}\n\n
\n
\n\n\n
\n Speech recognition is improved when the acoustic input is accompanied by visual cues provided by a talking face (Erber in Journal of Speech and Hearing Research, 12(2), 423-425, 1969; Sumby & Pollack in The Journal of the Acoustical Society of America, 26(2), 212-215, 1954). One way that the visual signal facilitates speech recognition is by providing the listener with information about fine phonetic detail that complements information from the auditory signal. However, given that degraded face stimuli can still improve speech recognition accuracy (Munhall, Kroos, Jozan, & Vatikiotis-Bateson in Perception & Psychophysics, 66(4), 574-583, 2004), and static or moving shapes can improve speech detection accuracy (Bernstein, Auer, & Takayanagi in Speech Communication, 44(1-4), 5-18, 2004), aspects of the visual signal other than fine phonetic detail may also contribute to the perception of speech. In two experiments, we show that a modulating circle providing information about the onset, offset, and acoustic amplitude envelope of the speech does not improve recognition of spoken sentences (Experiment 1) or words (Experiment 2). Further, contrary to our hypothesis, the modulating circle increased listening effort despite subjective reports that it made the word recognition task seem easier to complete (Experiment 2). These results suggest that audiovisual speech processing, even when the visual stimulus only conveys temporal information about the acoustic signal, may be a cognitively demanding process.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Computational Audiology: New Approaches to Advance Hearing Health Care in the Digital Age.\n \n \n \n \n\n\n \n Wasmann, J.; Lanting, C.; Huinck, W.; Mylanus, E.; Laak, J. v. d.; Govaerts, P.; Swanepoel, D. W.; Moore, D. R.; and Barbour, D. L.\n\n\n \n\n\n\n Technical Report PsyArXiv, June 2020.\n \n\n\n\n
\n\n\n\n \n \n \"ComputationalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@techreport{wasmann_computational_2020,\n\ttitle = {Computational {Audiology}: {New} {Approaches} to {Advance} {Hearing} {Health} {Care} in the {Digital} {Age}},\n\tshorttitle = {Computational {Audiology}},\n\turl = {https://psyarxiv.com/hu8eg/},\n\tabstract = {The global digital transformation enables computational audiology for advanced clinical applications that have the potential to impact the global burden of hearing loss. In this paper we describe emerging hearing-related artificial intelligence applications and argue for their potential to improve access, precision and efficiency of hearing health care services. In addition, we raise awareness of risks that must be addressed to enable a safe digital transformation in audiology. We envision a future where computational audiology is implemented via open-source models using interoperable shared data and where health care providers adopt new roles within a network of distributed expertise. All of this should take place in a health care system where privacy, the responsibility of each stakeholder and, most importantly, the safety and autonomy of patients are all guarded by design.},\n\turldate = {2020-11-11},\n\tinstitution = {PsyArXiv},\n\tauthor = {Wasmann, Jan-Willem and Lanting, Cris and Huinck, Wendy and Mylanus, Emmanuel and Laak, Jeroen van der and Govaerts, Paul and Swanepoel, De Wet and Moore, David R. and {Barbour, D. L.}},\n\tmonth = jun,\n\tyear = {2020},\n\tdoi = {10.31234/osf.io/hu8eg},\n\tkeywords = {Artificial Intelligence, Big Data, Computational Audiology, Computational Infrastructure, Digital Hearing Health care, Engineering Psychology, Hearing Loss, Machine Learning},\n}\n\n
\n
\n\n\n
\n The global digital transformation enables computational audiology for advanced clinical applications that have the potential to impact the global burden of hearing loss. In this paper we describe emerging hearing-related artificial intelligence applications and argue for their potential to improve access, precision and efficiency of hearing health care services. In addition, we raise awareness of risks that must be addressed to enable a safe digital transformation in audiology. We envision a future where computational audiology is implemented via open-source models using interoperable shared data and where health care providers adopt new roles within a network of distributed expertise. All of this should take place in a health care system where privacy, the responsibility of each stakeholder and, most importantly, the safety and autonomy of patients are all guarded by design.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2019\n \n \n (4)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n Precision Medicine and the Cursed Dimensions.\n \n \n \n\n\n \n Barbour, D. L.\n\n\n \n\n\n\n NPJ Digital Medicine, 2: 4. 2019.\n \n\n\n\n
\n
@article{barbour_d_l_precision_2019,\n\ttitle = {Precision {Medicine} and the {Cursed} {Dimensions}},\n\tvolume = {2},\n\tissn = {2398-6352},\n\tdoi = {10.1038/s41746-019-0081-5},\n\tabstract = {Our intuition regarding "average" is rooted in one-dimensional thinking, such as the distribution of height across a population. This intuition breaks down in higher dimensions when multiple measurements are combined: fewer individuals are close to average for many measurements simultaneously than for any single measurement alone. This phenomenon is known as the curse of dimensionality. In medicine, diagnostic sophistication generally increases through the addition of more predictive factors. Disease classes themselves become more dissimilar as a result, increasing the difficulty of incorporating (i.e., averaging) multiple patients into a single class for guiding treatment of new patients. Failure to consider the curse of dimensionality will ultimately lead to inherent limits on the degree to which precision medicine can extend the advances of evidence-based medicine for selecting suitable treatments. One strategy to compensate for the curse of dimensionality involves incorporating predictive observation models into the patient workup.},\n\tlanguage = {eng},\n\tjournal = {NPJ Digital Medicine},\n\tauthor = {{Barbour, D. L.}},\n\tyear = {2019},\n\tpmid = {31304354},\n\tpmcid = {PMC6550148},\n\tkeywords = {Diagnostic markers, Translational research},\n\tpages = {4},\n}\n\n
\n
\n\n\n
\n Our intuition regarding \"average\" is rooted in one-dimensional thinking, such as the distribution of height across a population. This intuition breaks down in higher dimensions when multiple measurements are combined: fewer individuals are close to average for many measurements simultaneously than for any single measurement alone. This phenomenon is known as the curse of dimensionality. In medicine, diagnostic sophistication generally increases through the addition of more predictive factors. Disease classes themselves become more dissimilar as a result, increasing the difficulty of incorporating (i.e., averaging) multiple patients into a single class for guiding treatment of new patients. Failure to consider the curse of dimensionality will ultimately lead to inherent limits on the degree to which precision medicine can extend the advances of evidence-based medicine for selecting suitable treatments. One strategy to compensate for the curse of dimensionality involves incorporating predictive observation models into the patient workup.\n
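A short simulation makes the opening claim concrete: with independent, roughly normal measurements, the share of people near average on every measurement at once collapses as measurements accumulate. The 0.5 SD tolerance and population size below are arbitrary choices for illustration.

```python
# Worked illustration of the abstract's central claim: with independent
# standard-normal "measurements", the share of individuals near average on
# all of them simultaneously shrinks geometrically with dimension.
import numpy as np

rng = np.random.default_rng(0)
pop = rng.standard_normal((100_000, 10))   # 100k simulated patients, 10 traits

for d in (1, 2, 5, 10):
    near_avg = (np.abs(pop[:, :d]) < 0.5).all(axis=1).mean()
    print(f"{d:2d} measurement(s): {near_avg:.2%} within 0.5 SD on all")
```

Since each trait independently keeps about 38% of the population, the joint fraction falls roughly as 0.38^d, from about 38% at one measurement to hundredths of a percent at ten.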
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Online Machine Learning Audiometry.\n \n \n \n \n\n\n \n Barbour, D. L.; Howard, R. T.; Song, X. D.; Metzger, N.; Sukesan, K. A.; DiLorenzo, J. C.; Snyder, B. R. D.; Chen, J. Y.; Degen, E. A.; Buchbinder, J. M.; and Heisey, K. L.\n\n\n \n\n\n\n Ear and Hearing, 40(4): 918–926. August 2019.\n \n\n\n\n
\n\n\n\n \n \n \"OnlinePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{barbour_d_l_online_2019,\n\ttitle = {Online {Machine} {Learning} {Audiometry}},\n\tvolume = {40},\n\tissn = {1538-4667},\n\turl = {https://journals.lww.com/ear-hearing/Abstract/2019/07000/Online_Machine_Learning_Audiometry.14.aspx},\n\tdoi = {10.1097/AUD.0000000000000669},\n\tabstract = {OBJECTIVES: A confluence of recent developments in cloud computing, real-time web audio and machine learning psychometric function estimation has made wide dissemination of sophisticated turn-key audiometric assessments possible. The authors have combined these capabilities into an online (i.e., web-based) pure-tone audiogram estimator intended to empower researchers and clinicians with advanced hearing tests without the need for custom programming or special hardware. The objective of this study was to assess the accuracy and reliability of this new online machine learning audiogram method relative to a commonly used hearing threshold estimation technique also implemented online for the first time in the same platform.\nDESIGN: The authors performed air conduction pure-tone audiometry on 21 participants between the ages of 19 and 79 years (mean 41, SD 21) exhibiting a wide range of hearing abilities. For each ear, two repetitions of online machine learning audiogram estimation and two repetitions of online modified Hughson-Westlake ascending-descending audiogram estimation were acquired by an audiologist using the online software tools. The estimated hearing thresholds of these two techniques were compared at standard audiogram frequencies (i.e., 0.25, 0.5, 1, 2, 4, 8 kHz).\nRESULTS: The two threshold estimation methods delivered very similar threshold estimates at standard audiogram frequencies. Specifically, the mean absolute difference between threshold estimates was 3.24 ± 5.15 dB. The mean absolute differences between repeated measurements of the online machine learning procedure and between repeated measurements of the Hughson-Westlake procedure were 2.85 ± 6.57 dB and 1.88 ± 3.56 dB, respectively. The machine learning method generated estimates of both threshold and spread (i.e., the inverse of psychometric slope) continuously across the entire frequency range tested from fewer samples on average than the modified Hughson-Westlake procedure required to estimate six discrete thresholds.\nCONCLUSIONS: Online machine learning audiogram estimation in its current form provides all the information of conventional threshold audiometry with similar accuracy and reliability in less time. More importantly, however, this method provides additional audiogram details not provided by other methods. This standardized platform can be readily extended to bone conduction, masking, spectrotemporal modulation, speech perception, etc., unifying audiometric testing into a single comprehensive procedure efficient enough to become part of the standard audiologic workup.},\n\tlanguage = {eng},\n\tnumber = {4},\n\tjournal = {Ear and Hearing},\n\tauthor = {{Barbour, D. L.} and Howard, Rebecca T. and Song, Xinyu D. and Metzger, Nikki and Sukesan, Kiron A. and DiLorenzo, James C. and Snyder, Braham R. D. and Chen, Jeff Y. and Degen, Eleanor A. and Buchbinder, Jenna M. and Heisey, Katherine L.},\n\tmonth = aug,\n\tyear = {2019},\n\tpmid = {30358656},\n\tpmcid = {PMC6476703},\n\tkeywords = {Adult, Aged, Audiometry, Pure-Tone, Female, Hearing Loss, Humans, Internet, Machine Learning, Male, Middle Aged, Reproducibility of Results, Severity of Illness Index, Young Adult},\n\tpages = {918--926},\n}\n\n
\n
\n\n\n
\n OBJECTIVES: A confluence of recent developments in cloud computing, real-time web audio and machine learning psychometric function estimation has made wide dissemination of sophisticated turn-key audiometric assessments possible. The authors have combined these capabilities into an online (i.e., web-based) pure-tone audiogram estimator intended to empower researchers and clinicians with advanced hearing tests without the need for custom programming or special hardware. The objective of this study was to assess the accuracy and reliability of this new online machine learning audiogram method relative to a commonly used hearing threshold estimation technique also implemented online for the first time in the same platform. DESIGN: The authors performed air conduction pure-tone audiometry on 21 participants between the ages of 19 and 79 years (mean 41, SD 21) exhibiting a wide range of hearing abilities. For each ear, two repetitions of online machine learning audiogram estimation and two repetitions of online modified Hughson-Westlake ascending-descending audiogram estimation were acquired by an audiologist using the online software tools. The estimated hearing thresholds of these two techniques were compared at standard audiogram frequencies (i.e., 0.25, 0.5, 1, 2, 4, 8 kHz). RESULTS: The two threshold estimation methods delivered very similar threshold estimates at standard audiogram frequencies. Specifically, the mean absolute difference between threshold estimates was 3.24 ± 5.15 dB. The mean absolute differences between repeated measurements of the online machine learning procedure and between repeated measurements of the Hughson-Westlake procedure were 2.85 ± 6.57 dB and 1.88 ± 3.56 dB, respectively. The machine learning method generated estimates of both threshold and spread (i.e., the inverse of psychometric slope) continuously across the entire frequency range tested from fewer samples on average than the modified Hughson-Westlake procedure required to estimate six discrete thresholds. CONCLUSIONS: Online machine learning audiogram estimation in its current form provides all the information of conventional threshold audiometry with similar accuracy and reliability in less time. More importantly, however, this method provides additional audiogram details not provided by other methods. This standardized platform can be readily extended to bone conduction, masking, spectrotemporal modulation, speech perception, etc., unifying audiometric testing into a single comprehensive procedure efficient enough to become part of the standard audiologic workup.\n
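The machine learning estimator follows the Gaussian process approach used elsewhere in this bibliography; the comparison procedure is simple enough to sketch directly. Below is a minimal modified Hughson-Westlake staircase (down 10 dB after a response, up 5 dB after a miss) with a simulated listener standing in for a subject. The stopping rule, two responses at the same level on ascending runs, is a simplification of the clinical 2-of-3 criterion, and all levels are invented.

```python
# Sketch of the modified Hughson-Westlake staircase used as the comparison
# method: descend 10 dB after a response, ascend 5 dB after a miss, and stop
# at the lowest level heard twice on ascending presentations.
import numpy as np

rng = np.random.default_rng(2)

def listener(level, true_thresh=35.0, spread=2.0):   # simulated subject
    return rng.uniform() < 1 / (1 + np.exp(-(level - true_thresh) / spread))

def hughson_westlake(start=50, lo=-10, hi=100):
    level, ascending, hits = start, False, {}
    for _ in range(60):                        # safety cap on trial count
        if listener(level):
            if ascending:                      # only ascending runs count
                hits[level] = hits.get(level, 0) + 1
                if hits[level] >= 2:           # heard twice ascending: done
                    return level
            level, ascending = max(lo, level - 10), False   # drop 10 dB
        else:
            level, ascending = min(hi, level + 5), True     # raise 5 dB
    return None

print("Hughson-Westlake threshold estimate:", hughson_westlake(), "dB HL")
```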
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Conjoint Psychometric Field Estimation for Bilateral Audiometry.\n \n \n \n \n\n\n \n Barbour, D. L.; DiLorenzo, J. C.; Sukesan, K. A.; Song, X. D.; Chen, J. Y.; Degen, E. A.; Heisey, K. L.; and Garnett, R.\n\n\n \n\n\n\n Behavior Research Methods, 51(3): 1271–1285. 2019.\n \n\n\n\n
\n\n\n\n \n \n \"ConjointPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{barbour_d_l_conjoint_2019,\n\ttitle = {Conjoint {Psychometric} {Field} {Estimation} for {Bilateral} {Audiometry}},\n\tvolume = {51},\n\tissn = {1554-3528},\n\turl = {https://link.springer.com/article/10.3758%2Fs13428-018-1062-3},\n\tdoi = {10.3758/s13428-018-1062-3},\n\tabstract = {Behavioral testing in perceptual or cognitive domains requires querying a subject multiple times in order to quantify his or her ability in the corresponding domain. These queries must be conducted sequentially, and any additional testing domains are also typically tested sequentially, such as with distinct tests comprising a test battery. As a result, existing behavioral tests are often lengthy and do not offer comprehensive evaluation. The use of active machine-learning kernel methods for behavioral assessment provides extremely flexible yet efficient estimation tools to more thoroughly investigate perceptual or cognitive processes without incurring the penalty of excessive testing time. Audiometry represents perhaps the simplest test case to demonstrate the utility of these techniques. In pure-tone audiometry, hearing is assessed in the two-dimensional input space of frequency and intensity, and the test is repeated for both ears. Although an individual's ears are not linked physiologically, they share many features in common that lead to correlations suitable for exploitation in testing. The bilateral audiogram estimates hearing thresholds in both ears simultaneously by conjoining their separate input domains into a single search space, which can be evaluated efficiently with modern machine-learning methods. The result is the introduction of the first conjoint psychometric function estimation procedure, which consistently delivers accurate results in significantly less time than sequential disjoint estimators.},\n\tlanguage = {eng},\n\tnumber = {3},\n\tjournal = {Behavior Research Methods},\n\tauthor = {{Barbour, D. L.} and DiLorenzo, James C. and Sukesan, Kiron A. and Song, Xinyu D. and Chen, Jeff Y. and Degen, Eleanor A. and Heisey, Katherine L. and Garnett, Roman},\n\tyear = {2019},\n\tpmid = {29949072},\n\tpmcid = {PMC6291374},\n\tkeywords = {Audiometry, Audiometry, Pure-Tone, Auditory Threshold, Hearing, Humans, Machine Learning, Perceptual testing, Psychometric function, Psychometrics, Psychophysics},\n\tpages = {1271--1285},\n}\n\n
\n
\n\n\n
\n Behavioral testing in perceptual or cognitive domains requires querying a subject multiple times in order to quantify his or her ability in the corresponding domain. These queries must be conducted sequentially, and any additional testing domains are also typically tested sequentially, such as with distinct tests comprising a test battery. As a result, existing behavioral tests are often lengthy and do not offer comprehensive evaluation. The use of active machine-learning kernel methods for behavioral assessment provides extremely flexible yet efficient estimation tools to more thoroughly investigate perceptual or cognitive processes without incurring the penalty of excessive testing time. Audiometry represents perhaps the simplest test case to demonstrate the utility of these techniques. In pure-tone audiometry, hearing is assessed in the two-dimensional input space of frequency and intensity, and the test is repeated for both ears. Although an individual's ears are not linked physiologically, they share many features in common that lead to correlations suitable for exploitation in testing. The bilateral audiogram estimates hearing thresholds in both ears simultaneously by conjoining their separate input domains into a single search space, which can be evaluated efficiently with modern machine-learning methods. The result is the introduction of the first conjoint psychometric function estimation procedure, which consistently delivers accurate results in significantly less time than sequential disjoint estimators.\n
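The conjoining described here can be sketched by adding ear identity as a third input dimension, so a single Gaussian process classifier is trained over (frequency, intensity, ear) and correlations between ears are absorbed by the kernel. The kernel form, length scales, and simulated thresholds below are assumptions for illustration, not the paper's model.

```python
# Minimal sketch of "conjoint" estimation: fold ear identity into the input
# space so one Gaussian process classifier models both ears, letting data
# from one ear inform the other. All numbers are invented.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)

def thresh(freq_oct, ear):                  # correlated ears: shared shape,
    return 20 + 6 * freq_oct + 5 * ear      # small constant offset (made up)

n = 400
X = np.column_stack([rng.uniform(0, 5, n),          # log2 frequency
                     rng.uniform(0, 70, n),         # level, dB HL
                     rng.integers(0, 2, n)])        # ear: 0 = left, 1 = right
p = 1 / (1 + np.exp(-(X[:, 1] - thresh(X[:, 0], X[:, 2])) / 3.0))
y = rng.uniform(size=n) < p

# A long length scale on the ear dimension encodes "ears are similar", so
# the conjoined model shares statistical strength across both ears.
kernel = 1.0 * RBF(length_scale=[1.0, 10.0, 2.0])
model = GaussianProcessClassifier(kernel=kernel).fit(X, y)
q = model.predict_proba([[2.0, 35.0, 0], [2.0, 35.0, 1]])[:, 1]
print("P(heard) at 35 dB, left vs right:", q)
```

Because the search space is shared, each trial in one ear also tightens the estimate in the other, which is the mechanism behind the faster convergence reported in the abstract.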
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Population Responses Represent Vocalization Identity, Intensity, and Signal-to-Noise Ratio in Primary Auditory Cortex.\n \n \n \n \n\n\n \n Ni, R.; Bender, D. A.; and Barbour, D. L.\n\n\n \n\n\n\n bioRxiv,2019.12.21.886101. December 2019.\n Publisher: Cold Spring Harbor Laboratory Section: New Results\n\n\n\n
\n\n\n\n \n \n \"PopulationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{ni_population_2019,\n\ttitle = {Population {Responses} {Represent} {Vocalization} {Identity}, {Intensity}, and {Signal}-to-{Noise} {Ratio} in {Primary} {Auditory} {Cortex}},\n\tcopyright = {© 2019, Posted by Cold Spring Harbor Laboratory. This pre-print is available under a Creative Commons License (Attribution-NonCommercial-NoDerivs 4.0 International), CC BY-NC-ND 4.0, as described at http://creativecommons.org/licenses/by-nc-nd/4.0/},\n\turl = {https://www.biorxiv.org/content/10.1101/2019.12.21.886101v1},\n\tdoi = {10.1101/2019.12.21.886101},\n\tabstract = {Abstract: The ability to process speech signals in challenging listening environments is critical for speech perception. Great effort has been made to reveal the underlying single-unit encoding mechanisms; however, single-unit responses are usually highly variable, and the population coding mechanism has yet to be revealed. In this study, we aimed to determine how a population of neurons encodes behaviorally relevant signals subject to changes in intensity and signal-to-noise ratio (SNR). We recorded single-unit activity from the primary auditory cortex of awake common marmoset monkeys (Callithrix jacchus) while delivering conspecific vocalizations degraded by two different background noises: broadband white noise (WGN) and vocalization babble (Babble). Pooling all single units together, pseudo-population analysis showed that intra- and inter-trajectory angle evolutions of the population neural responses track vocalization identity and intensity/SNR, respectively. The ability of the trajectory to track vocalization attributes was degraded to different degrees by the different noises. Discrimination performance of neural response classifiers revealed that a finer optimal temporal resolution and a longer time scale of temporal dynamics were needed for vocalizations in noise than for vocalizations at multiple intensities. The ability of population responses to discriminate between different vocalizations was mostly retained above the detection threshold. Significance Statement: How the brain excels at precise acoustic signal encoding in noisy environments is of great interest to scientists. Relatively few studies have tackled this mystery from the perspective of neural population responses. Population analysis reveals the underlying neural encoding mechanism of complex acoustic stimuli based on a pool of single units via vector coding. We suggest spatial population response vectors as one important way for neurons to integrate multiple attributes of natural acoustic signals, specifically marmoset vocalizations.},\n\tlanguage = {en},\n\turldate = {2020-11-11},\n\tjournal = {bioRxiv},\n\tauthor = {Ni, Ruiye and Bender, David A. and {Barbour, D. L.}},\n\tmonth = dec,\n\tyear = {2019},\n\tnote = {Publisher: Cold Spring Harbor Laboratory\nSection: New Results},\n\tpages = {2019.12.21.886101},\n}\n\n
\n
\n\n\n
\n Abstract: The ability to process speech signals in challenging listening environments is critical for speech perception. Great effort has been made to reveal the underlying single-unit encoding mechanisms; however, single-unit responses are usually highly variable, and the population coding mechanism has yet to be revealed. In this study, we aimed to determine how a population of neurons encodes behaviorally relevant signals subject to changes in intensity and signal-to-noise ratio (SNR). We recorded single-unit activity from the primary auditory cortex of awake common marmoset monkeys (Callithrix jacchus) while delivering conspecific vocalizations degraded by two different background noises: broadband white noise (WGN) and vocalization babble (Babble). Pooling all single units together, pseudo-population analysis showed that intra- and inter-trajectory angle evolutions of the population neural responses track vocalization identity and intensity/SNR, respectively. The ability of the trajectory to track vocalization attributes was degraded to different degrees by the different noises. Discrimination performance of neural response classifiers revealed that a finer optimal temporal resolution and a longer time scale of temporal dynamics were needed for vocalizations in noise than for vocalizations at multiple intensities. The ability of population responses to discriminate between different vocalizations was mostly retained above the detection threshold. Significance Statement: How the brain excels at precise acoustic signal encoding in noisy environments is of great interest to scientists. Relatively few studies have tackled this mystery from the perspective of neural population responses. Population analysis reveals the underlying neural encoding mechanism of complex acoustic stimuli based on a pool of single units via vector coding. We suggest spatial population response vectors as one important way for neurons to integrate multiple attributes of natural acoustic signals, specifically marmoset vocalizations.\n
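The pseudo-population analysis pools units recorded in separate sessions into one response vector per trial. As a loose sketch of the decoding step only (not the trajectory-angle analysis), a classifier can be asked to recover vocalization identity from simulated Poisson spike-count vectors; all tuning parameters below are invented.

```python
# Sketch of the pseudo-population decoding idea: pool units recorded
# separately, represent each trial as a population spike-count vector, and
# ask a classifier to recover vocalization identity. All numbers invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_units, n_vocs, n_trials = 60, 4, 40
rates = rng.uniform(2, 20, size=(n_vocs, n_units))     # tuning per unit

X = np.vstack([rng.poisson(rates[v], size=(n_trials, n_units))
               for v in range(n_vocs)])                # trials x units
y = np.repeat(np.arange(n_vocs), n_trials)

acc = cross_val_score(LogisticRegression(max_iter=2000), X, y, cv=5).mean()
print(f"cross-validated vocalization decoding accuracy: {acc:.2f} "
      f"(chance = {1 / n_vocs:.2f})")
```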
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2018\n \n \n (5)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n Formal Idiographic Inference in Medicine.\n \n \n \n\n\n \n Barbour, D. L.\n\n\n \n\n\n\n JAMA Otolaryngology-Head & Neck Surgery, 144(6): 467–468. 2018.\n \n\n\n\n
\n
@article{barbour_d_l_formal_2018,\n\ttitle = {Formal {Idiographic} {Inference} in {Medicine}},\n\tvolume = {144},\n\tissn = {2168-619X},\n\tdoi = {10.1001/jamaoto.2018.0254},\n\tlanguage = {eng},\n\tnumber = {6},\n\tjournal = {JAMA Otolaryngology-Head \\& Neck Surgery},\n\tauthor = {{Barbour, D. L.}},\n\tyear = {2018},\n\tpmid = {29801065},\n\tkeywords = {Decision Making, Diagnosis, Differential, Evidence-Based Medicine, Humans, Judgment, Philosophy, Medical, Precision Medicine},\n\tpages = {467--468},\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Bayesian Active Probabilistic Classification for Psychometric Field Estimation.\n \n \n \n \n\n\n \n Song, X. D.; Sukesan, K. A.; and Barbour, D. L.\n\n\n \n\n\n\n Attention, Perception & Psychophysics, 80(3): 798–812. April 2018.\n \n\n\n\n
\n\n\n\n \n \n \"BayesianPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{song_bayesian_2018,\n\ttitle = {Bayesian {Active} {Probabilistic} {Classification} for {Psychometric} {Field} {Estimation}},\n\tvolume = {80},\n\tissn = {1943-393X},\n\turl = {https://link.springer.com/article/10.3758%2Fs13414-017-1460-0},\n\tdoi = {10.3758/s13414-017-1460-0},\n\tabstract = {Psychometric functions are typically estimated by fitting a parametric model to categorical subject responses. Procedures to estimate unidimensional psychometric functions (i.e., psychometric curves) have been subjected to the most research, with modern adaptive methods capable of quickly obtaining accurate estimates. These capabilities have been extended to some multidimensional psychometric functions (i.e., psychometric fields) that are easily parameterizable, but flexible procedures for general psychometric field estimation are lacking. This study introduces a nonparametric Bayesian psychometric field estimator operating on subject queries sequentially selected to improve the estimate in some targeted way. This estimator implements probabilistic classification using Gaussian processes trained by active learning. The accuracy and efficiency of two different actively sampled estimators were compared to two non-actively sampled estimators for simulations of one of the simplest psychometric fields in common use: the pure-tone audiogram. The actively sampled methods achieved estimate accuracy equivalent to the non-actively sampled methods with fewer observations. This trend held for a variety of audiogram phenotypes representative of the range of human auditory perception. Gaussian process classification is a general estimation procedure capable of extending to multiple input variables and response classes. Its success with a two-dimensional psychometric field informed by binary subject responses holds great promise for extension to complex perceptual models currently inaccessible to practical estimation.},\n\tlanguage = {eng},\n\tnumber = {3},\n\tjournal = {Attention, Perception \\& Psychophysics},\n\tauthor = {Song, Xinyu D. and Sukesan, Kiron A. and {Barbour, D. L.}},\n\tmonth = apr,\n\tyear = {2018},\n\tpmid = {29256098},\n\tpmcid = {PMC5839980},\n\tkeywords = {Audition, Auditory Perception, Bayes Theorem, Hearing Tests, Humans, Models, Statistical, Normal Distribution, Psychoacoustics, Psychometrics, Psychometrics/testing},\n\tpages = {798--812},\n}\n\n
\n
\n\n\n
\n Psychometric functions are typically estimated by fitting a parametric model to categorical subject responses. Procedures to estimate unidimensional psychometric functions (i.e., psychometric curves) have been subjected to the most research, with modern adaptive methods capable of quickly obtaining accurate estimates. These capabilities have been extended to some multidimensional psychometric functions (i.e., psychometric fields) that are easily parameterizable, but flexible procedures for general psychometric field estimation are lacking. This study introduces a nonparametric Bayesian psychometric field estimator operating on subject queries sequentially selected to improve the estimate in some targeted way. This estimator implements probabilistic classification using Gaussian processes trained by active learning. The accuracy and efficiency of two different actively sampled estimators were compared to two non-actively sampled estimators for simulations of one of the simplest psychometric fields in common use: the pure-tone audiogram. The actively sampled methods achieved estimate accuracy equivalent to the non-actively sampled methods with fewer observations. This trend held for a variety of audiogram phenotypes representative of the range of human auditory perception. Gaussian process classification is a general estimation procedure capable of extending to multiple input variables and response classes. Its success with a two-dimensional psychometric field informed by binary subject responses holds great promise for extension to complex perceptual models currently inaccessible to practical estimation.\n
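The active-learning loop can be sketched with plain uncertainty sampling: refit a Gaussian process classifier after each response and probe the (frequency, level) candidate whose predicted detection probability is closest to 0.5. The paper evaluates more targeted acquisition criteria; this simplified criterion, the simulated listener, and the seed trials below are all assumptions.

```python
# Sketch of the active-learning loop: fit a GP classifier to the responses
# so far, then probe the (frequency, level) point where the predicted
# detection probability is closest to 0.5 (maximal uncertainty). Plain
# uncertainty sampling, a simplification of the paper's estimators.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(5)

def heard(f, lvl):                            # simulated listener (made up)
    return rng.uniform() < 1 / (1 + np.exp(-(lvl - (15 + 5 * f)) / 3.0))

cand = np.array([[f, l] for f in np.linspace(0, 5, 21)
                         for l in np.linspace(0, 60, 31)])
X = [[0.0, 0.0], [5.0, 60.0]]                 # seed: one easy miss, one
y = [False, True]                             # easy hit, assumed outcomes

for _ in range(30):
    gpc = GaussianProcessClassifier(1.0 * RBF([1.5, 10.0])).fit(X, y)
    p = gpc.predict_proba(cand)[:, 1]
    nxt = cand[np.argmin(np.abs(p - 0.5))]    # most uncertain candidate
    X.append(list(nxt))
    y.append(heard(*nxt))

print("trials used:", len(X))
```

Because uncertain candidates cluster along the current threshold estimate, the sampled points trace the audiogram contour, which is why actively sampled estimators need fewer observations than fixed sampling grids.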
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Concurrent Bilateral Audiometric Inference.\n \n \n \n \n\n\n \n Heisey, K. L.; Buchbinder, J. M.; and Barbour, D. L.\n\n\n \n\n\n\n Acta Acustica united with Acustica, 104(5): 762–765. September 2018.\n \n\n\n\n
\n\n\n\n \n \n \"ConcurrentPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{heisey_concurrent_2018,\n\ttitle = {Concurrent {Bilateral} {Audiometric} {Inference}},\n\tvolume = {104},\n\turl = {https://www.ingentaconnect.com/content/dav/aaua/2018/00000104/00000005/art00010;jsessionid=ae8rak4g4l0f4.x-ic-live-02},\n\tdoi = {10.3813/AAA.919218},\n\tabstract = {Conventional audiometric testing assesses hearing one ear at a time. Given that a person's two ears often share features in common both in health and disease, this shared variability could be exploited to improve the estimation process or the estimate itself. Here we introduce the active\nbilateral audiogram, which simultaneously estimates the hearing functions of both ears. We show in a cohort of normal-hearing and hearing-impaired listeners that the bilateral audiogram converges to its final estimates significantly faster than sequential active unilateral audiograms in a\nprocess termed conjoint psychoacoustic estimation. © 2018 The Author(s). Published by S. Hirzel Verlag · EAA. This is an open access article under the terms of the Creative Commons Attribution (CC BY 4.0) license (https://creativecommons.org/licenses/by/4.0/).},\n\tnumber = {5},\n\tjournal = {Acta Acustica united with Acustica},\n\tauthor = {Heisey, Katherine L. and Buchbinder, Jenna M. and {Barbour, D. L.}},\n\tmonth = sep,\n\tyear = {2018},\n\tpages = {762--765},\n}\n\n
\n
\n\n\n
\n Conventional audiometric testing assesses hearing one ear at a time. Given that a person's two ears often share features in common both in health and disease, this shared variability could be exploited to improve the estimation process or the estimate itself. Here we introduce the active bilateral audiogram, which simultaneously estimates the hearing functions of both ears. We show in a cohort of normal-hearing and hearing-impaired listeners that the bilateral audiogram converges to its final estimates significantly faster than sequential active unilateral audiograms in a process termed conjoint psychoacoustic estimation. © 2018 The Author(s). Published by S. Hirzel Verlag · EAA. This is an open access article under the terms of the Creative Commons Attribution (CC BY 4.0) license (https://creativecommons.org/licenses/by/4.0/).\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n It’s Game Time.\n \n \n \n \n\n\n \n Barbour, D. L.\n\n\n \n\n\n\n 2018.\n Archive Location: world Library Catalog: leader.pubs.asha.org Publisher: American Speech-Language-Hearing Association\n\n\n\n
\n\n\n\n \n \n \"It’sPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@misc{barbour_d_l_its_2018,\n\ttype = {instructions},\n\ttitle = {It’s {Game} {Time}},\n\tcopyright = {© 2015 American Speech-Language-Hearing Association},\n\turl = {https://leader.pubs.asha.org/doi/full/10.1044/leader.APP.20062015.np},\n\tabstract = {The future of audiologic rehabilitation may be in the palm of your hand—in the form of new hearing-loss gaming apps for smartphones and tablets.},\n\tlanguage = {EN},\n\turldate = {2020-11-11},\n\tjournal = {The ASHA Leader},\n\tauthor = {{Barbour, D. L.}},\n\tyear = {2018},\n\tdoi = {10.1044/leader.APP.20062015.np},\n\tnote = {Archive Location: world\nLibrary Catalog: leader.pubs.asha.org\nPublisher: American Speech-Language-Hearing Association},\n}\n\n
\n
\n\n\n
\n The future of audiologic rehabilitation may be in the palm of your hand—in the form of new hearing-loss gaming apps for smartphones and tablets.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n In Vitro Assay for the Detection of Network Connectivity in Embryonic Stem Cell-Derived Cultures.\n \n \n \n \n\n\n \n Gamble, J. R.; Zhang, E. T.; Iyer, N.; Sakiyama-Elbert, S.; and Barbour, D. L.\n\n\n \n\n\n\n bioRxiv,377689. July 2018.\n Publisher: Cold Spring Harbor Laboratory Section: New Results\n\n\n\n
\n\n\n\n \n \n \"InPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{gamble_vitro_2018,\n\ttitle = {In {Vitro} {Assay} for the {Detection} of {Network} {Connectivity} in {Embryonic} {Stem} {Cell}-{Derived} {Cultures}},\n\tcopyright = {© 2018, Posted by Cold Spring Harbor Laboratory. The copyright holder for this pre-print is the author. All rights reserved. The material may not be redistributed, re-used or adapted without the author's permission.},\n\turl = {https://www.biorxiv.org/content/10.1101/377689v1},\n\tdoi = {10.1101/377689},\n\tabstract = {ABSTRACT: Stem cell transplantation holds great promise as a repair strategy following spinal cord injury. Embryonic stem cell (ESC) transplantation therapies have elicited encouraging though limited improvement in motor and sensory function with the use of heterogeneous mixtures of spinal cord neural progenitors and ESCs. Recently, transgenic lines of ESCs have been developed to allow for purification of specific candidate populations prior to transplantation, but the functional network connectivity of these populations and its relationship to recovery is difficult to examine with current technological limitations. In this study, we combine an ESC differentiation protocol, multi-electrode arrays (MEAs), and previously developed neuronal connectivity detection algorithms to develop an \textit{in vitro} high-throughput assay of network connectivity in ESC-derived populations of neurons. Neuronal aggregation results in more consistent detection of individual neuronal activity than dissociated cultures. Both aggregated and dissociated culture types exhibited synchronized bursting behaviors at days 17 and 18 on MEAs, and thousands of statistically significant functional connections were detected in both culture types. Aggregate cultures, however, demonstrate a tight linear relationship between the inter-neuron distance of neuronal pairs and the time delay of their functional connections, whereas dissociated cultures do not. These results suggest that ESC-derived aggregated cultures may reflect some of the spatiotemporal connectivity characteristics of \textit{in vivo} tissue and prove to be useful models for investigating potentially therapeutic populations of ESC-derived neurons \textit{in vitro}. NOVELTY AND SIGNIFICANCE: Previous investigations of stem cell-derived network connectivity on multi-electrode arrays (MEAs) have been limited to characterizations of bursting activity or broad averages of overall temporal network correlations, both of which overlook neuronal-level interactions. The use of spike sorting and short-time cross-correlation histograms, along with statistical techniques developed specifically for MEAs, allows for the characterization of functional connections between individual stem cell-derived neurons. This high-throughput connectivity assay will open doors for future examinations of the differences in functional network formation between various candidate stem cell-derived populations for spinal cord injury transplantation therapies, a critical inquiry into their therapeutic viability.},\n\tlanguage = {en},\n\turldate = {2020-11-11},\n\tjournal = {bioRxiv},\n\tauthor = {Gamble, Jeffrey R. and Zhang, Eric T. and Iyer, Nisha and Sakiyama-Elbert, Shelly and {Barbour, D. L.}},\n\tmonth = jul,\n\tyear = {2018},\n\tnote = {Publisher: Cold Spring Harbor Laboratory\nSection: New Results},\n\tpages = {377689},\n}\n\n
\n
\n\n\n
\n ABSTRACT: Stem cell transplantation holds great promise as a repair strategy following spinal cord injury. Embryonic stem cell (ESC) transplantation therapies have elicited encouraging though limited improvement in motor and sensory function with the use of heterogeneous mixtures of spinal cord neural progenitors and ESCs. Recently, transgenic lines of ESCs have been developed to allow for purification of specific candidate populations prior to transplantation, but the functional network connectivity of these populations and its relationship to recovery is difficult to examine with current technological limitations. In this study, we combine an ESC differentiation protocol, multi-electrode arrays (MEAs), and previously developed neuronal connectivity detection algorithms to develop an in vitro high-throughput assay of network connectivity in ESC-derived populations of neurons. Neuronal aggregation results in more consistent detection of individual neuronal activity than dissociated cultures. Both aggregated and dissociated culture types exhibited synchronized bursting behaviors at days 17 and 18 on MEAs, and thousands of statistically significant functional connections were detected in both culture types. Aggregate cultures, however, demonstrate a tight linear relationship between the inter-neuron distance of neuronal pairs and the time delay of their functional connections, whereas dissociated cultures do not. These results suggest that ESC-derived aggregated cultures may reflect some of the spatiotemporal connectivity characteristics of in vivo tissue and prove to be useful models for investigating potentially therapeutic populations of ESC-derived neurons in vitro. NOVELTY AND SIGNIFICANCE: Previous investigations of stem cell-derived network connectivity on multi-electrode arrays (MEAs) have been limited to characterizations of bursting activity or broad averages of overall temporal network correlations, both of which overlook neuronal-level interactions. The use of spike sorting and short-time cross-correlation histograms, along with statistical techniques developed specifically for MEAs, allows for the characterization of functional connections between individual stem cell-derived neurons. This high-throughput connectivity assay will open doors for future examinations of the differences in functional network formation between various candidate stem cell-derived populations for spinal cord injury transplantation therapies, a critical inquiry into their therapeutic viability.\n
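The detection approach named in the significance statement, short-time cross-correlation histograms screened against surrogates, can be sketched in toy form: histogram the spike-time lags between two units and ask whether the short-latency peak survives jittering. The synthetic spike trains, connection strength, latency, and jitter width below are all invented.

```python
# Sketch of cross-correlogram-based connection detection: count spike-time
# lags between two units, then ask whether the short-latency peak exceeds
# what jittered surrogates produce. A toy stand-in for the paper's pipeline.
import numpy as np

rng = np.random.default_rng(6)
T, rate = 600.0, 5.0                                   # seconds, Hz
pre = np.sort(rng.uniform(0, T, int(rate * T)))
# Postsynaptic unit: background spikes plus some spikes 2 ms after "pre".
post = np.sort(np.concatenate([
    rng.uniform(0, T, int(rate * T)),
    pre[rng.uniform(size=pre.size) < 0.15] + 0.002]))

def cch_peak(a, b, window=0.01, bins=20):
    lags = []
    for t in a:                                # binary search into sorted b
        j = np.searchsorted(b, [t - window, t + window])
        lags.extend(b[j[0]:j[1]] - t)
    hist, _ = np.histogram(lags, bins=bins, range=(-window, window))
    return hist.max()

obs = cch_peak(pre, post)
null = [cch_peak(pre + rng.uniform(-0.02, 0.02, pre.size), post)
        for _ in range(20)]                    # jittered surrogate trains
print("observed CCH peak:", obs, " null 95th percentile:",
      np.percentile(null, 95))
```

Jittering destroys millisecond-scale alignment while preserving firing rates, so a peak far above the surrogate distribution is evidence for a short-latency functional connection rather than shared slow co-modulation.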
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2017\n \n \n (7)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Psychometric Function Estimation by Probabilistic Classification.\n \n \n \n \n\n\n \n Song, X. D.; Garnett, R.; and Barbour, D. L.\n\n\n \n\n\n\n The Journal of the Acoustical Society of America, 141(4): 2513. 2017.\n \n\n\n\n
\n\n\n\n \n \n \"PsychometricPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{song_psychometric_2017,\n\ttitle = {Psychometric {Function} {Estimation} by {Probabilistic} {Classification}},\n\tvolume = {141},\n\tissn = {1520-8524},\n\turl = {https://asa.scitation.org/doi/abs/10.1121/1.4979594?af=R&},\n\tdoi = {10.1121/1.4979594},\n\tabstract = {Conventional psychometric function (PF) estimation involves fitting a parametric, unidimensional sigmoid to binary subject responses, which is not readily extendible to higher order PFs. This study presents a nonparametric, Bayesian, multidimensional PF estimator that also relies upon traditional binary subject responses. This technique is built upon probabilistic classification (PC), which attempts to ascertain the subdomains corresponding to each subject response as a function of multiple independent variables. Increased uncertainty in the location of class boundaries results in a greater spread in the PF estimate, which is similar to a parametric PF estimate with a lower slope. PC was evaluated on both one-dimensional (1D) and two-dimensional (2D) simulated auditory PFs across a variety of function shapes and sample numbers. In the 1D case, PC demonstrated equivalent performance to conventional maximum likelihood regression for the same number of simulated responses. In the 2D case, where the responses were distributed across two independent variables, PC accuracy closely matched the accuracy of 1D maximum likelihood estimation at discrete values of the second variable. The flexibility and scalability of the PC formulation make this an excellent option for estimating traditional PFs as well as more complex PFs, which have traditionally lacked rigorous estimation procedures.},\n\tlanguage = {eng},\n\tnumber = {4},\n\tjournal = {The Journal of the Acoustical Society of America},\n\tauthor = {Song, Xinyu D. and Garnett, Roman and {Barbour, D. L.}},\n\tyear = {2017},\n\tpmid = {28464646},\n\tkeywords = {Acoustic Stimulation, Auditory Perception, Bayes Theorem, Computer Simulation, Humans, Probability, Psychometrics, Reaction Time, Stochastic Processes, Time Factors},\n\tpages = {2513},\n}\n\n
\n
\n\n\n
\n Conventional psychometric function (PF) estimation involves fitting a parametric, unidimensional sigmoid to binary subject responses, which is not readily extendible to higher order PFs. This study presents a nonparametric, Bayesian, multidimensional PF estimator that also relies upon traditional binary subject responses. This technique is built upon probabilistic classification (PC), which attempts to ascertain the subdomains corresponding to each subject response as a function of multiple independent variables. Increased uncertainty in the location of class boundaries results in a greater spread in the PF estimate, which is similar to a parametric PF estimate with a lower slope. PC was evaluated on both one-dimensional (1D) and two-dimensional (2D) simulated auditory PFs across a variety of function shapes and sample numbers. In the 1D case, PC demonstrated equivalent performance to conventional maximum likelihood regression for the same number of simulated responses. In the 2D case, where the responses were distributed across two independent variables, PC accuracy closely matched the accuracy of 1D maximum likelihood estimation at discrete values of the second variable. The flexibility and scalability of the PC formulation make this an excellent option for estimating traditional PFs as well as more complex PFs, which have traditionally lacked rigorous estimation procedures.\n
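The conventional baseline that the probabilistic-classification method is compared against is a parametric maximum-likelihood fit. A minimal version, fitting a two-parameter logistic psychometric curve to binary responses by minimizing the negative log likelihood, looks like the sketch below; the parameterization and simulated data are illustrative assumptions.

```python
# Sketch of the conventional baseline: maximum-likelihood fit of a
# two-parameter logistic psychometric curve to binary responses.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
x = rng.uniform(-10, 10, 200)                         # stimulus levels
true_mid, true_spread = 1.5, 2.0
y = rng.uniform(size=200) < 1 / (1 + np.exp(-(x - true_mid) / true_spread))

def nll(theta):
    mid, log_spread = theta                           # fit spread in log space
    p = 1 / (1 + np.exp(-(x - mid) / np.exp(log_spread)))
    p = np.clip(p, 1e-9, 1 - 1e-9)                    # numerical safety
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

fit = minimize(nll, x0=[0.0, 0.0])
print("midpoint:", fit.x[0], " spread:", np.exp(fit.x[1]))
```

The limitation the abstract highlights is structural: this fit is tied to a fixed sigmoid along one input variable, whereas the probabilistic-classification estimator places no such parametric constraint and extends directly to multiple input dimensions.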
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Decoding Sound Level in the Marmoset Primary Auditory Cortex.\n \n \n \n \n\n\n \n Sun, W.; Marongelli, E. N.; Watkins, P. V.; and Barbour, D. L.\n\n\n \n\n\n\n Journal of Neurophysiology, 118(4): 2024–2033. 2017.\n \n\n\n\n
\n\n\n\n \n \n \"DecodingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{sun_decoding_2017,\n\ttitle = {Decoding {Sound} {Level} in the {Marmoset} {Primary} {Auditory} {Cortex}},\n\tvolume = {118},\n\tissn = {1522-1598},\n\turl = {https://journals.physiology.org/doi/full/10.1152/jn.00670.2016},\n\tdoi = {10.1152/jn.00670.2016},\n\tabstract = {Neurons that respond favorably to a particular sound level have been observed throughout the central auditory system, becoming steadily more common at higher processing areas. One theory about the role of these level-tuned or nonmonotonic neurons is the level-invariant encoding of sounds. To investigate this theory, we simulated various subpopulations of neurons by drawing from real primary auditory cortex (A1) neuron responses and surveyed their performance in forming different sound level representations. Pure nonmonotonic subpopulations did not provide the best level-invariant decoding; instead, mixtures of monotonic and nonmonotonic neurons provided the most accurate decoding. For level-fidelity decoding, the inclusion of nonmonotonic neurons slightly improved or did not change decoding accuracy until they constituted a high proportion. These results indicate that nonmonotonic neurons fill an encoding role complementary to, rather than alternate to, monotonic neurons.NEW \\& NOTEWORTHY Neurons with nonmonotonic rate-level functions are unique to the central auditory system. These level-tuned neurons have been proposed to account for invariant sound perception across sound levels. Through systematic simulations based on real neuron responses, this study shows that neuron populations perform sound encoding optimally when containing both monotonic and nonmonotonic neurons. The results indicate that instead of working independently, nonmonotonic neurons complement the function of monotonic neurons in different sound-encoding contexts.},\n\tlanguage = {eng},\n\tnumber = {4},\n\tjournal = {Journal of Neurophysiology},\n\tauthor = {Sun, Wensheng and Marongelli, Ellisha N. and Watkins, Paul V. and {Barbour, D. L.}},\n\tyear = {2017},\n\tpmid = {28701545},\n\tpmcid = {PMC5626894},\n\tkeywords = {Animals, Auditory Cortex, Auditory Perception, Callithrix, Neurons, auditory cortex, neural coding, nonmonotonic, primate, sound pressure level encoding},\n\tpages = {2024--2033},\n}\n\n
\n
\n\n\n
\n Neurons that respond favorably to a particular sound level have been observed throughout the central auditory system, becoming steadily more common at higher processing areas. One theory about the role of these level-tuned or nonmonotonic neurons is the level-invariant encoding of sounds. To investigate this theory, we simulated various subpopulations of neurons by drawing from real primary auditory cortex (A1) neuron responses and surveyed their performance in forming different sound level representations. Pure nonmonotonic subpopulations did not provide the best level-invariant decoding; instead, mixtures of monotonic and nonmonotonic neurons provided the most accurate decoding. For level-fidelity decoding, the inclusion of nonmonotonic neurons slightly improved or did not change decoding accuracy until they constituted a high proportion. These results indicate that nonmonotonic neurons fill an encoding role complementary to, rather than alternate to, monotonic neurons. NEW & NOTEWORTHY: Neurons with nonmonotonic rate-level functions are unique to the central auditory system. These level-tuned neurons have been proposed to account for invariant sound perception across sound levels. Through systematic simulations based on real neuron responses, this study shows that neuron populations perform sound encoding optimally when containing both monotonic and nonmonotonic neurons. The results indicate that instead of working independently, nonmonotonic neurons complement the function of monotonic neurons in different sound-encoding contexts.\n
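The simulation logic, building populations from monotonic and nonmonotonic rate-level functions and comparing the sound-level representations they support, can be caricatured as follows. The tuning-curve shapes, Poisson response model, and decoder are toy assumptions and should not be read as reproducing the paper's results, which were drawn from recorded A1 responses.

```python
# Caricature of the simulation logic: populations of monotonic (sigmoidal)
# and nonmonotonic (level-tuned) rate-level functions, each asked to
# support decoding of sound level from Poisson spike counts.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
levels = np.arange(0, 80, 10)                       # dB SPL categories

def population(n, kind):
    if kind == "monotonic":                         # saturating rate growth
        th = rng.uniform(0, 70, n)
        return lambda L: 20 / (1 + np.exp(-(L[:, None] - th) / 5.0))
    bl = rng.uniform(10, 70, n)                     # preferred ("best") level
    return lambda L: 20 * np.exp(-((L[:, None] - bl) / 15.0) ** 2)

def decode(tuning, trials=30):
    L = np.repeat(levels, trials)
    X = rng.poisson(tuning(L))                      # Poisson spike counts
    clf = LogisticRegression(max_iter=2000)
    return cross_val_score(clf, X, L, cv=5).mean()

for kind in ("monotonic", "nonmonotonic"):
    print(kind, "population level-decoding accuracy:",
          round(decode(population(40, kind)), 2))
```

Mixing the two tuning types into one feature matrix is the natural next step, mirroring the paper's finding that mixed populations decode best.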
\n\n\n
\n\n\n
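
The decoding experiment this entry describes can be illustrated compactly. Below is a minimal sketch with assumed sigmoidal (monotonic) and Gaussian (nonmonotonic) rate-level functions and a generic Poisson maximum-likelihood readout standing in for the paper's decoders; none of the tuning shapes or parameters come from the paper.

# Hypothetical sketch (not the paper's code): Poisson maximum-likelihood decoding of
# sound level from simulated monotonic and nonmonotonic rate-level functions.
import numpy as np

rng = np.random.default_rng(0)
levels = np.arange(0, 81, 10)             # candidate sound levels in dB SPL

def monotonic_tuning(thresh):             # sigmoidal rate-level function
    return 5 + 45 / (1 + np.exp(-(levels - thresh) / 8.0))

def nonmonotonic_tuning(best):            # level-tuned (Gaussian) rate-level function
    return 5 + 45 * np.exp(-0.5 * ((levels - best) / 12.0) ** 2)

def decode_accuracy(frac_nonmono, n_neurons=60, n_trials=200):
    """Fraction of trials in which Poisson ML decoding recovers the true level."""
    tuning = np.array([
        nonmonotonic_tuning(rng.uniform(10, 70)) if rng.random() < frac_nonmono
        else monotonic_tuning(rng.uniform(10, 70))
        for _ in range(n_neurons)
    ])                                    # shape: (n_neurons, n_levels)
    correct = 0
    for _ in range(n_trials):
        true = rng.integers(len(levels))
        counts = rng.poisson(tuning[:, true])
        # Poisson log likelihood of the observed counts under each candidate level
        loglik = (counts[:, None] * np.log(tuning) - tuning).sum(axis=0)
        correct += loglik.argmax() == true
    return correct / n_trials

for frac in (0.0, 0.5, 1.0):
    print(f"nonmonotonic fraction {frac:.1f}: accuracy {decode_accuracy(frac):.2f}")

Varying frac_nonmono makes it easy to compare pure subpopulations against mixtures, which is the comparison the abstract turns on.
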

Rate, not Selectivity, Determines Neuronal Population Coding Accuracy in Auditory Cortex.
Sun, W.; and Barbour, D. L.
PLoS Biology, 15(11): e2002459. November 2017.
@article{sun_rate_2017,
  title = {Rate, not {Selectivity}, {Determines} {Neuronal} {Population} {Coding} {Accuracy} in {Auditory} {Cortex}},
  volume = {15},
  issn = {1545-7885},
  url = {https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.2002459},
  doi = {10.1371/journal.pbio.2002459},
  abstract = {The notion that neurons with higher selectivity carry more information about external sensory inputs is widely accepted in neuroscience. High-selectivity neurons respond to a narrow range of sensory inputs, and thus would be considered highly informative by rejecting a large proportion of possible inputs. In auditory cortex, neuronal responses are less selective immediately after the onset of a sound and then become highly selective in the following sustained response epoch. These 2 temporal response epochs have thus been interpreted to encode first the presence and then the content of a sound input. Contrary to predictions from that prevailing theory, however, we found that the neural population conveys similar information about sound input across the 2 epochs in spite of the neuronal selectivity differences. The amount of information encoded turns out to be almost completely dependent upon the total number of population spikes in the read-out window for this system. Moreover, inhomogeneous Poisson spiking behavior is sufficient to account for this property. These results imply a novel principle of sensory encoding that is potentially shared widely among multiple sensory systems.},
  language = {eng},
  number = {11},
  journal = {PLoS Biology},
  author = {Sun, Wensheng and {Barbour, D. L.}},
  month = nov,
  year = {2017},
  pmid = {29091725},
  pmcid = {PMC5683657},
  keywords = {Acoustic Stimulation, Action Potentials, Animals, Auditory Cortex, Auditory Pathways, Auditory Perception, Callithrix, Neurons, Sound Localization},
  pages = {e2002459},
}
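
The claim that decoded information tracks total population spike count rather than selectivity is easy to probe in simulation. The sketch below compares a broadly tuned, high-rate "onset" epoch with a sharply tuned "sustained" epoch at matched and reduced mean rates; the Gaussian tuning, Poisson spiking, and maximum-likelihood readout are assumptions, not the paper's analysis.

# Hypothetical sketch: stimulus decoding from inhomogeneous Poisson spike counts,
# comparing a weakly selective high-rate epoch against a sharply selective epoch
# matched (or not) in total population spike count.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_stimuli, n_trials = 80, 8, 400

def tuning_matrix(selectivity, mean_rate):
    """Circular Gaussian tuning; larger `selectivity` means narrower curves.
    The matrix is rescaled so the population mean rate equals `mean_rate`."""
    pref = rng.uniform(0, n_stimuli, n_neurons)[:, None]
    stim = np.arange(n_stimuli)[None, :]
    d = np.minimum(np.abs(stim - pref), n_stimuli - np.abs(stim - pref))
    t = np.exp(-0.5 * (d * selectivity) ** 2) + 0.05
    return t * (mean_rate / t.mean())

def accuracy(tuning):
    correct = 0
    for _ in range(n_trials):
        s = rng.integers(n_stimuli)
        counts = rng.poisson(tuning[:, s])
        loglik = (counts[:, None] * np.log(tuning) - tuning).sum(axis=0)
        correct += loglik.argmax() == s
    return correct / n_trials

onset = tuning_matrix(selectivity=0.4, mean_rate=20)      # broad but vigorous
sustained = tuning_matrix(selectivity=1.5, mean_rate=20)  # sharp, same total rate
print("matched total spikes :", accuracy(onset), accuracy(sustained))
print("sustained, 1/4 spikes:", accuracy(tuning_matrix(1.5, mean_rate=5)))
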

Engaging and Disengaging Recurrent Inhibition Coincides with Sensing and Unsensing of a Sensory Stimulus.
Saha, D.; Sun, W.; Li, C.; Nizampatnam, S.; Padovano, W.; Chen, Z.; Chen, A.; Altan, E.; Lo, R.; Barbour, D. L.; and Raman, B.
Nature Communications, 8: 15413. 2017.
@article{saha_engaging_2017,
  title = {Engaging and {Disengaging} {Recurrent} {Inhibition} {Coincides} with {Sensing} and {Unsensing} of a {Sensory} {Stimulus}},
  volume = {8},
  issn = {2041-1723},
  url = {https://www.nature.com/articles/ncomms15413},
  doi = {10.1038/ncomms15413},
  abstract = {Even simple sensory stimuli evoke neural responses that are dynamic and complex. Are the temporally patterned neural activities important for controlling the behavioral output? Here, we investigated this issue. Our results reveal that in the insect antennal lobe, due to circuit interactions, distinct neural ensembles are activated during and immediately following the termination of every odorant. Such non-overlapping response patterns are not observed even when the stimulus intensity or identities were changed. In addition, we find that ON and OFF ensemble neural activities differ in their ability to recruit recurrent inhibition, entrain field-potential oscillations and more importantly in their relevance to behaviour (initiate versus reset conditioned responses). Notably, we find that a strikingly similar strategy is also used for encoding sound onsets and offsets in the marmoset auditory cortex. In sum, our results suggest a general approach where recurrent inhibition is associated with stimulus 'recognition' and 'derecognition'.},
  language = {eng},
  journal = {Nature Communications},
  author = {Saha, Debajit and Sun, Wensheng and Li, Chao and Nizampatnam, Srinath and Padovano, William and Chen, Zhengdao and Chen, Alex and Altan, Ege and Lo, Ray and {Barbour, D. L.} and Raman, Baranidharan},
  year = {2017},
  pmid = {28534502},
  pmcid = {PMC5457525},
  keywords = {Acoustic Stimulation, Action Potentials, Algorithms, Animals, Auditory Cortex, Behavior, Animal, Callithrix, Computer Simulation, Female, Grasshoppers, Male, Models, Neurological, Models, Statistical, Neural Inhibition, Neurons, Normal Distribution, Odorants, Olfactory Pathways, Probability, Smell, Video Recording},
  pages = {15413},
}

Contextual Effects of Noise on Vocalization Encoding in Primary Auditory Cortex.
Ni, R.; Bender, D. A.; Shanechi, A. M.; Gamble, J. R.; and Barbour, D. L.
Journal of Neurophysiology, 117(2): 713–727. 2017.
@article{ni_contextual_2017,
  title = {Contextual {Effects} of {Noise} on {Vocalization} {Encoding} in {Primary} {Auditory} {Cortex}},
  volume = {117},
  issn = {1522-1598},
  url = {https://journals.physiology.org/doi/full/10.1152/jn.00476.2016},
  doi = {10.1152/jn.00476.2016},
  abstract = {Robust auditory perception plays a pivotal function for processing behaviorally relevant sounds, particularly with distractions from the environment. The neuronal coding enabling this ability, however, is still not well understood. In this study, we recorded single-unit activity from the primary auditory cortex (A1) of awake marmoset monkeys (Callithrix jacchus) while delivering conspecific vocalizations degraded by two different background noises: broadband white noise and vocalization babble. Noise effects on neural representation of target vocalizations were quantified by measuring the responses' similarity to those elicited by natural vocalizations as a function of signal-to-noise ratio. A clustering approach was used to describe the range of response profiles by reducing the population responses to a summary of four response classes (robust, balanced, insensitive, and brittle) under both noise conditions. This clustering approach revealed that, on average, approximately two-thirds of the neurons change their response class when encountering different noises. Therefore, the distortion induced by one particular masking background in single-unit responses is not necessarily predictable from that induced by another, suggesting the low likelihood of a unique group of noise-invariant neurons across different background conditions in A1. Regarding noise influence on neural activities, the brittle response group showed addition of spiking activity both within and between phrases of vocalizations relative to clean vocalizations, whereas the other groups generally showed spiking activity suppression within phrases, and the alteration between phrases was noise dependent. Overall, the variable single-unit responses, yet consistent response types, imply that primate A1 performs scene analysis through the collective activity of multiple neurons. NEW \& NOTEWORTHY: The understanding of where and how auditory scene analysis is accomplished is of broad interest to neuroscientists. In this paper, we systematically investigated neuronal coding of multiple vocalizations degraded by two distinct noises at various signal-to-noise ratios in nonhuman primates. In the process, we uncovered heterogeneity of single-unit representations for different auditory scenes yet homogeneity of responses across the population.},
  language = {eng},
  number = {2},
  journal = {Journal of Neurophysiology},
  author = {Ni, Ruiye and Bender, David A. and Shanechi, Amirali M. and Gamble, Jeffrey R. and {Barbour, D. L.}},
  year = {2017},
  pmid = {27881720},
  pmcid = {PMC5296407},
  keywords = {Acoustic Stimulation, Acoustics, Action Potentials, Animals, Auditory Cortex, Auditory Perception, Callithrix, Female, Neurons, Noise, Vocalization, Animal, noise interference, primary auditory cortex, signal-to-noise ratio, single unit, vocalizations},
  pages = {713--727},
}
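
The response-class analysis rests on clustering similarity-versus-SNR profiles. A toy version with k-means on synthetic profiles looks like the sketch below; the four prototypes, the SNR grid, and the noise level are invented for illustration, and the paper's clustering details are not reproduced.

# Hypothetical sketch: grouping simulated response-similarity profiles (similarity to
# the clean-vocalization response as a function of SNR) into response classes.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
snrs = np.array([-10, -5, 0, 5, 10, 20])   # dB, assumed grid

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Four idealized profiles, loosely named after the paper's classes
prototypes = np.vstack([
    sigmoid((snrs + 8) / 3.0),                # robust: near clean even at low SNR
    sigmoid(snrs / 3.0),                      # balanced: tracks SNR
    np.full_like(snrs, 0.4, dtype=float),     # insensitive: flat, mediocre
    sigmoid((snrs - 12) / 2.0),               # brittle: collapses except at high SNR
])
profiles = np.repeat(prototypes, 50, axis=0) + rng.normal(0, 0.08, (200, len(snrs)))

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(profiles)
for k in range(4):
    print(f"class {k}: n={np.sum(labels == k)}, mean profile:",
          np.round(profiles[labels == k].mean(axis=0), 2))
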

Advanced Inferential Medicine℠.
Barbour, D. L.
Technical Report, OSF Preprints, December 2017.
@techreport{barbour_d_l_advanced_2017,
  title = {Advanced {Inferential} {Medicine}℠},
  url = {https://osf.io/4nmkv/},
  abstract = {Traditional medical inference requires explicit determination of a patient's ailment prior to deciding the best treatment option. Evidence-based medicine informs these decisions from the outcomes of previous patients with similar ailments who received similar treatments. Precision medicine seeks to expand this nomothetic framework with many more potential diagnoses, the total possible number of which is inherently limited by the number of similar previous patients available for reference. The concept of making patient-care decisions using idiographic information unique to a particular patient may be a familiar concept to practicing clinicians, but it has no formal role within evidence-based medicine. The collective result is constrained inferential capacity of the dominant medical philosophy, leading to limited effectiveness of individual patient treatment decisions. A means of combining nomothetic and idiographic inference to optimize individual treatment outcomes would be a welcome addition to the modern medical armamentarium. Novel idiographic search algorithms informed by nomothetic prior beliefs can construct predictive models about individual patients that provide rigorous clinical decision support. This advanced medical inference framework exploits modern machine learning to generalize the concept of diagnosis and to make effective treatment decisions with or without definitive etiological understanding of a patient's ailment.},
  urldate = {2020-11-11},
  institution = {OSF Preprints},
  author = {{Barbour, D. L.}},
  month = dec,
  year = {2017},
  doi = {10.31219/osf.io/4nmkv},
  keywords = {Analytical, Diagnostic and Therapeutic Techniques and Equipment, Diagnostics, Investigative Techniques, Machine Learning, Medical Inference, Medical Philosophy, Medicine, Medicine and Health Sciences, Other Analytical, Patient Workup, Search Algorithm},
}

Electrophysiological Sequelae of Hemispherotomy in Ipsilateral Human Cortex.
Hawasli, A. H.; Chacko, R.; Szrama, N. P.; Bundy, D. T.; Pahwa, M.; Yarbrough, C. K.; Dlouhy, B. J.; Limbrick, D. D.; Barbour, D. L.; Smyth, M. D.; and Leuthardt, E. C.
Frontiers in Human Neuroscience, 11: 149. 2017.
@article{hawasli_electrophysiological_2017,
  title = {Electrophysiological {Sequelae} of {Hemispherotomy} in {Ipsilateral} {Human} {Cortex}},
  volume = {11},
  issn = {1662-5161},
  doi = {10.3389/fnhum.2017.00149},
  abstract = {Objectives: Hemispheric disconnection has been used as a treatment of medically refractory epilepsy and evolved from anatomic hemispherectomy to functional hemispherectomies to hemispherotomies. The hemispherotomy procedure involves disconnection of an entire hemisphere with limited tissue resection and is reserved for medically-refractory epilepsy due to diffuse hemispheric disease. Although it is thought to be effective by preventing seizures from spreading to the contralateral hemisphere, the electrophysiological effects of a hemispherotomy on the ipsilateral hemisphere remain poorly defined. The objective of this study was to evaluate the effects of hemispherotomy on the electrophysiologic dynamics in peri-stroke and dysplastic cortex. Methods: Intraoperative electrocorticography (ECoG) was recorded from ipsilateral cortex in 5 human subjects with refractory epilepsy before and after hemispherotomy. Power spectral density, mutual information, and phase-amplitude coupling were measured from the ECoG signals. Results: Epilepsy was a result of remote perinatal stroke in three of the subjects. In two of the subjects, seizures were a consequence of dysplastic tissue: one with hemimegalencephaly and the second with Rasmussen's encephalitis. Hemispherotomy reduced broad-band power spectral density in peri-stroke cortex. Meanwhile, hemispherotomy increased power in the low and high frequency bands for dysplastic cortex. Functional connectivity was increased in lower frequency bands in peri-stroke tissue but not affected in dysplastic tissue after hemispherotomy. Finally, hemispherotomy reduced band-specific phase-amplitude coupling in peri-stroke cortex but not dysplastic cortex. Significance: Disconnecting deep subcortical connections to peri-stroke cortex via a hemispherotomy attenuates power of oscillations and impairs the transfer of information from large-scale distributed brain networks to the local cortex. Hence, hemispherotomy reduces heterogeneity between neighboring cortex while impairing phase-amplitude coupling. In contrast, dysfunctional networks in dysplastic cortex lack the normal connectivity with distant networks. Therefore hemispherotomy does not produce the same effects.},
  language = {eng},
  journal = {Frontiers in Human Neuroscience},
  author = {Hawasli, Ammar H. and Chacko, Ravi and Szrama, Nicholas P. and Bundy, David T. and Pahwa, Mrinal and Yarbrough, Chester K. and Dlouhy, Brian J. and Limbrick, David D. and {Barbour, D. L.} and Smyth, Matthew D. and Leuthardt, Eric C.},
  year = {2017},
  pmid = {28424599},
  pmcid = {PMC5371676},
  keywords = {cortical physiology, electrocorticography, epilepsy, hemispherotomy, oscillations},
  pages = {149},
}
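
Of the three measures named in the abstract, phase-amplitude coupling is the least standard; one widely used estimator is the mean-vector-length modulation index of Canolty and colleagues. The sketch below applies it to a synthetic theta-modulated gamma signal. Band edges, filter order, and the normalization are assumptions, and the paper's exact estimator may differ.

# Hypothetical sketch: mean-vector-length phase-amplitude coupling on a synthetic
# ECoG-like signal whose 80 Hz amplitude rides the phase of a 6 Hz rhythm.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)                        # 6 Hz "low" rhythm
gamma = (1 + theta) * np.sin(2 * np.pi * 80 * t) * 0.5   # amplitude-modulated gamma
sig = theta + gamma + 0.5 * np.random.default_rng(3).normal(size=t.size)

def bandpass(x, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

phase = np.angle(hilbert(bandpass(sig, 4, 8)))           # low-frequency phase
amp = np.abs(hilbert(bandpass(sig, 70, 90)))             # high-frequency amplitude
# length of the amplitude-weighted mean phase vector, normalized by mean amplitude
mvl = np.abs(np.mean(amp * np.exp(1j * phase))) / amp.mean()
print(f"normalized mean vector length (PAC strength): {mvl:.3f}")
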

2016 (4)

Auditory Contributions to Maintaining Balance.
Stevens, M. N.; Barbour, D. L.; Gronski, M. P.; and Hullar, T. E.
Journal of Vestibular Research: Equilibrium & Orientation, 26(5-6): 433–438. 2016.
@article{stevens_auditory_2016,
  title = {Auditory {Contributions} to {Maintaining} {Balance}},
  volume = {26},
  issn = {1878-6464},
  doi = {10.3233/VES-160599},
  abstract = {Maintaining balance relies on integration of inputs from the visual, vestibular, and proprioceptive systems. The auditory system has not been credited with a similar contributory role, despite its ability to provide spatial orienting cues with extreme speed and accuracy. Here, we determined the ability of external auditory signals to reduce postural sway, measured as the root-mean-square velocity of center of pressure of a standing subject, in a series of subjects with varying levels of imbalance standing in the dark. The maximum root-mean-square center of pressure among our subjects decreased from 7.0 cm/sec in silence to 4.7 cm/sec with the addition of external sound. The addition of sound allowed subjects to decrease sway by 41 percent. The amount of improvement due to sound was 54\% of the amount of improvement observed in postural sway when visual cues only were provided to subjects standing in silence. These data support the significant effect of the auditory system in providing balance-related cues and suggest that interventions such as hearing aids or cochlear implants may be useful in improving postural stability and reducing falls.},
  language = {eng},
  number = {5-6},
  journal = {Journal of Vestibular Research: Equilibrium \& Orientation},
  author = {Stevens, Madelyn N. and {Barbour, D. L.} and Gronski, Meredith P. and Hullar, Timothy E.},
  year = {2016},
  pmid = {28262648},
  keywords = {Adolescent, Adult, Aged, Auditory Perception, Balance, Child, Cues, Female, Hearing Tests, Humans, Male, Middle Aged, Photic Stimulation, Postural Balance, Proprioception, Vestibular Diseases, Vestibular Function Tests, Young Adult, audition, hearing, posturography, proprioception, stability, sway, vestibular},
  pages = {433--438},
}
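
The sway measure used here, root-mean-square velocity of the center of pressure (COP), reduces to a few lines given a sampled COP trajectory. A sketch under an assumed 100 Hz force-plate sampling rate and a synthetic random-walk trajectory:

# Hypothetical sketch: RMS velocity of the center of pressure from posturography data.
import numpy as np

fs = 100.0                                    # assumed force-plate sampling rate (Hz)
rng = np.random.default_rng(4)
t = np.arange(0, 30, 1 / fs)
# synthetic COP trajectory (cm) in anterior-posterior and medio-lateral axes
cop = np.cumsum(rng.normal(0, 0.02, (t.size, 2)), axis=0)

velocity = np.gradient(cop, 1 / fs, axis=0)   # cm/s, per axis
speed = np.linalg.norm(velocity, axis=1)      # resultant COP speed
rms_velocity = np.sqrt(np.mean(speed ** 2))
print(f"RMS COP velocity: {rms_velocity:.2f} cm/s")
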

Spontaneous Activity is Correlated with Coding Density in Primary Auditory Cortex.
Bender, D. A.; Ni, R.; and Barbour, D. L.
Journal of Neurophysiology, 116(6): 2789–2798. 2016.
@article{bender_spontaneous_2016,
  title = {Spontaneous {Activity} is {Correlated} with {Coding} {Density} in {Primary} {Auditory} {Cortex}},
  volume = {116},
  issn = {1522-1598},
  url = {https://journals.physiology.org/doi/full/10.1152/jn.00474.2016},
  doi = {10.1152/jn.00474.2016},
  abstract = {Sensory neurons across sensory modalities and specific processing areas have diverse levels of spontaneous firing rates (SFRs) in the absence of sensory stimuli. However, the functional significance of this spontaneous activity is not well understood. Previous studies in the auditory system have demonstrated that different levels of spontaneous activity are correlated with a variety of physiological and anatomic properties, suggesting that neurons with differing SFRs make unique contributions to the encoding of auditory stimuli. Additionally, altered SFRs are a correlate of tinnitus, arising in several auditory areas after exposure to ototoxic substances and noise trauma. In this study, we recorded single-unit activity from primary auditory cortex of awake marmoset monkeys while delivering wide-band random-spectrum stimuli and white Gaussian noise (WGN) to examine any divergences in stimulus encoding properties across SFR classes. We found that higher levels of spontaneous activity were associated with both higher levels of activation relative to suppression across a variety of wide-band stimuli and higher driven rates in response to WGN. Moreover, response latencies to WGN were negatively correlated with the level of activation in response to both stimulus types. These findings are consistent with a novel view of the role spontaneous spiking may play during normal stimulus processing in primary auditory cortex and how it may malfunction in cases of tinnitus.},
  language = {eng},
  number = {6},
  journal = {Journal of Neurophysiology},
  author = {Bender, David A. and Ni, Ruiye and {Barbour, D. L.}},
  year = {2016},
  pmid = {27707812},
  pmcid = {PMC5155035},
  keywords = {Acoustic Stimulation, Action Potentials, Animals, Auditory Cortex, Callithrix, Noise, Normal Distribution, Reaction Time, Sensory Receptor Cells, Statistics, Nonparametric, Wakefulness, marmoset monkey, primary auditory cortex, single-unit recording, sparse coding, spontaneous activity, tinnitus},
  pages = {2789--2798},
}

Influence of White and Gray Matter Connections on Endogenous Human Cortical Oscillations.
Hawasli, A. H.; Kim, D.; Ledbetter, N. M.; Dahiya, S.; Barbour, D. L.; and Leuthardt, E. C.
Frontiers in Human Neuroscience, 10: 330. 2016.
@article{hawasli_influence_2016,
  title = {Influence of {White} and {Gray} {Matter} {Connections} on {Endogenous} {Human} {Cortical} {Oscillations}},
  volume = {10},
  issn = {1662-5161},
  doi = {10.3389/fnhum.2016.00330},
  abstract = {Brain oscillations reflect changes in electrical potentials summated across neuronal populations. Low- and high-frequency rhythms have different modulation patterns. Slower rhythms are spatially broad, while faster rhythms are more local. From this observation, we hypothesized that low- and high-frequency oscillations reflect white- and gray-matter communications, respectively, and synchronization between low-frequency phase with high-frequency amplitude represents a mechanism enabling distributed brain-networks to coordinate local processing. Testing this common understanding, we selectively disrupted white or gray matter connections to human cortex while recording surface field potentials. Counter to our original hypotheses, we found that cortex consists of independent oscillatory-units (IOUs) that maintain their own complex endogenous rhythm structure. IOUs are differentially modulated by white and gray matter connections. White-matter connections maintain topographical anatomic heterogeneity (i.e., separable processing in cortical space) and gray-matter connections segregate cortical synchronization patterns (i.e., separable temporal processing through phase-power coupling). Modulation of distinct oscillatory modules enables the functional diversity necessary for complex processing in the human brain.},
  language = {eng},
  journal = {Frontiers in Human Neuroscience},
  author = {Hawasli, Ammar H. and Kim, DoHyun and Ledbetter, Noah M. and Dahiya, Sonika and {Barbour, D. L.} and Leuthardt, Eric C.},
  year = {2016},
  pmid = {27445767},
  pmcid = {PMC4923146},
  keywords = {cortical oscillations, cortical physiology, electrocorticography, human neuroscience, neurophysiology},
  pages = {330},
}

The Effects of Auditory Contrast Tuning upon Speech Intelligibility.
Killian, N. J.; Watkins, P. V.; Davidson, L. S.; and Barbour, D. L.
Frontiers in Psychology, 7: 1145. 2016.
@article{killian_effects_2016,
  title = {The {Effects} of {Auditory} {Contrast} {Tuning} upon {Speech} {Intelligibility}},
  volume = {7},
  issn = {1664-1078},
  doi = {10.3389/fpsyg.2016.01145},
  abstract = {We have previously identified neurons tuned to spectral contrast of wideband sounds in auditory cortex of awake marmoset monkeys. Because additive noise alters the spectral contrast of speech, contrast-tuned neurons, if present in human auditory cortex, may aid in extracting speech from noise. Given that this cortical function may be underdeveloped in individuals with sensorineural hearing loss, incorporating biologically-inspired algorithms into external signal processing devices could provide speech enhancement benefits to cochlear implantees. In this study we first constructed a computational signal processing algorithm to mimic auditory cortex contrast tuning. We then manipulated the shape of contrast channels and evaluated the intelligibility of reconstructed noisy speech using a metric to predict cochlear implant user perception. Candidate speech enhancement strategies were then tested in cochlear implantees with a hearing-in-noise test. Accentuation of intermediate contrast values or all contrast values improved computed intelligibility. Cochlear implant subjects showed significant improvement in noisy speech intelligibility with a contrast shaping procedure.},
  language = {eng},
  journal = {Frontiers in Psychology},
  author = {Killian, Nathan J. and Watkins, Paul V. and Davidson, Lisa S. and {Barbour, D. L.}},
  year = {2016},
  pmid = {27555826},
  pmcid = {PMC4977316},
  keywords = {auditory cortex, cochlear implant, human, noise reduction, primate},
  pages = {1145},
}
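
As a rough illustration of contrast shaping, the sketch below treats the per-frame deviation of the log-magnitude STFT from a smoothed spectral envelope as "contrast" and rescales it before resynthesis. This is a generic stand-in chosen for brevity, not the biologically inspired algorithm the paper evaluates.

# Hypothetical sketch: crude spectral-contrast shaping of a waveform via the STFT.
import numpy as np
from scipy.signal import stft, istft
from scipy.ndimage import uniform_filter1d

fs = 16000
rng = np.random.default_rng(5)
x = rng.normal(size=fs)                  # stand-in for a speech waveform

f, t, Z = stft(x, fs=fs, nperseg=512)
logmag = np.log(np.abs(Z) + 1e-12)
envelope = uniform_filter1d(logmag, size=15, axis=0)   # smooth across frequency
contrast = logmag - envelope                           # spectral peaks vs. troughs

gain = 1.5                                             # >1 accentuates contrast
shaped = np.exp(envelope + gain * contrast) * np.exp(1j * np.angle(Z))
_, y = istft(shaped, fs=fs, nperseg=512)
print("output length:", y.size)
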

2015 (6)

Machine Learning AudioGram.
Barbour, D. L.
Bonauria, 2015.
@misc{barbour_d_l_machine_2015,
  title = {Machine {Learning} {AudioGram}},
  url = {beta.bonauria.com},
  publisher = {Bonauria},
  author = {{Barbour, D. L.}},
  year = {2015},
}

Fast, Continuous Audiogram Estimation Using Machine Learning.
Song, X. D.; Wallace, B. M.; Gardner, J. R.; Ledbetter, N. M.; Weinberger, K. Q.; and Barbour, D. L.
Ear and Hearing, 36(6): e326–335. December 2015.
@article{song_fast_2015,
  title = {Fast, {Continuous} {Audiogram} {Estimation} {Using} {Machine} {Learning}},
  volume = {36},
  issn = {1538-4667},
  url = {https://pubmed.ncbi.nlm.nih.gov/26258575/},
  doi = {10.1097/AUD.0000000000000186},
  abstract = {OBJECTIVES: Pure-tone audiometry has been a staple of hearing assessments for decades. Many different procedures have been proposed for measuring thresholds with pure tones by systematically manipulating intensity one frequency at a time until a discrete threshold function is determined. The authors have developed a novel nonparametric approach for estimating a continuous threshold audiogram using Bayesian estimation and machine learning classification. The objective of this study was to assess the accuracy and reliability of this new method relative to a commonly used threshold measurement technique. DESIGN: The authors performed air conduction pure-tone audiometry on 21 participants between the ages of 18 and 90 years with varying degrees of hearing ability. Two repetitions of automated machine learning audiogram estimation and one repetition of conventional modified Hughson-Westlake ascending-descending audiogram estimation were acquired by an audiologist. The estimated hearing thresholds of these two techniques were compared at standard audiogram frequencies (i.e., 0.25, 0.5, 1, 2, 4, 8 kHz). RESULTS: The two threshold estimate methods delivered very similar estimates at standard audiogram frequencies. Specifically, the mean absolute difference between estimates was 4.16 ± 3.76 dB HL. The mean absolute difference between repeated measurements of the new machine learning procedure was 4.51 ± 4.45 dB HL. These values compare favorably with those of other threshold audiogram estimation procedures. Furthermore, the machine learning method generated threshold estimates from significantly fewer samples than the modified Hughson-Westlake procedure while returning a continuous threshold estimate as a function of frequency. CONCLUSIONS: The new machine learning audiogram estimation technique produces continuous threshold audiogram estimates accurately, reliably, and efficiently, making it a strong candidate for widespread application in clinical and research audiometry.},
  language = {eng},
  number = {6},
  journal = {Ear and Hearing},
  author = {Song, Xinyu D. and Wallace, Brittany M. and Gardner, Jacob R. and Ledbetter, Noah M. and Weinberger, Kilian Q. and {Barbour, D. L.}},
  month = dec,
  year = {2015},
  pmid = {26258575},
  pmcid = {PMC4709018},
  keywords = {Adolescent, Adult, Aged, Aged, 80 and over, Audiometry, Pure-Tone, Bayes Theorem, Female, Hearing Loss, Humans, Machine Learning, Male, Middle Aged, Reproducibility of Results, Young Adult},
  pages = {e326--335},
}
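
The machine learning audiogram recasts threshold estimation as probabilistic classification over the (frequency, intensity) plane. A minimal stand-in using an off-the-shelf Gaussian process classifier is sketched below; the kernel, the simulated listener, and the 50% read-out are assumptions, and the paper's own model is not reproduced here.

# Hypothetical sketch: continuous threshold audiogram from binary detection data via
# Gaussian process classification over (log-frequency, intensity).
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(6)

def true_threshold(logf):                # assumed ground-truth audiogram (dB HL)
    return 10 + 25 * (logf - np.log2(250)) / np.log2(8000 / 250)

# random tone presentations: (log2 frequency, intensity in dB HL)
logf = rng.uniform(np.log2(250), np.log2(8000), 150)
intensity = rng.uniform(-10, 60, 150)
p_heard = 1 / (1 + np.exp(-(intensity - true_threshold(logf)) / 3.0))
heard = rng.random(150) < p_heard        # simulated listener's responses

X = np.column_stack([logf, intensity])
gpc = GaussianProcessClassifier(kernel=1.0 * RBF([1.0, 10.0])).fit(X, heard)

# threshold estimate at 1 kHz: lowest intensity with P(heard) >= 0.5
grid = np.column_stack([np.full(141, np.log2(1000)), np.linspace(-10, 60, 141)])
p = gpc.predict_proba(grid)[:, 1]
print("estimated 1 kHz threshold (dB HL):", grid[np.argmax(p >= 0.5), 1])

Because the classifier is defined over the whole plane, the same fit yields a threshold at any frequency, which is the sense in which the estimate is continuous.
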

Psychophysical Detection Testing with Bayesian Active Learning.
Gardner, J. R.; Song, X.; Weinberger, K. Q.; Barbour, D. L.; and Cunningham, J. P.
In Proceedings of the Thirty-First Conference on Uncertainty in Artificial Intelligence, of UAI'15, pages 286–297, Amsterdam, Netherlands, July 2015. AUAI Press.
@inproceedings{gardner_psychophysical_2015,
  address = {Amsterdam, Netherlands},
  series = {{UAI}'15},
  title = {Psychophysical {Detection} {Testing} with {Bayesian} {Active} {Learning}},
  isbn = {978-0-9966431-0-8},
  url = {https://dl.acm.org/doi/abs/10.5555/3020847.3020878},
  abstract = {Psychophysical detection tests are ubiquitous in the study of human sensation and the diagnosis and treatment of virtually all sensory impairments. In many of these settings, the goal is to recover, from a series of binary observations from a human subject, the latent function that describes the discriminability of a sensory stimulus over some relevant domain. The auditory detection test, for example, seeks to understand a subject's likelihood of hearing sounds as a function of frequency and amplitude. Conventional methods for performing these tests involve testing stimuli on a pre-determined grid. This approach not only samples at very uninformative locations, but also fails to learn critical features of a subject's latent discriminability function. Here we advance active learning with Gaussian processes to the setting of psychophysical testing. We develop a model that incorporates strong prior knowledge about the class of stimuli, we derive a sensible method for choosing sample points, and we demonstrate how to evaluate this model efficiently. Finally, we develop a novel likelihood that enables testing of multiple stimuli simultaneously. We evaluate our method in both simulated and real auditory detection tests, demonstrating the merit of our approach.},
  urldate = {2020-11-10},
  booktitle = {Proceedings of the {Thirty}-{First} {Conference} on {Uncertainty} in {Artificial} {Intelligence}},
  publisher = {AUAI Press},
  author = {Gardner, Jacob R. and Song, Xinyu and Weinberger, Kilian Q. and {Barbour, D. L.} and Cunningham, John P.},
  month = jul,
  year = {2015},
  pages = {286--297},
}
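
The active-learning ingredient can be sketched as an uncertainty-sampling loop: refit the model after every response and present next the candidate tone whose predicted detection probability is closest to 0.5, i.e., maximum Bernoulli entropy. The paper's acquisition rule and multi-stimulus likelihood are more sophisticated; everything below, from the fixed kernel to the simulated listener, is an assumption.

# Hypothetical sketch: uncertainty sampling for an auditory detection test.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(7)
cand = np.column_stack([rng.uniform(np.log2(250), np.log2(8000), 500),
                        rng.uniform(-10, 60, 500)])    # candidate (log2 f, dB) tones

def respond(x):                                        # simulated listener
    thr = 10 + 25 * (x[0] - np.log2(250)) / np.log2(8000 / 250)
    return rng.random() < 1 / (1 + np.exp(-(x[1] - thr) / 3.0))

# seed with one clearly audible and one clearly inaudible tone
X = [np.array([np.log2(1000), 60.0]), np.array([np.log2(1000), -10.0])]
y = [respond(x) for x in X]
for _ in range(30):
    gpc = GaussianProcessClassifier(kernel=1.0 * RBF([1.0, 10.0]),
                                    optimizer=None).fit(np.array(X), np.array(y))
    p = gpc.predict_proba(cand)[:, 1]
    nxt = cand[np.argmin(np.abs(p - 0.5))]             # most uncertain next tone
    X.append(nxt)
    y.append(respond(nxt))
print("presented", len(X), "tones; last query:", np.round(nxt, 1))
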

Bayesian Active Model Selection with an Application to Automated Audiometry.
Gardner, J.; Malkomes, G.; Garnett, R.; Weinberger, K. Q.; Barbour, D. L.; and Cunningham, J. P.
Advances in Neural Information Processing Systems, 28: 2386–2394. 2015.
@article{gardner_bayesian_2015,
  title = {Bayesian {Active} {Model} {Selection} with an {Application} to {Automated} {Audiometry}},
  volume = {28},
  url = {https://proceedings.neurips.cc/paper/2015/hash/d9731321ef4e063ebbee79298fa36f56-Abstract.html},
  language = {en},
  urldate = {2020-11-11},
  journal = {Advances in Neural Information Processing Systems},
  author = {Gardner, Jacob and Malkomes, Gustavo and Garnett, Roman and Weinberger, Kilian Q. and {Barbour, D. L.} and Cunningham, John P.},
  year = {2015},
  pages = {2386--2394},
}

Hughson-Westlake Audiogram.
Bonauria.
August 2015.
@misc{bonauria_hughson-westlake_2015,
  title = {Hughson-{Westlake} {Audiogram}},
  url = {https://www.youtube.com/watch?v=84s6v0JiFpk},
  abstract = {The standard modified Hughson-Westlake up-down threshold audiogram estimator in progress as conducted by an audiologist. Circles represent heard tones and X's represent unheard tones. This method focuses all its estimation power on a few frequencies and has no information about other frequencies. Note that the frequency offsets of the tones are for display only--all tones in a column are actually delivered at the same frequency.},
  urldate = {2020-12-23},
  author = {{Bonauria}},
  month = aug,
  year = {2015},
}
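
For readers unfamiliar with the procedure shown in the video: the commonly taught form of the modified Hughson-Westlake rule descends 10 dB after each response and ascends 5 dB after each miss, taking threshold as the lowest level with responses on at least two ascending presentations (clinically, two of three). The sketch below implements that simplified rule against a simulated listener; nothing in it comes from the video itself.

# Hypothetical sketch: the "down 10 dB / up 5 dB" Hughson-Westlake rule at one frequency.
import numpy as np

rng = np.random.default_rng(8)
TRUE_THRESHOLD = 22.5                         # dB HL, simulated listener

def hears(level):                             # logistic psychometric function
    return rng.random() < 1 / (1 + np.exp(-(level - TRUE_THRESHOLD) / 2.0))

def hughson_westlake(start=40, max_trials=50):
    """Down 10 dB after a response, up 5 dB after a miss; threshold is the first
    level collecting two responses on ascending presentations."""
    level, ascending, hits = start, False, {}
    for _ in range(max_trials):
        heard = hears(level)
        if ascending and heard:
            hits[level] = hits.get(level, 0) + 1
            if hits[level] >= 2:
                return level
        if heard:
            level, ascending = level - 10, False
        else:
            level, ascending = level + 5, True
    return None                               # no stable threshold within budget

print("estimated threshold:", hughson_westlake(), "dB HL")
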

Bonauria Audiogram Demo.
Bonauria.
June 2015.
@misc{bonauria_bonauria_2015,
  title = {Bonauria {Audiogram} {Demo}},
  url = {https://www.youtube.com/watch?v=b9olQbFzNBs},
  abstract = {First generation machine learning audiogram estimator as it estimates. The solid line represents the standard Hughson-Westlake threshold audiogram estimated by an audiologist. Circles represent heard tones and X's represent unheard tones. This method distributes estimation effort across all intensities and frequencies.},
  urldate = {2020-12-23},
  author = {{Bonauria}},
  month = jun,
  year = {2015},
}

2014 (1)

Spike-Timing Computation Properties of a Feed-Forward Neural Network Model.
Sinha, D. B.; Ledbetter, N. M.; and Barbour, D. L.
Frontiers in Computational Neuroscience, 8: 5. 2014.
@article{sinha_spike-timing_2014,
  title = {Spike-{Timing} {Computation} {Properties} of a {Feed}-{Forward} {Neural} {Network} {Model}},
  volume = {8},
  issn = {1662-5188},
  doi = {10.3389/fncom.2014.00005},
  abstract = {Brain function is characterized by dynamical interactions among networks of neurons. These interactions are mediated by network topology at many scales ranging from microcircuits to brain areas. Understanding how networks operate can be aided by understanding how the transformation of inputs depends upon network connectivity patterns, e.g., serial and parallel pathways. To tractably determine how single synapses or groups of synapses in such pathways shape these transformations, we modeled feed-forward networks of 7-22 neurons in which synaptic strength changed according to a spike-timing dependent plasticity (STDP) rule. We investigated how activity varied when dynamics were perturbed by an activity-dependent electrical stimulation protocol (spike-triggered stimulation; STS) in networks of different topologies and background input correlations. STS can successfully reorganize functional brain networks in vivo, but with a variability in effectiveness that may derive partially from the underlying network topology. In a simulated network with a single disynaptic pathway driven by uncorrelated background activity, structured spike-timing relationships between polysynaptically connected neurons were not observed. When background activity was correlated or parallel disynaptic pathways were added, however, robust polysynaptic spike timing relationships were observed, and application of STS yielded predictable changes in synaptic strengths and spike-timing relationships. These observations suggest that precise input-related or topologically induced temporal relationships in network activity are necessary for polysynaptic signal propagation. Such constraints for polysynaptic computation suggest potential roles for higher-order topological structure in network organization, such as maintaining polysynaptic correlation in the face of relatively weak synapses.},
  language = {eng},
  journal = {Frontiers in Computational Neuroscience},
  author = {Sinha, Drew B. and Ledbetter, Noah M. and {Barbour, D. L.}},
  year = {2014},
  pmid = {24478688},
  pmcid = {PMC3904091},
  keywords = {biological neural networks, computational modeling, microcircuits, network connectivity, spike-timing dependent plasticity (STDP)},
  pages = {5},
}
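
The STDP rule at the heart of such models is usually the additive, pair-based exponential form: potentiation when the presynaptic spike precedes the postsynaptic spike, depression otherwise. A sketch with invented amplitudes and time constants follows; the paper's parameters are not given here.

# Hypothetical sketch: additive exponential pair-based STDP weight updates.
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012      # potentiation/depression amplitudes (assumed)
TAU_PLUS = TAU_MINUS = 20.0        # time constants in ms (assumed)

def stdp_dw(dt):
    """Weight change for spike-time lag dt = t_post - t_pre (ms)."""
    if dt > 0:
        return A_PLUS * np.exp(-dt / TAU_PLUS)    # pre before post: potentiation
    return -A_MINUS * np.exp(dt / TAU_MINUS)      # post before pre: depression

for dt in (-40, -10, 10, 40):
    print(f"dt = {dt:+} ms -> dw = {stdp_dw(dt):+.4f}")
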

2013 (2)

Modeling of Topology-Dependent Neural Network Plasticity Induced by Activity-Dependent Electrical Stimulation.
Ni, R.; Ledbetter, N. M.; and Barbour, D. L.
International IEEE/EMBS Conference on Neural Engineering: [Proceedings], 831–834. 2013.
\n\n\n\n \n \n \"ModelingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{ni_modeling_2013,\n\ttitle = {Modeling of {Topology}-{Dependent} {Neural} {Network} {Plasticity} {Induced} by {Activity}-{Dependent} {Electrical} {Stimulation}},\n\tissn = {1948-3546},\n\turl = {https://ieeexplore.ieee.org/document/6696063},\n\tdoi = {10.1109/NER.2013.6696063},\n\tabstract = {Activity-dependent electrical stimulation can induce cerebrocortical reorganization in vivo by activating brain areas using stimulation derived from the statistics of neural or muscular activity. Due to the nature of synaptic plasticity, network topology is likely to influence the effectiveness of this type of neuromodulation, yet its effect under different network topologies is unclear. To address this issue, we simulated small-scale three-neuron networks to explore topology-dependent network plasticity. The induced neuroplastic changes were evaluated by network coherence and unit-pair mutual information measures. We demonstrated that involvement of monosynaptic feedforward and reciprocal connections is more likely to lead to persistent decreased network coherence and increased network mutual information independent of the global network topology. On the contrary, disynaptic feedforward connections exhibit heterogeneous coherence and unit-pair mutual information sensitivity that depends strongly upon the network context.},\n\tlanguage = {eng},\n\tjournal = {International IEEE/EMBS Conference on Neural Engineering: [Proceedings]. International IEEE EMBS Conference on Neural Engineering},\n\tauthor = {Ni, Ruiye and Ledbetter, Noah M. and {Barbour, D. L.}},\n\tyear = {2013},\n\tpmid = {25123094},\n\tpmcid = {PMC4128279},\n\tpages = {831--834},\n}\n\n
Towards a Speech BCI Using ECoG. Leuthardt, E. C.; and Cunningham, J. In Barbour, D. L.; Guger, C.; Allison, B. Z.; and Edlinger, G., editors, Brain-Computer Interface Research: A State-of-the-Art Summary, SpringerBriefs in Electrical and Computer Engineering, pages 93–110. Springer, Berlin, Heidelberg, 2013.

@incollection{leuthardt_towards_2013,
	address = {Berlin, Heidelberg},
	series = {{SpringerBriefs} in {Electrical} and {Computer} {Engineering}},
	title = {Towards a {Speech} {BCI} {Using} {ECoG}},
	isbn = {978-3-642-36083-1},
	url = {https://doi.org/10.1007/978-3-642-36083-1_10},
	abstract = {Electrocorticography (ECoG) has emerged as a new signal platform for brain–computer interface (BCI) systems. Classically, the cortical physiology that has been commonly investigated and utilized for device control in humans has been brain signals from sensorimotor cortex. More recently, speech networks have emerged as a new neurophysiological substrate that could be used to both further improve on or complement existing motor-based control paradigms as well as expand BCI techniques to new clinical populations. We review the emerging literature associated with the scientific, clinical, and technical findings that provide the motivation and capability for speech-based BCIs.},
	language = {en},
	urldate = {2020-11-11},
	booktitle = {Brain-{Computer} {Interface} {Research}: {A} {State}-of-the-{Art} {Summary}},
	publisher = {Springer},
	author = {Leuthardt, Eric C. and Cunningham, John},
	editor = {Barbour, D. L. and Guger, Christoph and Allison, Brendan Z. and Edlinger, Günter},
	year = {2013},
	doi = {10.1007/978-3-642-36083-1_10},
	keywords = {Cortex, Electrocorticography, Gamma rhythms, Human, Phoneme, Speech},
	pages = {93--110},
}

2012 (1)

Temporal Evolution of Gamma Activity in Human Cortex During an Overt and Covert Word Repetition Task. Leuthardt, E. C.; Pei, X.; Breshears, J.; Gaona, C.; Sharma, M.; Freudenberg, Z.; Barbour, D. L.; and Schalk, G. Frontiers in Human Neuroscience, 6: 99. 2012.

@article{leuthardt_temporal_2012,
	title = {Temporal {Evolution} of {Gamma} {Activity} in {Human} {Cortex} {During} an {Overt} and {Covert} {Word} {Repetition} {Task}},
	volume = {6},
	issn = {1662-5161},
	doi = {10.3389/fnhum.2012.00099},
	abstract = {Several scientists have proposed different models for cortical processing of speech. Classically, the regions participating in language were thought to be modular with a linear sequence of activations. More recently, modern theoretical models have posited a more hierarchical and distributed interaction of anatomic areas for the various stages of speech processing. Traditional imaging techniques can only define the location or time of cortical activation, which impedes the further evaluation and refinement of these models. In this study, we take advantage of recordings from the surface of the brain [electrocorticography (ECoG)], which can accurately detect the location and timing of cortical activations, to study the time course of ECoG high gamma (HG) modulations during an overt and covert word repetition task for different cortical areas. For overt word production, our results show substantial perisylvian cortical activations early in the perceptual phase of the task that were maintained through word articulation. However, this broad activation is attenuated during the expressive phase of covert word repetition. Across the different repetition tasks, the utilization of the different cortical sites within the perisylvian region varied in the degree of activation dependent on which stimulus was provided (auditory or visual cue) and whether the word was to be spoken or imagined. Taken together, the data support current models of speech that have been previously described with functional imaging. Moreover, this study demonstrates that the broad perisylvian speech network activates early and maintains suprathreshold activation throughout the word repetition task that appears to be modulated by the demands of different conditions.},
	language = {eng},
	journal = {Frontiers in Human Neuroscience},
	author = {Leuthardt, Eric C. and Pei, Xiao-Mei and Breshears, Jonathan and Gaona, Charles and Sharma, Mohit and Freudenberg, Zac and Barbour, D. L. and Schalk, Gerwin},
	year = {2012},
	pmid = {22563311},
	pmcid = {PMC3342676},
	keywords = {cortex, electrocorticography, gamma rhythms, human, speech},
	pages = {99},
}

2011 (7)

Nonuniform High-Gamma (60-500 Hz) Power Changes Dissociate Cognitive Task and Anatomy in Human Cortex. Gaona, C. M.; Sharma, M.; Freudenburg, Z. V.; Breshears, J. D.; Bundy, D. T.; Roland, J.; Barbour, D. L.; Schalk, G.; and Leuthardt, E. C. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 31(6): 2091–2100. February 2011.

@article{gaona_nonuniform_2011,
	title = {Nonuniform {High}-{Gamma} (60-500 {Hz}) {Power} {Changes} {Dissociate} {Cognitive} {Task} and {Anatomy} in {Human} {Cortex}},
	volume = {31},
	issn = {1529-2401},
	doi = {10.1523/JNEUROSCI.4722-10.2011},
	abstract = {High-gamma-band ({\textgreater}60 Hz) power changes in cortical electrophysiology are a reliable indicator of focal, event-related cortical activity. Despite discoveries of oscillatory subthreshold and synchronous suprathreshold activity at the cellular level, there is an increasingly popular view that high-gamma-band amplitude changes recorded from cellular ensembles are the result of asynchronous firing activity that yields wideband and uniform power increases. Others have demonstrated independence of power changes in the low- and high-gamma bands, but to date, no studies have shown evidence of any such independence above 60 Hz. Based on nonuniformities in time-frequency analyses of electrocorticographic (ECoG) signals, we hypothesized that induced high-gamma-band (60-500 Hz) power changes are more heterogeneous than currently understood. Using single-word repetition tasks in six human subjects, we showed that functional responsiveness of different ECoG high-gamma sub-bands can discriminate cognitive task (e.g., hearing, reading, speaking) and cortical locations. Power changes in these sub-bands of the high-gamma range are consistently present within single trials and have statistically different time courses within the trial structure. Moreover, when consolidated across all subjects within three task-relevant anatomic regions (sensorimotor, Broca's area, and superior temporal gyrus), these behavior- and location-dependent power changes evidenced nonuniform trends across the population. Together, the independence and nonuniformity of power changes across a broad range of frequencies suggest that a new approach to evaluating high-gamma-band cortical activity is necessary. These findings show that in addition to time and location, frequency is another fundamental dimension of high-gamma dynamics.},
	language = {eng},
	number = {6},
	journal = {The Journal of Neuroscience: The Official Journal of the Society for Neuroscience},
	author = {Gaona, Charles M. and Sharma, Mohit and Freudenburg, Zachary V. and Breshears, Jonathan D. and Bundy, David T. and Roland, Jarod and Barbour, D. L. and Schalk, Gerwin and Leuthardt, Eric C.},
	month = feb,
	year = {2011},
	pmid = {21307246},
	pmcid = {PMC3737077},
	keywords = {Acoustic Stimulation, Adolescent, Adult, Analysis of Variance, Brain Mapping, Brain Waves, Cerebral Cortex, Cognition Disorders, Electroencephalography, Epilepsy, Evoked Potentials, Female, Humans, Male, Middle Aged, Neuropsychological Tests, Nonlinear Dynamics, Photic Stimulation, Reaction Time, Spectrum Analysis, Time Factors, Vocabulary},
	pages = {2091--2100},
}

Rate-Level Responses in Awake Marmoset Auditory Cortex. Watkins, P. V.; and Barbour, D. L. Hearing Research, 275(1-2): 30–42. May 2011.

@article{watkins_rate-level_2011,
	title = {Rate-{Level} {Responses} in {Awake} {Marmoset} {Auditory} {Cortex}},
	volume = {275},
	issn = {1878-5891},
	doi = {10.1016/j.heares.2010.11.011},
	abstract = {Investigations of auditory neuronal firing rate as a function of sound level have revealed a wide variety of rate-level function shapes, including neurons with nonmonotonic or level-tuned functions. These neurons have an unclear role in auditory processing but have been found to be quite common. In the present study of awake marmoset primary auditory cortex (A1) neurons, 56\% (305 out of 544), when stimulated with tones at the highest sound level tested, exhibited a decrement in driven rate of at least 50\% from the maximum. These nonmonotonic neurons demonstrated significantly lower response thresholds than monotonic neurons, although both populations exhibited thresholds skewed toward lower values. Nonmonotonic neurons significantly outnumbered monotonic neurons in the frequency range 6-13 kHz, which is the frequency range containing most marmoset vocalization energy. Spontaneous rate was inversely correlated with threshold in both populations, and spontaneous rates of nonmonotonic neurons had significantly lower values than spontaneous rates of monotonic neurons, although distributions of maximum driven rates were not significantly different. Finally, monotonicity was found to be organized within electrode penetrations like characteristic frequency but with less structure. These findings are consistent with the hypothesis that nonmonotonic neurons play a unique role in representing sound level, particularly at the lowest sound levels and for complex vocalizations.},
	language = {eng},
	number = {1-2},
	journal = {Hearing Research},
	author = {Watkins, Paul V. and Barbour, D. L.},
	month = may,
	year = {2011},
	pmid = {21145961},
	pmcid = {PMC3095711},
	keywords = {Acoustic Stimulation, Action Potentials, Animals, Auditory Cortex, Auditory Threshold, Callithrix, Electrodes, Electrophysiology, Evoked Potentials, Auditory, Models, Biological, Neurons, Normal Distribution, Wakefulness},
	pages = {30--42},
}

Level-Tuned Neurons in Primary Auditory Cortex Adapt Differently to Loud Versus Soft Sounds. Watkins, P. V.; and Barbour, D. L. Cerebral Cortex (New York, N.Y.: 1991), 21(1): 178–190. January 2011.

@article{watkins_level-tuned_2011,
	title = {Level-{Tuned} {Neurons} in {Primary} {Auditory} {Cortex} {Adapt} {Differently} to {Loud} {Versus} {Soft} {Sounds}},
	volume = {21},
	issn = {1460-2199},
	doi = {10.1093/cercor/bhq079},
	abstract = {The responses of auditory neurons tuned to stimulus intensity (i.e., nonmonotonic rate-level responders) have typically been analyzed with stimulus paradigms that eliminate neuronal adaptation to recent stimulus statistics. This procedure is usually accomplished by presenting individual sounds with long silent periods between them. Studies using such paradigms have led to hypotheses that nonmonotonic neurons may play a role in amplitude spectrum coding or level-invariant representations of complex spectral shapes. We have previously proposed an alternate hypothesis that level-tuned neurons may represent specialized coders of low sound levels because they preserve their sensitivity to low levels even when average sound level is relatively high. Here we demonstrate that nonmonotonic neurons in awake marmoset primary auditory cortex accomplish this feat by adapting their upper dynamic range to encode sounds with high mean level, leaving the lower dynamic range available for encoding relatively rare low-level sounds. This adaptive behavior manifests in nonmonotonic relative to monotonic neurons as 1) a lesser amount of overall shifting of rate-level response thresholds and (2) a nonmonotonic gain adjustment with increasing mean stimulus level.},
	language = {eng},
	number = {1},
	journal = {Cerebral Cortex (New York, N.Y.: 1991)},
	author = {Watkins, Paul V. and Barbour, D. L.},
	month = jan,
	year = {2011},
	pmid = {20457692},
	pmcid = {PMC3000570},
	keywords = {Action Potentials, Adaptation, Physiological, Animals, Auditory Cortex, Auditory Threshold, Callithrix, Loudness Perception, Neurons, Species Specificity},
	pages = {178--190},
}

Evaluation of Techniques Used to Estimate Cortical Feature Maps. Katta, N.; Chen, T. L.; Watkins, P. V.; and Barbour, D. L. Journal of Neuroscience Methods, 202(1): 87–98. October 2011.

@article{katta_evaluation_2011,
	title = {Evaluation of {Techniques} {Used} to {Estimate} {Cortical} {Feature} {Maps}},
	volume = {202},
	issn = {1872-678X},
	doi = {10.1016/j.jneumeth.2011.08.032},
	abstract = {Functional properties of neurons are often distributed nonrandomly within a cortical area and form topographic maps that reveal insights into neuronal organization and interconnection. Some functional maps, such as in visual cortex, are fairly straightforward to discern with a variety of techniques, while other maps, such as in auditory cortex, have resisted easy characterization. In order to determine appropriate protocols for establishing accurate functional maps in auditory cortex, artificial topographic maps were probed under various conditions, and the accuracy of estimates formed from the actual maps was quantified. Under these conditions, low-complexity maps such as sound frequency can be estimated accurately with as few as 25 total samples (e.g., electrode penetrations or imaging pixels) if neural responses are averaged together. More samples are required to achieve the highest estimation accuracy for higher complexity maps, and averaging improves map estimate accuracy even more than increasing sampling density. Undersampling without averaging can result in misleading map estimates, while undersampling with averaging can lead to the false conclusion of no map when one actually exists. Uniform sample spacing only slightly improves map estimation over nonuniform sample spacing typical of serial electrode penetrations. Tessellation plots commonly used to visualize maps estimated using nonuniform sampling are always inferior to linearly interpolated estimates, although differences are slight at higher sampling densities. Within primary auditory cortex, then, multiunit sampling with at least 100 samples would likely result in reasonable feature map estimates for all but the highest complexity maps and the highest variability that might be expected.},
	language = {eng},
	number = {1},
	journal = {Journal of Neuroscience Methods},
	author = {Katta, Nalin and Chen, Thomas L. and Watkins, Paul V. and Barbour, D. L.},
	month = oct,
	year = {2011},
	pmid = {21889537},
	pmcid = {PMC3192494},
	keywords = {Algorithms, Auditory Cortex, Brain Mapping, Models, Neurological, Neurons},
	pages = {87--98},
}

Intensity-Invariant Coding in the Auditory System. Barbour, D. L. Neuroscience and Biobehavioral Reviews, 35(10): 2064–2072. November 2011.

@article{barbour_d_l_intensity-invariant_2011,
	title = {Intensity-{Invariant} {Coding} in the {Auditory} {System}},
	volume = {35},
	issn = {1873-7528},
	doi = {10.1016/j.neubiorev.2011.04.009},
	abstract = {The auditory system faithfully represents sufficient details from sound sources such that downstream cognitive processes are capable of acting upon this information effectively even in the face of signal uncertainty, degradation or interference. This robust sound source representation leads to an invariance in perception vital for animals to interact effectively with their environment. Due to unique nonlinearities in the cochlea, sound representations early in the auditory system exhibit a large amount of variability as a function of stimulus intensity. In other words, changes in stimulus intensity, such as for sound sources at differing distances, create a unique challenge for the auditory system to encode sounds invariantly across the intensity dimension. This challenge and some strategies available to sensory systems to eliminate intensity as an encoding variable are discussed, with a special emphasis upon sound encoding.},
	language = {eng},
	number = {10},
	journal = {Neuroscience and Biobehavioral Reviews},
	author = {Barbour, D. L.},
	month = nov,
	year = {2011},
	pmid = {21540053},
	pmcid = {PMC3165138},
	keywords = {Adaptation, Physiological, Animals, Auditory Pathways, Auditory Perception, Brain Mapping, Humans, Neurons},
	pages = {2064--2072},
}

Photoacoustic Microscopy of Microvascular Responses to Cortical Electrical Stimulation. Tsytsarev, V.; Hu, S.; Yao, J.; Maslov, K.; Barbour, D. L.; and Wang, L. V. Journal of Biomedical Optics, 16(7): 076002. July 2011.

@article{tsytsarev_photoacoustic_2011,
	title = {Photoacoustic {Microscopy} of {Microvascular} {Responses} to {Cortical} {Electrical} {Stimulation}},
	volume = {16},
	issn = {1560-2281},
	url = {https://www.spiedigitallibrary.org/journals/journal-of-biomedical-optics/volume-16/issue-07/076002/Photoacoustic-microscopy-of-microvascular-responses-to-cortical-electrical-stimulation/10.1117/1.3594785.full?SSO=1},
	doi = {10.1117/1.3594785},
	abstract = {Advances in the functional imaging of cortical hemodynamics have greatly facilitated the understanding of neurovascular coupling. In this study, label-free optical-resolution photoacoustic microscopy (OR-PAM) was used to monitor microvascular responses to direct electrical stimulations of the mouse somatosensory cortex through a cranial opening. The responses appeared in two forms: vasoconstriction and vasodilatation. The transition between these two forms of response was observed in single vessels by varying the stimulation intensity. Marked correlation was found between the current-dependent responses of two daughter vessels bifurcating from the same parent vessel. Statistical analysis of twenty-seven vessels from three different animals further characterized the spatial-temporal features and the current dependence of the microvascular response. Our results demonstrate that OR-PAM is a valuable tool to study neurovascular coupling at the microscopic level.},
	language = {eng},
	number = {7},
	journal = {Journal of Biomedical Optics},
	author = {Tsytsarev, Vassiliy and Hu, Song and Yao, Junjie and Maslov, Konstantin and Barbour, D. L. and Wang, Lihong V.},
	month = jul,
	year = {2011},
	pmid = {21806263},
	pmcid = {PMC3144972},
	keywords = {Animals, Cerebrovascular Circulation, Electric Stimulation, Lasers, Solid-State, Mice, Microscopy, Acoustic, Microvessels, Optical Phenomena, Somatosensory Cortex, Vasoconstriction, Vasodilation},
	pages = {076002},
}

Decoding vowels and consonants in spoken and imagined words using electrocorticographic signals in humans. Pei, X.; Barbour, D. L.; Leuthardt, E. C.; and Schalk, G. Journal of Neural Engineering, 8(4): 046028. August 2011.

@article{pei_decoding_2011,
	title = {Decoding vowels and consonants in spoken and imagined words using electrocorticographic signals in humans},
	volume = {8},
	issn = {1741-2552},
	doi = {10.1088/1741-2560/8/4/046028},
	abstract = {Several stories in the popular media have speculated that it may be possible to infer from the brain which word a person is speaking or even thinking. While recent studies have demonstrated that brain signals can give detailed information about actual and imagined actions, such as different types of limb movements or spoken words, concrete experimental evidence for the possibility to 'read the mind', i.e. to interpret internally-generated speech, has been scarce. In this study, we found that it is possible to use signals recorded from the surface of the brain (electrocorticography) to discriminate the vowels and consonants embedded in spoken and in imagined words, and we defined the cortical areas that held the most information about discrimination of vowels and consonants. The results shed light on the distinct mechanisms associated with production of vowels and consonants, and could provide the basis for brain-based communication using imagined speech.},
	language = {eng},
	number = {4},
	journal = {Journal of Neural Engineering},
	author = {Pei, Xiaomei and Barbour, D. L. and Leuthardt, Eric C. and Schalk, Gerwin},
	month = aug,
	year = {2011},
	pmid = {21750369},
	pmcid = {PMC3772685},
	keywords = {Adolescent, Adult, Brain, Brain Mapping, Cerebral Cortex, Communication Aids for Disabled, Data Interpretation, Statistical, Discrimination, Psychological, Electrodes, Implanted, Electroencephalography, Epilepsy, Female, Functional Laterality, Humans, Male, Middle Aged, Movement, Speech Perception, User-Computer Interface},
	pages = {046028},
}

2010 (2)

Designing In Vivo Concentration Gradients with Discrete Controlled Release: A Computational Model. Walker, E. Y.; and Barbour, D. L. Journal of Neural Engineering, 7(4): 046013. August 2010.

@article{walker_designing_2010,
	title = {Designing {In} {Vivo} {Concentration} {Gradients} with {Discrete} {Controlled} {Release}: {A} {Computational} {Model}},
	volume = {7},
	issn = {1741-2552},
	shorttitle = {Designing in vivo concentration gradients with discrete controlled release},
	doi = {10.1088/1741-2560/7/4/046013},
	abstract = {One promising neurorehabilitation therapy involves presenting neurotrophins directly into the brain to induce growth of new neural connections. The precise control of neurotrophin concentration gradients deep within neural tissue that would be necessary for such a therapy is not currently possible, however. Here we evaluate the theoretical potential of a novel method of drug delivery, discrete controlled release (DCR), to control effective neurotrophin concentration gradients in an isotropic region of neocortex. We do so by constructing computational models of neurotrophin concentration profiles resulting from discrete release locations into the cortex and then optimizing their design for uniform concentration gradients. The resulting model indicates that by rationally selecting initial neurotrophin concentrations for drug-releasing electrode coatings in a square 16-electrode array, nearly uniform concentration gradients (i.e. planar concentration profiles) from one edge of the electrode array to the other should be obtainable. DCR therefore represents a promising new method of precisely directing neuronal growth in vivo over a wider spatial profile than would be possible with single release points.},
	language = {eng},
	number = {4},
	journal = {Journal of Neural Engineering},
	author = {Walker, Edgar Y. and Barbour, D. L.},
	month = aug,
	year = {2010},
	pmid = {20644248},
	pmcid = {PMC2922513},
	keywords = {Animals, Brain, Brain Chemistry, Computer Simulation, Delayed-Action Preparations, Drug Compounding, Humans, Models, Chemical, Nerve Growth Factors},
	pages = {046013},
}

Theoretical Limitations on Functional Imaging Resolution in Auditory Cortex. Chen, T. L.; Watkins, P. V.; and Barbour, D. L. Brain Research, 1319: 175–189. March 2010.

@article{chen_theoretical_2010,
	title = {Theoretical {Limitations} on {Functional} {Imaging} {Resolution} in {Auditory} {Cortex}},
	volume = {1319},
	issn = {1872-6240},
	doi = {10.1016/j.brainres.2010.01.012},
	abstract = {Functional imaging can reveal detailed organizational structure in cerebral cortical areas, but neuronal response features and local neural interconnectivity can influence the resulting images, possibly limiting the inferences that can be drawn about neural function. Discerning the fundamental principles of organizational structure in the auditory cortex of multiple species has been somewhat challenging historically both with functional imaging and with electrophysiology. A possible limitation affecting any methodology using pooled neuronal measures may be the relative distribution of response selectivity throughout the population of auditory cortex neurons. One neuronal response type inherited from the cochlea, for example, exhibits a receptive field that increases in size (i.e., decreases in selectivity) at higher stimulus intensities. Even though these neurons appear to represent a minority of auditory cortex neurons, they are likely to contribute disproportionately to the activity detected in functional images, especially if intense sounds are used for stimulation. To evaluate the potential influence of neuronal subpopulations upon functional images of primary auditory cortex, a model array representing cortical neurons was probed with virtual imaging experiments under various assumptions about the local circuit organization. As expected, different neuronal subpopulations were activated preferentially under different stimulus conditions. In fact, stimulus protocols that can preferentially excite selective neurons, resulting in a relatively sparse activation map, have the potential to improve the effective resolution of functional auditory cortical images. These experimental results also make predictions about auditory cortex organization that can be tested with refined functional imaging experiments.},
	language = {eng},
	journal = {Brain Research},
	author = {Chen, Thomas L. and Watkins, Paul V. and Barbour, D. L.},
	month = mar,
	year = {2010},
	pmid = {20079343},
	pmcid = {PMC2832293},
	keywords = {Acoustic Stimulation, Action Potentials, Algorithms, Animals, Auditory Cortex, Auditory Perception, Brain Mapping, Cats, Cochlea, Diagnostic Imaging, Humans, Models, Neurological, Neurons},
	pages = {175--189},
}

2009 (2)

A Computational Framework for Topographies of Cortical Areas. Watkins, P. V.; Chen, T. L.; and Barbour, D. L. Biological Cybernetics, 100(3): 231–248. March 2009.

@article{watkins_computational_2009,
	title = {A {Computational} {Framework} for {Topographies} of {Cortical} {Areas}},
	volume = {100},
	issn = {1432-0770},
	doi = {10.1007/s00422-009-0294-9},
	abstract = {Self-organizing feature maps (SOFMs) represent a dimensionality-reduction algorithm that has been used to replicate feature topographies observed experimentally in primary visual cortex (V1). We used the SOFM algorithm to model possible topographies of generic sensory cortical areas containing up to five arbitrary physiological features. This study explored the conditions under which these multi-feature SOFMs contained two features that were mapped monotonically and aligned orthogonally with one another (i.e., "globally orthogonal"), as well as the conditions under which the map of one feature aligned with the longest anatomical dimension of the modeled cortical area (i.e., "dominant"). In a single SOFM with more than two features, we never observed more than one dominant feature, nor did we observe two globally orthogonal features in the same map in which a dominant feature occurred. Whether dominance or global orthogonality occurred depended upon how heavily weighted the features were relative to one another. The most heavily weighted features are likely to correspond to those physical stimulus properties transduced directly by the sensory epithelium of a particular sensory modality. Our results imply, therefore, that in the primary cortical area of sensory modalities with a two-dimensional sensory epithelium, these two features are likely to be organized globally orthogonally to one another, and neither feature is likely to be dominant. In the primary cortical area of sensory modalities with a one-dimensional sensory epithelium, however, this feature is likely to be dominant, and no two features are likely to be organized globally orthogonally to one another. Because the auditory system transduces a single stimulus feature (i.e., frequency) along the entire length of the cochlea, these findings may have particular relevance for topographic maps of primary auditory cortex.},
	language = {eng},
	number = {3},
	journal = {Biological Cybernetics},
	author = {Watkins, Paul V. and Chen, Thomas L. and Barbour, D. L.},
	month = mar,
	year = {2009},
	pmid = {19221784},
	keywords = {Algorithms, Visual Cortex},
	pages = {231--248},
}

Structures and Circuits: Auditory Cortex. Barbour, D. L. In Encyclopedia of Neuroscience, pages 701–707. Academic Press, Oxford, 1st edition, May 2009.

@incollection{barbour_d_l_structures_2009,
	address = {Oxford},
	edition = {1st},
	title = {Structures and {Circuits}: {Auditory} {Cortex}},
	isbn = {978-0-08-044617-2},
	url = {https://www.elsevier.com/books/encyclopedia-of-neuroscience/squire/978-0-08-044617-2},
	booktitle = {Encyclopedia of {Neuroscience}},
	publisher = {Academic Press},
	author = {Barbour, D. L.},
	month = may,
	year = {2009},
	pages = {701--707},
}

2008 (2)

Excitatory Local Connections of Superficial Neurons in Rat Auditory Cortex. Barbour, D. L.; and Callaway, E. M. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 28(44): 11174–11185. October 2008.

@article{barbour_d_l_excitatory_2008,
	title = {Excitatory {Local} {Connections} of {Superficial} {Neurons} in {Rat} {Auditory} {Cortex}},
	volume = {28},
	issn = {1529-2401},
	doi = {10.1523/JNEUROSCI.2093-08.2008},
	abstract = {The mammalian cerebral cortex consists of multiple areas specialized for processing information for many different sensory modalities. Although the basic structure is similar for each cortical area, specialized neural connections likely mediate unique information processing requirements. Relative to primary visual (V1) and somatosensory (S1) cortices, little is known about the intrinsic connectivity of primary auditory cortex (A1). To better understand the flow of information from the thalamus to and through rat A1, we made use of a rapid, high-throughput screening method exploiting laser-induced uncaging of glutamate to construct excitatory input maps of individual neurons. We found that excitatory inputs to layer 2/3 pyramidal neurons were similar to those in V1 and S1; these cells received strong excitation primarily from layers 2-4. Both anatomical and physiological observations, however, indicate that inputs and outputs of layer 4 excitatory neurons in A1 contrast with those in V1 and S1. Layer 2/3 pyramids in A1 have substantial axonal arbors in layer 4, and photostimulation demonstrates that these pyramids can connect to layer 4 excitatory neurons. Furthermore, most or all of these layer 4 excitatory neurons project out of the local cortical circuit. Unlike S1 and V1, where feedback to layer 4 is mediated exclusively by indirect local circuits involving layer 2/3 projections to deep layers and deep feedback to layer 4, layer 4 of A1 integrates thalamic and strong layer 4 recurrent excitatory input with relatively direct feedback from layer 2/3 and provides direct cortical output.},
	language = {eng},
	number = {44},
	journal = {The Journal of Neuroscience: The Official Journal of the Society for Neuroscience},
	author = {Barbour, D. L. and Callaway, Edward M.},
	month = oct,
	year = {2008},
	pmid = {18971460},
	pmcid = {PMC2610470},
	keywords = {Animals, Auditory Cortex, Excitatory Postsynaptic Potentials, Neurons, Photic Stimulation, Pyramidal Cells, Rats, Rats, Long-Evans},
	pages = {11174--11185},
}

Specialized Neuronal Adaptation for Preserving Input Sensitivity.
Watkins, P. V.; and Barbour, D. L.
Nature Neuroscience, 11(11): 1259–1261. November 2008.
@article{watkins_specialized_2008,
  title = {Specialized {Neuronal} {Adaptation} for {Preserving} {Input} {Sensitivity}},
  volume = {11},
  issn = {1546-1726},
  doi = {10.1038/nn.2201},
  abstract = {Some neurons in auditory cortex respond to recent stimulus history by adapting their response functions to track stimulus statistics directly, as might be expected. In contrast, some neurons respond to loud sounds by adjusting their response functions away from high intensities and consequently remain sensitive to softer sounds. In marmoset monkey auditory cortex, the latter type of adaptation appears to exist only in neurons tuned to stimulus intensity.},
  language = {eng},
  number = {11},
  journal = {Nature Neuroscience},
  author = {Watkins, Paul V. and Barbour, D. L.},
  month = nov,
  year = {2008},
  pmid = {18820690},
  keywords = {Acoustic Stimulation, Action Potentials, Adaptation, Physiological, Animals, Auditory Cortex, Auditory Threshold, Callithrix, Patch-Clamp Techniques, Probability, Psychoacoustics, Sensory Receptor Cells, Wakefulness},
  pages = {1259--1261},
}
2005 (1)
AM and FM Coherence Sensitivity in the Auditory Cortex as a Potential Neural Mechanism for Sound Segregation.
Barbour, D. L.; and Wang, X.
In Auditory Signal Processing, 274–281. Springer, 2005.
@incollection{barbour_d_l_am_2005,
  title = {{AM} and {FM} {Coherence} {Sensitivity} in the {Auditory} {Cortex} as a {Potential} {Neural} {Mechanism} for {Sound} {Segregation}},
  booktitle = {Auditory Signal Processing},
  publisher = {Springer},
  author = {Barbour, D. L. and Wang, Xiaoqin},
  year = {2005},
  pages = {274--281},
}
2003 (2)
Auditory Cortical Responses Elicited in Awake Primates by Random Spectrum Stimuli.
Barbour, D. L.; and Wang, X.
The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 23(18): 7194–7206. August 2003.
@article{barbour_d_l_auditory_2003,
  title = {Auditory {Cortical} {Responses} {Elicited} in {Awake} {Primates} by {Random} {Spectrum} {Stimuli}},
  volume = {23},
  issn = {1529-2401},
  abstract = {Contrary to findings in subcortical auditory nuclei, auditory cortex neurons have traditionally been described as spiking only at the onsets of simple sounds such as pure tones or bandpass noise and to acoustic transients in complex sounds. Furthermore, primary auditory cortex (A1) has traditionally been described as mostly tone responsive and the lateral belt area of primates as mostly noise responsive. The present study was designed to unify the study of these two cortical areas using random spectrum stimuli (RSS), a new class of parametric, wideband, stationary acoustic stimuli. We found that 60\% of all neurons encountered in A1 and the lateral belt of awake marmoset monkeys (Callithrix jacchus) showed significant changes in firing rates in response to RSS. Of these, 89\% showed sustained spiking in response to one or more individual RSS, a substantially greater percentage than would be expected from traditional studies, indicating that RSS are well suited for studying these two cortical areas. When firing rates elicited by RSS were used to construct linear estimates of frequency tuning for these sustained responders, the shape of the estimate function remained relatively constant throughout the stimulus interval and across the stimulus properties of mean sound level, spectral density, and spectral contrast. This finding indicates that frequency tuning computed from RSS reflects a robust estimate of the actual tuning of a neuron. Use of this estimate to predict rate responses to other RSS, however, yielded poor results, implying that auditory cortex neurons integrate information across frequency nonlinearly. No systematic difference in prediction quality between A1 and the lateral belt could be detected.},
  language = {eng},
  number = {18},
  journal = {The Journal of Neuroscience: The Official Journal of the Society for Neuroscience},
  author = {Barbour, D. L. and Wang, Xiaoqin},
  month = aug,
  year = {2003},
  pmid = {12904480},
  pmcid = {PMC1945239},
  keywords = {Acoustic Stimulation, Action Potentials, Animals, Auditory Cortex, Auditory Pathways, Callithrix, Linear Models, Models, Neurological, Neurons, Sound Spectrography, Synaptic Transmission, Wakefulness},
  pages = {7194--7206},
}
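The linear tuning estimate described in the abstract above amounts to regressing firing rate onto per-bin stimulus levels. The Python sketch below is a minimal illustration under assumed definitions, not the paper's analysis code: RSS are modeled as random level vectors across frequency bins, a hypothetical neuron responds through a static nonlinearity, and a least-squares fit recovers a linear tuning profile that predicts held-out responses only partially, mirroring the nonlinear frequency integration the abstract reports.

# Minimal sketch (assumption: not the paper's analysis code) of estimating
# a linear frequency-tuning profile from responses to random spectrum
# stimuli (RSS), then testing how well it predicts held-out responses.
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_stim = 32, 400          # frequency bins per stimulus, stimulus count

# Each RSS is modeled as a vector of random levels (dB re: mean) per bin.
stimuli = rng.normal(0.0, 5.0, size=(n_stim, n_bins))

# Hypothetical tuning: excitatory peak with a broader inhibitory surround.
bins = np.arange(n_bins)
tuning = (np.exp(-0.5 * ((bins - 16) / 2.0) ** 2)
          - 0.4 * np.exp(-0.5 * ((bins - 16) / 6.0) ** 2))

# Simulated rates: linear drive passed through a static nonlinearity.
drive = stimuli @ tuning
rates = np.maximum(drive, 0.0) ** 1.5 + rng.normal(0.0, 0.5, n_stim)

# Least-squares linear tuning estimate fit on a training set of stimuli...
train, test = slice(0, 300), slice(300, None)
w, *_ = np.linalg.lstsq(stimuli[train], rates[train], rcond=None)

# ...predicts held-out responses only partially, because the simulated
# neuron (like those reported) integrates across frequency nonlinearly.
pred = stimuli[test] @ w
print(f"held-out prediction r = {np.corrcoef(pred, rates[test])[0, 1]:.2f}")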
Contrast Tuning in Auditory Cortex.
Barbour, D. L.; and Wang, X.
Science (New York, N.Y.), 299(5609): 1073–1075. February 2003.
@article{barbour_d_l_contrast_2003,
  title = {Contrast {Tuning} in {Auditory} {Cortex}},
  volume = {299},
  issn = {1095-9203},
  doi = {10.1126/science.1080425},
  abstract = {The acoustic features useful for converting auditory information into perceived objects are poorly understood. Although auditory cortex neurons have been described as being narrowly tuned and preferentially responsive to narrowband signals, naturally occurring sounds are generally wideband with unique spectral energy profiles. Through the use of parametric wideband acoustic stimuli, we found that such neurons in awake marmoset monkeys respond vigorously to wideband sounds having complex spectral shapes, preferring stimuli of either high or low spectral contrast. Low contrast-preferring neurons cannot be studied thoroughly with narrowband stimuli and have not been previously described. These findings indicate that spectral contrast reflects an important stimulus decomposition in auditory cortex and may contribute to the recognition of acoustic objects.},
  language = {eng},
  number = {5609},
  journal = {Science (New York, N.Y.)},
  author = {Barbour, D. L. and Wang, Xiaoqin},
  month = feb,
  year = {2003},
  pmid = {12586943},
  pmcid = {PMC1868436},
  keywords = {Acoustic Stimulation, Action Potentials, Animals, Auditory Cortex, Auditory Perception, Callithrix, Neurons},
  pages = {1073--1075},
}
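For a concrete sense of "spectral contrast," one simple summary (an assumed definition for illustration, not one taken from the paper) is the spread of per-bin levels, in dB, across a wideband spectrum: flat spectra have low contrast, while strongly peaked or notched spectra have high contrast.

# Illustrative sketch (assumed definition, not taken from the paper):
# spectral contrast as the spread of per-bin levels (dB) of a wideband
# stimulus. High- and low-contrast stimuli differ only in this spread.
import numpy as np

rng = np.random.default_rng(1)
n_bins = 32

def spectral_contrast(levels_db):
    """Standard deviation of levels (dB) across frequency bins."""
    return float(np.std(levels_db))

low_contrast = rng.normal(60.0, 2.0, n_bins)     # nearly flat spectrum
high_contrast = rng.normal(60.0, 12.0, n_bins)   # strongly peaked/notched

print(spectral_contrast(low_contrast))           # roughly 2 dB
print(spectral_contrast(high_contrast))          # roughly 12 dB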
2002 (1)
Temporal Coherence Sensitivity in Auditory Cortex.
Barbour, D. L.; and Wang, X.
Journal of Neurophysiology, 88(5): 2684–2699. November 2002.
@article{barbour_d_l_temporal_2002,
  title = {Temporal {Coherence} {Sensitivity} in {Auditory} {Cortex}},
  volume = {88},
  issn = {0022-3077},
  doi = {10.1152/jn.00253.2002},
  abstract = {Natural sounds often contain energy over a broad spectral range and consequently overlap in frequency when they occur simultaneously; however, such sounds under normal circumstances can be distinguished perceptually (e.g., the cocktail party effect). Sound components arising from different sources have distinct (i.e., incoherent) modulations, and incoherence appears to be one important cue used by the auditory system to segregate sounds into separately perceived acoustic objects. Here we show that, in the primary auditory cortex of awake marmoset monkeys, many neurons responsive to amplitude- or frequency-modulated tones at a particular carrier frequency [the characteristic frequency (CF)] also demonstrate sensitivity to the relative modulation phase between two otherwise identically modulated tones: one at CF and one at a different carrier frequency. Changes in relative modulation phase reflect alterations in temporal coherence between the two tones, and the most common neuronal response was found to be a maximum of suppression for the coherent condition. Coherence sensitivity was generally found in a narrow frequency range in the inhibitory portions of the frequency response areas (FRA), indicating that only some off-CF neuronal inputs into these cortical neurons interact with on-CF inputs on the same time scales. Over the population of neurons studied, carrier frequencies showing coherence sensitivity were found to coincide with the carrier frequencies of inhibition, implying that inhibitory inputs create the effect. The lack of strong coherence-induced facilitation also supports this interpretation. Coherence sensitivity was found to be greatest for modulation frequencies of 16-128 Hz, which is higher than the phase-locking capability of most cortical neurons, implying that subcortical neurons could play a role in the phenomenon. Collectively, these results reveal that auditory cortical neurons receive some off-CF inputs temporally matched and some temporally unmatched to the on-CF input(s) and respond in a fashion that could be utilized by the auditory system to segregate natural sounds containing similar spectral components (such as vocalizations from multiple conspecifics) based on stimulus coherence.},
  language = {eng},
  number = {5},
  journal = {Journal of Neurophysiology},
  author = {Barbour, D. L. and Wang, Xiaoqin},
  month = nov,
  year = {2002},
  pmid = {12424304},
  keywords = {Acoustic Stimulation, Algorithms, Animals, Auditory Cortex, Callithrix, Electrodes, Implanted, Electrophysiology, Evoked Potentials, Auditory, Microelectrodes, Nerve Net, Neurons, Pitch Discrimination, Time Perception},
  pages = {2684--2699},
}
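The stimulus manipulation at the heart of this study, two identically modulated tones whose relative modulation phase sets their temporal coherence, can be sketched in a few lines of Python. All parameter values below are illustrative assumptions, not the study's actual values.

# Minimal sketch (parameter values are illustrative assumptions) of the
# two-tone stimulus: identical sinusoidal amplitude modulation on both
# carriers, with relative modulation phase setting temporal coherence.
import numpy as np

fs = 44100                                # sample rate (Hz)
t = np.arange(int(0.5 * fs)) / fs         # 500 ms of samples
f_cf, f_off = 4000.0, 5000.0              # on-CF and off-CF carriers (assumed)
f_mod, depth = 32.0, 0.8                  # modulation frequency and depth

def am_tone(f_carrier, mod_phase):
    """Amplitude-modulated tone whose envelope has the given phase."""
    envelope = 1.0 + depth * np.sin(2 * np.pi * f_mod * t + mod_phase)
    return envelope * np.sin(2 * np.pi * f_carrier * t)

coherent = am_tone(f_cf, 0.0) + am_tone(f_off, 0.0)      # in-phase envelopes
incoherent = am_tone(f_cf, 0.0) + am_tone(f_off, np.pi)  # anti-phase envelopes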