generated by bibbase.org
This publication list can be embedded in an existing web page by copying and pasting any of the following snippets.
JavaScript (easiest):
<script src="https://bibbase.org/show?bib=https://lkfink.github.io/assets/beatLab_website_publications.bib&jsonp=1&theme=mila&showSearch=true&noIndex=true&urlLabel=paper&noTitleLinks=true&titleLinks=false&titleLink=false&jsonp=1"></script>
PHP:
<?php
$contents = file_get_contents("https://bibbase.org/show?bib=https://lkfink.github.io/assets/beatLab_website_publications.bib&jsonp=1&theme=mila&showSearch=true&noIndex=true&urlLabel=paper&noTitleLinks=true&titleLinks=false&titleLink=false");
print_r($contents);
?>
iFrame (not recommended):
<iframe src="https://bibbase.org/show?bib=https://lkfink.github.io/assets/beatLab_website_publications.bib&jsonp=1&theme=mila&showSearch=true&noIndex=true&urlLabel=paper&noTitleLinks=true&titleLinks=false&titleLink=false"></iframe>
For more details, see the documentation.
2023 (6)
Eye movement patterns when playing from memory: Examining consistency across repeated performances and the relationship between eyes and audio.
Fink, L. K.
In Proceedings of the International Conference on Music Perception and Cognition (ICMPC17-APSCOM7), Tokyo, August 24-28, 2023.
\n\n\n\n \n \n \"Eye link\n  \n \n\n \n\n \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{fink2023mobile,\n  title={Eye movement patterns when playing from memory: Examining consistency across repeated performances and the relationship between eyes and audio},\n  author={Fink, Lauren K},\n  booktitle={Proceedings of the International Conference on Music Perception and Cognition, ICMPC17-APSCOM7, Tokyo, August24-28,2023},\n  pages={},\n  year={2023},\n  organization={},\n  url_Link={https://doi.org/10.31234/osf.io/tecdv},\n  abstract={While the eyes serve an obvious function in the context of music reading, their role during memorized music performance (i.e., when there is no score) is currently unknown. Given previous work showing relationships between eye movements and body movements and eye movements and memory retrieval, here I ask 1) whether eye movements become a stable aspect of the memorized music (motor) performance, and 2) whether the structure of the music is reflected in eye movement patterns. In this case study, three pianists chose two pieces to play from memory. They came into the lab on four different days, separated by at least 12hrs, and played their two pieces three times each. To answer 1), I compared dynamic time warping cost within vs. between pieces, and found significantly lower warping costs within piece, for both horizontal and vertical eye movement time series, providing a first proof-of-concept that eye movement patterns are conserved across repeated memorized music performances. To answer 2), I used the Matrix Profiles of the eye movement time series to automatically detect motifs (repeated patterns). By then analyzing participants’ recorded audio at moments of detected ocular motifs, repeated sections of music could be identified (confirmed auditorily and with inspection of the extracted pitch and amplitude envelopes of the indexed audio snippets). Overall, the current methods provide a promising approach for future studies of music performance, enabling exploration of the relationship between body movements, eye movements, and musical processing.} \n}\n\n
\n
\n\n\n
\n While the eyes serve an obvious function in the context of music reading, their role during memorized music performance (i.e., when there is no score) is currently unknown. Given previous work showing relationships between eye movements and body movements and eye movements and memory retrieval, here I ask 1) whether eye movements become a stable aspect of the memorized music (motor) performance, and 2) whether the structure of the music is reflected in eye movement patterns. In this case study, three pianists chose two pieces to play from memory. They came into the lab on four different days, separated by at least 12hrs, and played their two pieces three times each. To answer 1), I compared dynamic time warping cost within vs. between pieces, and found significantly lower warping costs within piece, for both horizontal and vertical eye movement time series, providing a first proof-of-concept that eye movement patterns are conserved across repeated memorized music performances. To answer 2), I used the Matrix Profiles of the eye movement time series to automatically detect motifs (repeated patterns). By then analyzing participants’ recorded audio at moments of detected ocular motifs, repeated sections of music could be identified (confirmed auditorily and with inspection of the extracted pitch and amplitude envelopes of the indexed audio snippets). Overall, the current methods provide a promising approach for future studies of music performance, enabling exploration of the relationship between body movements, eye movements, and musical processing.\n
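As an illustration of the within- vs. between-piece comparison described in this abstract, the following minimal Python sketch computes a dynamic time warping (DTW) cost for toy gaze traces. The arrays and the plain quadratic DTW implementation are illustrative assumptions, not the paper's code or data.

import numpy as np

def dtw_cost(x, y):
    """Classic dynamic time warping: cumulative alignment cost of two 1-D series."""
    nx, ny = len(x), len(y)
    D = np.full((nx + 1, ny + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, nx + 1):
        for j in range(1, ny + 1):
            d = abs(x[i - 1] - y[j - 1])
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[nx, ny]

# Hypothetical horizontal gaze traces: two performances of the same piece and
# one performance of a different piece (random stand-ins for real recordings).
rng = np.random.default_rng(0)
piece_a_run1 = np.cumsum(rng.standard_normal(200))
piece_a_run2 = piece_a_run1 + 0.5 * rng.standard_normal(200)
piece_b_run1 = np.cumsum(rng.standard_normal(200))

print("within-piece cost :", dtw_cost(piece_a_run1, piece_a_run2))
print("between-piece cost:", dtw_cost(piece_a_run1, piece_b_run1))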
Deep learning models for webcam eye-tracking in online experiments.
Saxena, S.; Fink, L. K.; and Lange, E. B.
Behavior Research Methods. 2023.
\n\n\n\n \n \n \"Deep link\n  \n \n\n \n\n \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{saxena2023deep,\n  title={Deep learning models for webcam eye-tracking in online experiments},\n  author={Saxena, Shreshth and Fink, Lauren K and Lange, Elke B},\n  journal={Behavior Research Methods},\n  year={2023},\n  url_Link={https://doi.org/10.3758/s13428-023-02190-6},\n  abstract={Eye-tracking is prevalent in scientific and commercial applications. Recent computer vision and deep learning methods enable eye-tracking with off-the-shelf webcams and reduce dependence on expensive, restrictive hardware. However, such deep learning methods have not yet been applied and evaluated for remote, online psychological experiments. In this study, we tackle important challenges faced in remote eye-tracking setups and systematically evaluate appearance-based deep learning methods of gaze tracking and blink detection. From their own home and laptop, 65 participants performed a battery of eye-tracking tasks requiring different eye movements that characterized gaze and blink prediction accuracy over a comprehensive list of measures. We improve the state-of-the-art for eye-tracking during online experiments with an accuracy of 2.4° and precision of 0.47° which reduces the gap between lab-based and online eye-tracking performance. We release the experiment template, recorded data, and analysis code with the motivation to escalate affordable, accessible, and scalable eye-tracking.}\n}\n\n
\n
\n\n\n
\n Eye-tracking is prevalent in scientific and commercial applications. Recent computer vision and deep learning methods enable eye-tracking with off-the-shelf webcams and reduce dependence on expensive, restrictive hardware. However, such deep learning methods have not yet been applied and evaluated for remote, online psychological experiments. In this study, we tackle important challenges faced in remote eye-tracking setups and systematically evaluate appearance-based deep learning methods of gaze tracking and blink detection. From their own home and laptop, 65 participants performed a battery of eye-tracking tasks requiring different eye movements that characterized gaze and blink prediction accuracy over a comprehensive list of measures. We improve the state-of-the-art for eye-tracking during online experiments with an accuracy of 2.4° and precision of 0.47° which reduces the gap between lab-based and online eye-tracking performance. We release the experiment template, recorded data, and analysis code with the motivation to escalate affordable, accessible, and scalable eye-tracking.\n
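The accuracy and precision figures reported above can be illustrated with a short sketch that converts pixel-level gaze error into degrees of visual angle. This is a hedged example: the viewing distance, pixel pitch, and gaze samples are invented values, and "precision as the standard deviation of the offsets" is only one common definition, not necessarily the paper's.

import numpy as np

def pixels_to_degrees(px, viewing_distance_cm=60.0, pixel_pitch_cm=0.0265):
    """Convert an on-screen distance in pixels to degrees of visual angle."""
    return np.degrees(np.arctan2(px * pixel_pitch_cm, viewing_distance_cm))

# Hypothetical predicted gaze positions while fixating one known target (pixels).
pred = np.array([[640, 360], [650, 370], [630, 355], [645, 362]], dtype=float)
target = np.array([[640, 360]] * 4, dtype=float)

err_deg = pixels_to_degrees(np.linalg.norm(pred - target, axis=1))
accuracy = err_deg.mean()          # mean angular offset from the target
precision = err_deg.std(ddof=1)    # spread of those offsets
print(f"accuracy: {accuracy:.2f} deg, precision: {precision:.2f} deg")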
Aesthetic and physiological effects of naturalistic multimodal music listening.
Czepiel, A.; Fink, L. K.; Seibert, C.; Scharinger, M.; and Kotz, S. A.
Cognition, 239: 105537. 2023.
\n\n\n\n \n \n \"Aesthetic link\n  \n \n\n \n\n \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{czepiel2023aesthetic,\n  title={Aesthetic and physiological effects of naturalistic multimodal music listening},\n  author={Czepiel, Anna and Fink, Lauren K and Seibert, Christoph and Scharinger, Mathias and Kotz, Sonja A},\n  journal={Cognition},\n  volume={239},\n  pages={105537},\n  year={2023},\n  url_Link={https://doi.org/10.1016/j.cognition.2023.105537},\n  publisher={Elsevier},\n  abstract={Compared to audio only (AO) conditions, audiovisual (AV) information can enhance the aesthetic experience of a music performance. However, such beneficial multimodal effects have yet to be studied in naturalistic music performance settings. Further, peripheral physiological correlates of aesthetic experiences are not well-understood. Here, participants were invited to a concert hall for piano performances of Bach, Messiaen, and Beethoven, which were presented in two conditions: AV and AO. They rated their aesthetic experience (AE) after each piece (Experiment 1 and 2), while peripheral signals (cardio-respiratory measures, skin conductance, and facial muscle activity) were continuously measured (Experiment 2). AE was significantly higher in the AV condition in both experiments. Physiological arousal indices – skin conductance and LF/HF ratio, which represent activation of the sympathetic nervous system – were higher in the AO condition, suggesting increased arousal, perhaps because sound onsets in the AO condition were less predictable. However, breathing was faster and facial muscle activity was higher in the AV condition, suggesting that observing a performer’s movements likely enhances motor mimicry in these more voluntary peripheral measures. Further, zygomaticus (‘smiling’) muscle activity was a significant predictor of AE. Thus, we suggest physiological measures are related to AE, but at different levels: the more involuntary measures (i.e., skin conductance and heart rhythms) may reflect more sensory aspects, while the more voluntary measures (i.e., muscular control of breathing and facial responses) may reflect the liking aspect of an AE. In summary, we replicate and extend previous findings that AV information enhances AE in a more naturalistic music performance setting. We further show that a combination of self-report and peripheral measures benefit a meaningful assessment of AE in naturalistic music performance settings.}\n}\n\n
\n
\n\n\n
\n Compared to audio only (AO) conditions, audiovisual (AV) information can enhance the aesthetic experience of a music performance. However, such beneficial multimodal effects have yet to be studied in naturalistic music performance settings. Further, peripheral physiological correlates of aesthetic experiences are not well-understood. Here, participants were invited to a concert hall for piano performances of Bach, Messiaen, and Beethoven, which were presented in two conditions: AV and AO. They rated their aesthetic experience (AE) after each piece (Experiment 1 and 2), while peripheral signals (cardio-respiratory measures, skin conductance, and facial muscle activity) were continuously measured (Experiment 2). AE was significantly higher in the AV condition in both experiments. Physiological arousal indices – skin conductance and LF/HF ratio, which represent activation of the sympathetic nervous system – were higher in the AO condition, suggesting increased arousal, perhaps because sound onsets in the AO condition were less predictable. However, breathing was faster and facial muscle activity was higher in the AV condition, suggesting that observing a performer’s movements likely enhances motor mimicry in these more voluntary peripheral measures. Further, zygomaticus (‘smiling’) muscle activity was a significant predictor of AE. Thus, we suggest physiological measures are related to AE, but at different levels: the more involuntary measures (i.e., skin conductance and heart rhythms) may reflect more sensory aspects, while the more voluntary measures (i.e., muscular control of breathing and facial responses) may reflect the liking aspect of an AE. In summary, we replicate and extend previous findings that AV information enhances AE in a more naturalistic music performance setting. We further show that a combination of self-report and peripheral measures benefit a meaningful assessment of AE in naturalistic music performance settings.\n
From pre-processing to advanced dynamic modeling of pupil data.
Fink, L. K.; Simola, J.; Tavano, A.; Lange, E.; Wallot, S.; and Laeng, B.
Behavior Research Methods. 2023.
\n\n\n\n \n \n \"From link\n  \n \n\n \n \n doi\n  \n \n\n \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 4 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{fink2023pre,\n  title={From pre-processing to advanced dynamic modeling of pupil data},\n  author={Fink, Lauren K. and Simola, Jaana and Tavano, Alessandro and Lange, Elke and Wallot, Sebastian and Laeng, Bruno},\n  journal={Behavior Research Methods},\n  pages={},\n  year={2023},\n  doi={https://doi.org/10.3758/s13428-023-02098-1},\n  url_Link = {https://doi.org/10.3758/s13428-023-02098-1},\n  abstract={The pupil of the eye provides a rich source of information for cognitive scientists, as it can index a variety of bodily states (e.g., arousal, fatigue) and cognitive processes (e.g., attention, decision-making). As pupillometry becomes a more accessible and popular methodology, researchers have proposed a variety of techniques for analyzing pupil data. Here, we focus on time series-based, signal-to-signal approaches that enable one to relate dynamic changes in pupil size over time with dynamic changes in a stimulus time series, continuous behavioral outcome measures, or other participants' pupil traces. We first introduce pupillometry, its neural underpinnings, and the relation between pupil measurements and other oculomotor behaviors (e.g., blinks, saccades), to stress the importance of understanding what is being measured and what can be inferred from changes in pupillary activity. Next, we discuss possible pre-processing steps, and the contexts in which they may be necessary. Finally, we turn to signal-to-signal analytic techniques, including regression-based approaches, dynamic time-warping, phase clustering, detrended fluctuation analysis, and recurrence quantification analysis. Assumptions of these techniques, and examples of the scientific questions each can address, are outlined, with references to key papers and software packages. Additionally, we provide a detailed code tutorial that steps through the key examples and figures in this paper. Ultimately, we contend that the insights gained from pupillometry are constrained by the analysis techniques used, and that signal-to-signal approaches offer a means to generate novel scientific insights by taking into account understudied spectro-temporal relationships between the pupil signal and other signals of interest.}\n}\n\n
\n
\n\n\n
\n The pupil of the eye provides a rich source of information for cognitive scientists, as it can index a variety of bodily states (e.g., arousal, fatigue) and cognitive processes (e.g., attention, decision-making). As pupillometry becomes a more accessible and popular methodology, researchers have proposed a variety of techniques for analyzing pupil data. Here, we focus on time series-based, signal-to-signal approaches that enable one to relate dynamic changes in pupil size over time with dynamic changes in a stimulus time series, continuous behavioral outcome measures, or other participants' pupil traces. We first introduce pupillometry, its neural underpinnings, and the relation between pupil measurements and other oculomotor behaviors (e.g., blinks, saccades), to stress the importance of understanding what is being measured and what can be inferred from changes in pupillary activity. Next, we discuss possible pre-processing steps, and the contexts in which they may be necessary. Finally, we turn to signal-to-signal analytic techniques, including regression-based approaches, dynamic time-warping, phase clustering, detrended fluctuation analysis, and recurrence quantification analysis. Assumptions of these techniques, and examples of the scientific questions each can address, are outlined, with references to key papers and software packages. Additionally, we provide a detailed code tutorial that steps through the key examples and figures in this paper. Ultimately, we contend that the insights gained from pupillometry are constrained by the analysis techniques used, and that signal-to-signal approaches offer a means to generate novel scientific insights by taking into account understudied spectro-temporal relationships between the pupil signal and other signals of interest.\n
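Of the signal-to-signal techniques listed in this abstract, detrended fluctuation analysis (DFA) is simple enough to sketch in a few lines. The following illustrative NumPy version (the window sizes and the simulated pupil trace are assumptions, and the paper's accompanying tutorial code should be preferred for real analyses) estimates the scaling exponent alpha from the slope of log fluctuation against log window size.

import numpy as np

def dfa_alpha(signal, window_sizes):
    """Detrended fluctuation analysis: return the scaling exponent alpha."""
    profile = np.cumsum(signal - np.mean(signal))    # integrated, mean-centred series
    fluctuations = []
    for n in window_sizes:
        rms = []
        for w in range(len(profile) // n):
            segment = profile[w * n:(w + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, segment, 1), t)   # local linear trend
            rms.append(np.sqrt(np.mean((segment - trend) ** 2)))
        fluctuations.append(np.mean(rms))
    # alpha is the slope of log F(n) against log n
    alpha, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
    return alpha

rng = np.random.default_rng(1)
pupil = np.cumsum(rng.standard_normal(4000))          # toy stand-in for a pupil trace
print("estimated alpha:", round(dfa_alpha(pupil, [16, 32, 64, 128, 256]), 2))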
Eye-blinking, musical processing, and subjective states – A methods account.
Lange, E.; and Fink, L.
Psychophysiology, 60(e14350). 2023.
\n\n\n\n \n \n \"Eye-blinking, link\n  \n \n\n \n \n doi\n  \n \n\n \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 5 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{lange2023blink, \n  title={Eye-blinking, musical processing, and subjective states – A methods account},\n  author={Lange, Elke and Fink, Lauren},\n  journal={Psychophysiology},\n  pages={},\n  volume={60},\n  number={e14350},\n  year={2023},\n  doi={https://doi.org/10.1111/psyp.14350},\n  url_Link={https://doi.org/10.1111/psyp.14350},\n  abstract={Affective sciences often make use of self-reports to assess subjective states. Seeking a more implicit measure for states and emotions, our study explored spontaneous eye blinking during music listening. However, blinking is understudied in the context of research on subjective states. Therefore, a second goal was to explore different ways of analyzing blink activity recorded from infra-red eye trackers, using two additional data sets from earlier studies differing in blinking and viewing instructions. We first replicate the effect of increased blink rates during music listening in comparison with silence and show that the effect is not related to changes in self-reported valence, arousal, or to specific musical features. Interestingly, but in contrast, felt absorption reduced participants' blinking. The instruction to inhibit blinking did not change results. From a methodological perspective, we make suggestions about how to define blinks from data loss periods recorded by eye trackers and report a data-driven outlier rejection procedure and its efficiency for subject-mean analyses, as well as trial-based analyses. We ran a variety of mixed effects models that differed in how trials without blinking were treated. The main results largely converged across accounts. The broad consistency of results across different experiments, outlier treatments, and statistical models demonstrates the reliability of the reported effects. As recordings of data loss periods come for free when interested in eye movements or pupillometry, we encourage researchers to pay attention to blink activity and contribute to the further understanding of the relation between blinking, subjective states, and cognitive processing.},\n}\n\n
\n
\n\n\n
\n Affective sciences often make use of self-reports to assess subjective states. Seeking a more implicit measure for states and emotions, our study explored spontaneous eye blinking during music listening. However, blinking is understudied in the context of research on subjective states. Therefore, a second goal was to explore different ways of analyzing blink activity recorded from infra-red eye trackers, using two additional data sets from earlier studies differing in blinking and viewing instructions. We first replicate the effect of increased blink rates during music listening in comparison with silence and show that the effect is not related to changes in self-reported valence, arousal, or to specific musical features. Interestingly, but in contrast, felt absorption reduced participants' blinking. The instruction to inhibit blinking did not change results. From a methodological perspective, we make suggestions about how to define blinks from data loss periods recorded by eye trackers and report a data-driven outlier rejection procedure and its efficiency for subject-mean analyses, as well as trial-based analyses. We ran a variety of mixed effects models that differed in how trials without blinking were treated. The main results largely converged across accounts. The broad consistency of results across different experiments, outlier treatments, and statistical models demonstrates the reliability of the reported effects. As recordings of data loss periods come for free when interested in eye movements or pupillometry, we encourage researchers to pay attention to blink activity and contribute to the further understanding of the relation between blinking, subjective states, and cognitive processing.\n
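The idea of defining blinks from eye-tracker data-loss periods can be illustrated with a minimal sketch. The duration thresholds, sampling rate, and the convention that lost samples are coded as NaN are assumptions for illustration, not the paper's exact procedure.

import numpy as np

def detect_blinks(pupil, fs, min_dur=0.05, max_dur=0.5):
    """Return (start, end) sample indices of data-loss runs with blink-like durations."""
    missing = np.isnan(pupil)                          # data loss coded as NaN here
    edges = np.diff(missing.astype(int))
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if missing[0]:
        starts = np.r_[0, starts]
    if missing[-1]:
        ends = np.r_[ends, len(pupil)]
    return [(s, e) for s, e in zip(starts, ends)
            if min_dur <= (e - s) / fs <= max_dur]

fs = 500.0                                             # assumed sampling rate (Hz)
pupil = np.full(2000, 3.5)
pupil[400:480] = np.nan                                # one 160-ms data-loss period
print(detect_blinks(pupil, fs))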
Multidimensional signals and analytic flexibility: Estimating degrees of freedom in human speech analyses.
Coretta, S.; Casillas, J. V.; Roessig, S.; Franke, M.; Ahn, B.; Al-Hoorie, A. H.; Al-Tamimi, J.; Alotaibi, N. E.; AlShakhori, M. K.; Altmiller, R. M.; and others.
Advances in Methods and Practices in Psychological Sciences. 2023.
\n\n\n\n \n \n \"Multidimensional link\n  \n \n\n \n\n \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{coretta2023multidimensional,\n  title={Multidimensional signals and analytic flexibility: Estimating degrees of freedom in human speech analyses},\n  author={Coretta, Stefano and Casillas, Joseph V and Roessig, Simon and Franke, Michael and Ahn, Byron and Al-Hoorie, Ali H and Al-Tamimi, Jalal and Alotaibi, Najd E and AlShakhori, Mohammed K and Altmiller, Ruth M and others},\n  journal={Advances in Methods and Practices in Psychological Sciences},\n  year={2023},\n  publisher={SAGE},\n  url_Link={https://psyarxiv.com/q8t2k/},\n  abstract={Recent empirical studies have highlighted the large degree of analytic flexibility in data analysis which can lead to substantially different conclusions based on the same data set. Thus, researchers have expressed their concerns that these researcher degrees of freedom might facilitate bias and can lead to claims that do not stand the test of time. Even greater flexibility is to be expected in fields in which the primary data lend themselves to a variety of possible operationalizations. The multidimensional, temporally extended nature of speech constitutes an ideal testing ground for assessing the variability in analytic approaches, which derives not only from aspects of statistical modeling, but also from decisions regarding the quantification of the measured behavior. In the present study, we gave the same speech production data set to 46 teams of researchers and asked them to answer the same research question, resulting insubstantial variability in reported effect sizes and their interpretation. Using Bayesian meta-analytic tools, we further find little to no evidence that the observed variability can be explained by analysts’ prior beliefs, expertise or the perceived quality of their analyses. In light of this idiosyncratic variability, we recommend that researchers more transparently share details of their analysis, strengthen the link between theoretical construct and quantitative system and calibrate their (un)certainty in their conclusions.}\n}\n\n\n\n
\n
\n\n\n
Recent empirical studies have highlighted the large degree of analytic flexibility in data analysis which can lead to substantially different conclusions based on the same data set. Thus, researchers have expressed their concerns that these researcher degrees of freedom might facilitate bias and can lead to claims that do not stand the test of time. Even greater flexibility is to be expected in fields in which the primary data lend themselves to a variety of possible operationalizations. The multidimensional, temporally extended nature of speech constitutes an ideal testing ground for assessing the variability in analytic approaches, which derives not only from aspects of statistical modeling, but also from decisions regarding the quantification of the measured behavior. In the present study, we gave the same speech production data set to 46 teams of researchers and asked them to answer the same research question, resulting in substantial variability in reported effect sizes and their interpretation. Using Bayesian meta-analytic tools, we further find little to no evidence that the observed variability can be explained by analysts' prior beliefs, expertise or the perceived quality of their analyses. In light of this idiosyncratic variability, we recommend that researchers more transparently share details of their analysis, strengthen the link between theoretical construct and quantitative system and calibrate their (un)certainty in their conclusions.
2022 (4)
Towards efficient calibration for webcam eye-tracking in online experiments.
Saxena, S.; Lange, E.; and Fink, L.
In 2022 Symposium on Eye Tracking Research and Applications, pages 1–7, 2022.
\n\n\n\n \n \n \"Towards link\n  \n \n\n \n\n \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 2 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{saxena2022towards,\n  title={Towards efficient calibration for webcam eye-tracking in online experiments},\n  author={Saxena, Shreshth and Lange, Elke and Fink, Lauren},\n  booktitle={2022 Symposium on Eye Tracking Research and Applications},\n  pages={1--7},\n  year={2022},\n  url_Link={https://doi.org/10.1145/3517031.3529645},\n  abstract={Calibration is performed in eye-tracking studies to map raw model outputs to gaze-points on the screen and improve accuracy of gaze predictions. Calibration parameters, such as user-screen distance, camera intrinsic properties, and position of the screen with respect to the camera can be easily calculated in controlled offline setups, however, their estimation is non-trivial in unrestricted, online, experimental settings. Here, we propose the application of deep learning models for eye-tracking in online experiments, providing suitable strategies to estimate calibration parameters and perform personal gaze calibration. Focusing on fixation accuracy, we compare results with respect to calibration frequency, the time point of calibration during data collection (beginning, middle, end), and calibration procedure (fixation-point or smooth pursuit-based). Calibration using fixation and smooth pursuit tasks, pooled over three collection time-points, resulted in the best fixation accuracy. By combining device calibration, gaze calibration, and the best-performing deep-learning model, we achieve an accuracy of 2.580−a considerable improvement over reported accuracies in previous online eye-tracking studies.}\n}\n\n
\n
\n\n\n
Calibration is performed in eye-tracking studies to map raw model outputs to gaze-points on the screen and improve accuracy of gaze predictions. Calibration parameters, such as user-screen distance, camera intrinsic properties, and position of the screen with respect to the camera can be easily calculated in controlled offline setups, however, their estimation is non-trivial in unrestricted, online, experimental settings. Here, we propose the application of deep learning models for eye-tracking in online experiments, providing suitable strategies to estimate calibration parameters and perform personal gaze calibration. Focusing on fixation accuracy, we compare results with respect to calibration frequency, the time point of calibration during data collection (beginning, middle, end), and calibration procedure (fixation-point or smooth pursuit-based). Calibration using fixation and smooth pursuit tasks, pooled over three collection time-points, resulted in the best fixation accuracy. By combining device calibration, gaze calibration, and the best-performing deep-learning model, we achieve an accuracy of 2.58°, a considerable improvement over reported accuracies in previous online eye-tracking studies.
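The core gaze-calibration step, mapping raw model outputs to known on-screen target positions, can be sketched as a least-squares affine fit. This is only an illustrative stand-in under assumed data; the paper's actual calibration pipeline is more involved.

import numpy as np

def fit_affine_calibration(raw_xy, target_xy):
    """Least-squares affine map from raw gaze estimates to screen coordinates."""
    A = np.hstack([raw_xy, np.ones((len(raw_xy), 1))])   # columns: x, y, 1
    coeffs, *_ = np.linalg.lstsq(A, target_xy, rcond=None)
    return coeffs                                         # shape (3, 2)

def apply_calibration(raw_xy, coeffs):
    A = np.hstack([raw_xy, np.ones((len(raw_xy), 1))])
    return A @ coeffs

# Hypothetical raw model outputs recorded while fixating known calibration targets.
raw = np.array([[0.10, 0.12], [0.80, 0.11], [0.12, 0.82], [0.79, 0.85]])
targets = np.array([[100, 100], [1800, 100], [100, 980], [1800, 980]], dtype=float)

coeffs = fit_affine_calibration(raw, targets)
print(apply_calibration(raw, coeffs).round(1))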
The Groove Enhancement Machine (GEM): A multi-person adaptive metronome to manipulate sensorimotor synchronization and subjective enjoyment.
Fink, L. K.; Alexander, P.; and Janata, P.
Frontiers in Human Neuroscience, 16(916551). 2022.
\n\n\n\n \n \n \"The link\n  \n \n\n \n\n \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{fink2022groove,\n  title={The Groove Enhancement Machine (GEM): A multi-person adaptive metronome to manipulate sensorimotor synchronization and subjective enjoyment},\n  author={Fink, Lauren K and Alexander, Prescott and Janata, Petr},\n  journal={Frontiers in Human Neuroscience},\n  volume={16},\n  number={916551},\n  pages={},\n  year={2022},\n  publisher={Frontiers},\n  url_Link={https://doi.org/10.3389/fnhum.2022.916551},\n  abstract={Synchronization of movement enhances cooperation and trust between people. However, the degree to which individuals can synchronize with each other depends on their ability to perceive the timing of others’ actions and produce movements accordingly. Here, we introduce an assistive device—a multi-person adaptive metronome—to facilitate synchronization abilities. The adaptive metronome is implemented on Arduino Uno circuit boards, allowing for negligible temporal latency between tapper input and adaptive sonic output. Across five experiments—two single-tapper, and three group (four tapper) experiments, we analyzed the effects of metronome adaptivity (percent correction based on the immediately preceding tap-metronome asynchrony) and auditory feedback on tapping performance and subjective ratings. In all experiments, tapper synchronization with the metronome was significantly enhanced with 25–50% adaptivity, compared to no adaptation. In group experiments with auditory feedback, synchrony remained enhanced even at 70–100% adaptivity; without feedback, synchrony at these high adaptivity levels returned to near baseline. Subjective ratings of being in the groove, in synchrony with the\nmetronome, in synchrony with others, liking the task, and difficulty all reduced to one latent factor, which we termed enjoyment. This same factor structure replicated across all experiments. In predicting enjoyment, we found an interaction between auditory feedback and metronome adaptivity, with increased enjoyment at optimal levels of adaptivity only with auditory feedback and a severe decrease in enjoyment at higher levels of adaptivity, especially without feedback. Exploratory analyses relating person-level variables to tapping performance showed that musical sophistication and trait sadness contributed to the degree to which an individual differed in tapping\nstability from the group. Nonetheless, individuals and groups benefitted from adaptivity, regardless of their musical sophistication. Further, individuals who tapped less variably than the group (which only occurred  25% of the time) were more likely to feel “in the groove.” Overall, this work replicates previous single person adaptive metronome studies and extends them to group contexts, thereby contributing to our understanding of the temporal, auditory, psychological, and personal factors underlying interpersonal synchrony and subjective enjoyment during sensorimotor interaction. Further, it provides an open-source tool for studying such factors in a controlled way.}\n}\n\n
\n
\n\n\n
\n Synchronization of movement enhances cooperation and trust between people. However, the degree to which individuals can synchronize with each other depends on their ability to perceive the timing of others’ actions and produce movements accordingly. Here, we introduce an assistive device—a multi-person adaptive metronome—to facilitate synchronization abilities. The adaptive metronome is implemented on Arduino Uno circuit boards, allowing for negligible temporal latency between tapper input and adaptive sonic output. Across five experiments—two single-tapper, and three group (four tapper) experiments, we analyzed the effects of metronome adaptivity (percent correction based on the immediately preceding tap-metronome asynchrony) and auditory feedback on tapping performance and subjective ratings. In all experiments, tapper synchronization with the metronome was significantly enhanced with 25–50% adaptivity, compared to no adaptation. In group experiments with auditory feedback, synchrony remained enhanced even at 70–100% adaptivity; without feedback, synchrony at these high adaptivity levels returned to near baseline. Subjective ratings of being in the groove, in synchrony with the metronome, in synchrony with others, liking the task, and difficulty all reduced to one latent factor, which we termed enjoyment. This same factor structure replicated across all experiments. In predicting enjoyment, we found an interaction between auditory feedback and metronome adaptivity, with increased enjoyment at optimal levels of adaptivity only with auditory feedback and a severe decrease in enjoyment at higher levels of adaptivity, especially without feedback. Exploratory analyses relating person-level variables to tapping performance showed that musical sophistication and trait sadness contributed to the degree to which an individual differed in tapping stability from the group. Nonetheless, individuals and groups benefitted from adaptivity, regardless of their musical sophistication. Further, individuals who tapped less variably than the group (which only occurred  25% of the time) were more likely to feel “in the groove.” Overall, this work replicates previous single person adaptive metronome studies and extends them to group contexts, thereby contributing to our understanding of the temporal, auditory, psychological, and personal factors underlying interpersonal synchrony and subjective enjoyment during sensorimotor interaction. Further, it provides an open-source tool for studying such factors in a controlled way.\n
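The adaptivity manipulation described above, percent correction based on the immediately preceding tap-metronome asynchrony, amounts to a one-line update rule. The toy simulation below illustrates that rule only; the tapper model, noise levels, and periods are invented assumptions and this is not the GEM firmware.

import numpy as np

def simulate_adaptive_metronome(n_beats=50, period=0.6, alpha=0.25,
                                tapper_period=0.62, beta=0.3, tap_sd=0.01, seed=2):
    """Toy tapper-metronome loop: the metronome shifts each upcoming click by
    alpha (its 'adaptivity') times the previous tap-metronome asynchrony."""
    rng = np.random.default_rng(seed)
    click, tap = 0.0, 0.0
    asynchronies = []
    for _ in range(n_beats):
        asynchrony = tap - click
        asynchronies.append(asynchrony)
        # metronome schedules its next click, partially correcting toward the tapper
        click = click + period + alpha * asynchrony
        # tapper keeps a slightly mismatched period and corrects toward the click
        predicted = tap + tapper_period
        tap = predicted + beta * (click - predicted) + rng.normal(0.0, tap_sd)
    return np.array(asynchronies)

for alpha in (0.0, 0.25, 0.5, 1.0):
    a = simulate_adaptive_metronome(alpha=alpha)
    print(f"adaptivity {int(alpha * 100):3d}%  mean |asynchrony| = {np.abs(a).mean() * 1000:5.1f} ms")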
Making what we know explicit: Perspectives from graduate writing consultants on supporting graduate writers.
Wittstock, S.; Kirk, G.; de Sola-Smith, K.; Simon, M.; Sperber, L.; McCarty, K.; Wade, J.; and Fink, L.
Praxis, 19(2). 2022.
\n\n\n\n \n \n \"Making link\n  \n \n\n \n\n \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{wittstock2022making,\n  title={Making what we know explicit: Perspectives from graduate writing consultants on supporting graduate writers},\n  author={Wittstock, Stacy and Kirk, Gaby and de Sola-Smith, Karen and Simon, Mitchell and Sperber, Lisa and McCarty, Kristin and Wade, Jasmine and Fink, Lauren},\n  journal={Praxis},\n  volume={19},\n  number={2},\n  year={2022},\n  url_Link={https://www.praxisuwc.com/192-wittstock-et-al},\n  publisher={https://www.praxisuwc.com/192-wittstock-et-al},\n  abstract={While scholarship on supporting graduate writers in the writing center has increased in recent years, guides outlining best practices for writing center consultants rarely speak to graduate students working with other graduate writers. In this article, we present a practical guide for graduate writing consultants. Written collaboratively by graduate writing consultants and a program coordinator, this guide represents our collective knowledge built over several years of conducting writing consultations and professional development in graduate writing support. Inspired by Adler-Kassner and Wardle’s “threshold concepts,” our guide is organized around two fundamental ideas: 1) that positionality plays an important role in interactions between consultants and graduate writers, and 2) that consultants must cultivate disciplinary awareness to be successful graduate writing coaches. In each section, we synthesize our own experiences as graduate writers and consultants with writing studies scholarship, and present concrete strategies for conducting graduate-level writing consultations. Through this guide, we demonstrate the mutual benefit of involving graduate student writing consultants in the production of knowledge in writing centers. }\n}\n\n
\n
\n\n\n
\n While scholarship on supporting graduate writers in the writing center has increased in recent years, guides outlining best practices for writing center consultants rarely speak to graduate students working with other graduate writers. In this article, we present a practical guide for graduate writing consultants. Written collaboratively by graduate writing consultants and a program coordinator, this guide represents our collective knowledge built over several years of conducting writing consultations and professional development in graduate writing support. Inspired by Adler-Kassner and Wardle’s “threshold concepts,” our guide is organized around two fundamental ideas: 1) that positionality plays an important role in interactions between consultants and graduate writers, and 2) that consultants must cultivate disciplinary awareness to be successful graduate writing coaches. In each section, we synthesize our own experiences as graduate writers and consultants with writing studies scholarship, and present concrete strategies for conducting graduate-level writing consultations. Through this guide, we demonstrate the mutual benefit of involving graduate student writing consultants in the production of knowledge in writing centers. \n
Drums help us understand how we process speech and music.
Fink, L.; Durojaye, C.; Roeske, T.; Wald-Fuhrmann, M.; and Larrouy-Maestri, P.
Frontiers for Young Minds, 10(755390). 2022.
\n\n\n\n \n \n \"Drums link\n  \n \n\n \n\n \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{fink2022drums,\n  title={Drums help us understand how we process speech and music},\n  author={Fink, L. and Durojaye, C. and Roeske, T. and Wald-Fuhrmann, M and Larrouy-Maestri, P.},\n  journal={Frontiers for Young Minds},\n  volume={10},\n  number={755390},\n  year={2022},\n  url_Link={https://doi.org/10.3389/frym.2022.755390},\n  abstract={Every day, you hear many sounds in your environment, like speech, music, animal calls, or passing cars. How do you tease apart these unique categories of sounds? We aimed to understand more about how people distinguish speech and music by using an instrument that can both “speak” and play music: the dùndún talking drum. We were interested in whether people could tell if the sound produced by the drum was speech or music. People who were familiar with the dùndún were good at the task, but so were those who had never heard the dùndún, suggesting that there are general characteristics of sound that define speech and music categories. We observed that music is faster, more regular, and more variable in volume than “speech.” This research helps us understand the interesting instrument that is dùndún and provides insights about how humans distinguish two important types of sound: speech and music.}\n}\n\n\n\n\n\n
\n
\n\n\n
\n Every day, you hear many sounds in your environment, like speech, music, animal calls, or passing cars. How do you tease apart these unique categories of sounds? We aimed to understand more about how people distinguish speech and music by using an instrument that can both “speak” and play music: the dùndún talking drum. We were interested in whether people could tell if the sound produced by the drum was speech or music. People who were familiar with the dùndún were good at the task, but so were those who had never heard the dùndún, suggesting that there are general characteristics of sound that define speech and music categories. We observed that music is faster, more regular, and more variable in volume than “speech.” This research helps us understand the interesting instrument that is dùndún and provides insights about how humans distinguish two important types of sound: speech and music.\n
2021 (4)
Synchrony in the periphery: inter-subject correlation of physiological responses during live music concerts.
Czepiel, A.; Fink, L. K.; Fink, L. T.; Wald-Fuhrmann, M.; Tröndle, M.; and Merrill, J.
Scientific Reports, 11(1): 1–16. 2021.
\n\n\n\n \n \n \"Synchrony link\n  \n \n\n \n\n \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 5 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{czepiel2021synchrony,\n  title={Synchrony in the periphery: inter-subject correlation of physiological responses during live music concerts},\n  author={Czepiel, Anna and Fink, Lauren K and Fink, Lea T and Wald-Fuhrmann, Melanie and Tr{\\"o}ndle, Martin and Merrill, Julia},\n  journal={Scientific Reports},\n  volume={11},\n  number={1},\n  pages={1--16},\n  year={2021},\n  publisher={Nature Publishing Group},\n  url_Link={https://doi.org/10.1038/s41598-021-00492-3},\n  abstract={While there is an increasing shift in cognitive science to study perception of naturalistic stimuli, this study extends this goal to naturalistic contexts by assessing physiological synchrony across audience members in a concert setting. Cardiorespiratory, skin conductance, and facial muscle responses were measured from participants attending live string quintet performances of full-length works from Viennese Classical, Contemporary, and Romantic styles. The concert was repeated on three consecutive days with different audiences. Using inter-subject correlation (ISC) to identify reliable responses to music, we found that highly correlated responses depicted typical signatures of physiological arousal. By relating physiological ISC to quantitative values of music features, logistic regressions revealed that high physiological synchrony was consistently predicted by faster tempi (which had higher ratings of arousing emotions and engagement), but only in Classical and Romantic styles (rated as familiar) and not the Contemporary style (rated as unfamiliar). Additionally, highly synchronised responses across all three concert audiences occurred during important structural moments in the music—identified using music theoretical analysis—namely at transitional passages, boundaries, and phrase repetitions. Overall, our results show that specific music features induce similar physiological responses across audience members in a concert context, which are linked to arousal, engagement, and familiarity.}\n}\n\n
\n
\n\n\n
\n While there is an increasing shift in cognitive science to study perception of naturalistic stimuli, this study extends this goal to naturalistic contexts by assessing physiological synchrony across audience members in a concert setting. Cardiorespiratory, skin conductance, and facial muscle responses were measured from participants attending live string quintet performances of full-length works from Viennese Classical, Contemporary, and Romantic styles. The concert was repeated on three consecutive days with different audiences. Using inter-subject correlation (ISC) to identify reliable responses to music, we found that highly correlated responses depicted typical signatures of physiological arousal. By relating physiological ISC to quantitative values of music features, logistic regressions revealed that high physiological synchrony was consistently predicted by faster tempi (which had higher ratings of arousing emotions and engagement), but only in Classical and Romantic styles (rated as familiar) and not the Contemporary style (rated as unfamiliar). Additionally, highly synchronised responses across all three concert audiences occurred during important structural moments in the music—identified using music theoretical analysis—namely at transitional passages, boundaries, and phrase repetitions. Overall, our results show that specific music features induce similar physiological responses across audience members in a concert context, which are linked to arousal, engagement, and familiarity.\n
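Inter-subject correlation (ISC) in its simplest pairwise form can be sketched as the mean Pearson correlation across all subject pairs. The simulated listeners and the shared stimulus-driven component below are assumptions for illustration; the paper's ISC pipeline includes additional windowing and statistics.

import numpy as np

def inter_subject_correlation(signals):
    """Mean pairwise Pearson correlation across subjects (rows = subjects)."""
    n = signals.shape[0]
    r = np.corrcoef(signals)              # subjects x subjects correlation matrix
    return r[np.triu_indices(n, k=1)].mean()

rng = np.random.default_rng(3)
shared = np.sin(np.linspace(0, 20 * np.pi, 3000))          # stimulus-driven component
subjects = shared + 0.8 * rng.standard_normal((12, 3000))  # 12 listeners plus noise
print("ISC:", round(inter_subject_correlation(subjects), 3))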
Viral tunes: changes in musical behaviours and interest in coronamusic predict socio-emotional coping during COVID-19 lockdown.
Fink, L. K.; Warrenburg, L. A.; Howlin, C.; Randall, W. M.; Hansen, N. C.; and Wald-Fuhrmann, M.
Humanities and Social Sciences Communications, 8(1): 1–11. 2021.
\n\n\n\n \n \n \"Viral link\n  \n \n\n \n\n \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 2 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{fink2021viral,\n  title={Viral tunes: changes in musical behaviours and interest in coronamusic predict socio-emotional coping during COVID-19 lockdown},\n  author={Fink, Lauren K and Warrenburg, Lindsay A and Howlin, Claire and Randall, William M and Hansen, Niels Chr and Wald-Fuhrmann, Melanie},\n  journal={Humanities and Social Sciences Communications},\n  volume={8},\n  number={1},\n  pages={1--11},\n  year={2021},\n  publisher={Palgrave},\n  url_Link={https://doi.org/10.1057/s41599-021-00858-y},\n  abstract={Beyond immediate health risks, the COVID-19 pandemic poses a variety of stressors, which may require expensive or unavailable strategies during a pandemic (e.g., therapy, socialising). Here, we asked whether musical engagement is an effective strategy for socio-emotional coping. During the first lockdown period (April–May 2020), we surveyed changes in music listening and making behaviours of over 5000 people, with representative samples from three continents. More than half of respondents reported engaging with music to cope. People experiencing increased negative emotions used music for solitary emotional regulation, whereas people experiencing increased positive emotions used music as a proxy for social interaction. Light gradient-boosted regressor models were used to identify the most important predictors of an individual’s use of music to cope, the foremost of which was, intriguingly, their interest in “coronamusic.” Overall, our results emphasise the importance of real-time musical responses to societal crises, as well as individually tailored adaptations in musical behaviours to meet socio-emotional needs.}\n}\n\n
\n
\n\n\n
\n Beyond immediate health risks, the COVID-19 pandemic poses a variety of stressors, which may require expensive or unavailable strategies during a pandemic (e.g., therapy, socialising). Here, we asked whether musical engagement is an effective strategy for socio-emotional coping. During the first lockdown period (April–May 2020), we surveyed changes in music listening and making behaviours of over 5000 people, with representative samples from three continents. More than half of respondents reported engaging with music to cope. People experiencing increased negative emotions used music for solitary emotional regulation, whereas people experiencing increased positive emotions used music as a proxy for social interaction. Light gradient-boosted regressor models were used to identify the most important predictors of an individual’s use of music to cope, the foremost of which was, intriguingly, their interest in “coronamusic.” Overall, our results emphasise the importance of real-time musical responses to societal crises, as well as individually tailored adaptations in musical behaviours to meet socio-emotional needs.\n
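Ranking predictors with a gradient-boosted regressor, as described above, can be sketched as follows. This example uses scikit-learn's GradientBoostingRegressor as a stand-in for the authors' light gradient-boosted (LightGBM) models, and the feature names and data are invented, not the study's survey items.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(4)
n = 500
# Invented survey-style predictors (stand-ins for the real questionnaire items).
X = rng.normal(size=(n, 3))
feature_names = ["coronamusic_interest", "negative_emotions", "musical_training"]
# Toy outcome ('coping via music'), driven mostly by the first predictor.
y = 1.5 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(scale=0.5, size=n)

model = GradientBoostingRegressor(random_state=0).fit(X, y)
for name, imp in sorted(zip(feature_names, model.feature_importances_),
                        key=lambda pair: -pair[1]):
    print(f"{name:22s} {imp:.3f}")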
Perception of Nigerian dùndún talking drum performances as speech-like vs. music-like: The role of familiarity and acoustic cues.
Durojaye, C.; Fink, L.; Roeske, T.; Wald-Fuhrmann, M.; and Larrouy-Maestri, P.
Frontiers in Psychology, 12: 652673. 2021.
\n\n\n\n \n \n \"Perception link\n  \n \n\n \n\n \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{durojaye2021perception,\n  title={Perception of Nigerian d{\\`u}nd{\\'u}n talking drum performances as speech-like vs. music-like: The role of familiarity and acoustic cues},\n  author={Durojaye, Cecilia and Fink, Lauren and Roeske, Tina and Wald-Fuhrmann, Melanie and Larrouy-Maestri, Pauline},\n  journal={Frontiers in Psychology},\n  volume={12},\n  pages={652673},\n  year={2021},\n  publisher={Frontiers},\n  url_Link={https://doi.org/10.3389/fpsyg.2021.652673},\n  abstract={It seems trivial to identify sound sequences as music or speech, particularly when the sequences come from different sound sources, such as an orchestra and a human voice. Can we also easily distinguish these categories when the sequence comes from the same sound source? On the basis of which acoustic features? We investigated these questions by examining listeners’ classification of sound sequences performed by an instrument intertwining both speech and music: the dùndún talking drum. The dùndún is commonly used in south-west Nigeria as a musical instrument but is also perfectly fit for linguistic usage in what has been described as speech surrogates in Africa. One hundred seven participants from diverse geographical locations (15 different mother tongues represented) took part in an online experiment. Fifty-one participants reported being familiar with the dùndún talking drum, 55% of those being speakers of Yorùbá. During the experiment, participants listened to 30 dùndún samples of about 7s long, performed either as music or Yorùbá speech surrogate (n = 15 each) by a professional musician, and were asked to classify each sample as music or speech-like. The classification task revealed the ability of the listeners to identify the samples as intended by the performer, particularly when they were familiar with the dùndún, though even unfamiliar participants performed above chance. A logistic regression predicting participants’ classification of the samples from several acoustic features confirmed the perceptual relevance of intensity, pitch, timbre, and timing measures and their interaction with listener familiarity. In all, this study provides empirical evidence supporting the discriminating role of acoustic features and the modulatory role of familiarity in teasing apart speech and music.}\n}\n\n
\n
\n\n\n
\n It seems trivial to identify sound sequences as music or speech, particularly when the sequences come from different sound sources, such as an orchestra and a human voice. Can we also easily distinguish these categories when the sequence comes from the same sound source? On the basis of which acoustic features? We investigated these questions by examining listeners’ classification of sound sequences performed by an instrument intertwining both speech and music: the dùndún talking drum. The dùndún is commonly used in south-west Nigeria as a musical instrument but is also perfectly fit for linguistic usage in what has been described as speech surrogates in Africa. One hundred seven participants from diverse geographical locations (15 different mother tongues represented) took part in an online experiment. Fifty-one participants reported being familiar with the dùndún talking drum, 55% of those being speakers of Yorùbá. During the experiment, participants listened to 30 dùndún samples of about 7s long, performed either as music or Yorùbá speech surrogate (n = 15 each) by a professional musician, and were asked to classify each sample as music or speech-like. The classification task revealed the ability of the listeners to identify the samples as intended by the performer, particularly when they were familiar with the dùndún, though even unfamiliar participants performed above chance. A logistic regression predicting participants’ classification of the samples from several acoustic features confirmed the perceptual relevance of intensity, pitch, timbre, and timing measures and their interaction with listener familiarity. In all, this study provides empirical evidence supporting the discriminating role of acoustic features and the modulatory role of familiarity in teasing apart speech and music.\n
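A logistic regression predicting music- vs. speech-like classifications from acoustic features, the kind of analysis reported above, can be sketched as below. The feature names, simulated responses, and coefficients are invented stand-ins, not the study's measurements or model.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 300
# Invented per-trial acoustic features: [intensity variability, pulse clarity, pitch range].
X = rng.normal(size=(n, 3))
# Toy ground truth: 'music-like' responses become more likely with clearer pulse.
p = 1 / (1 + np.exp(-(1.8 * X[:, 1] + 0.6 * X[:, 0])))
y = rng.binomial(1, p)                      # 1 = classified as music, 0 = speech-like

clf = LogisticRegression().fit(X, y)
print("coefficients:", clf.coef_.round(2))
print("accuracy    :", round(clf.score(X, y), 2))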
Computational models of temporal expectations.
Fink, L. K.
In Proceedings of the Future Directions of Music Cognition International Conference, 6–7 March 2021, pages 208–213, 2021. Ohio State University Libraries.
\n\n\n\n \n \n \"Computational link\n  \n \n\n \n\n \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{fink2021computational,\n  title={Computational models of temporal expectations},\n  author={Fink, Lauren K},\n  booktitle={Proceedings of the Future Directions of Music Cognition International Conference, 6--7 March 2021},\n  pages={208--213},\n  year={2021},\n  organization={Ohio State University Libraries},\n  url_Link={https://doi.org/10.18061/FDMC.2021.0041},\n  abstract={With Western, tonal music, the expectedness of any given note or chord can be estimated using various methodologies, from perceptual distance to information content. However, in the realm of rhythm and meter, the same sort of predictive capability is lacking. To date, most computational models have focused on predicting meter (a global cognitive framework for listening), rather than fluctuations in metric attention or expectations at each moment in time. This theoretical contribution reviews existing models, noting current capabilities and outlining necessities for future work.}\n}\n\n\n\n\n
\n
\n\n\n
\n With Western, tonal music, the expectedness of any given note or chord can be estimated using various methodologies, from perceptual distance to information content. However, in the realm of rhythm and meter, the same sort of predictive capability is lacking. To date, most computational models have focused on predicting meter (a global cognitive framework for listening), rather than fluctuations in metric attention or expectations at each moment in time. This theoretical contribution reviews existing models, noting current capabilities and outlining necessities for future work.\n
2020 (3)
Neural harmonics of syntactic structure.
Tavano, A.; Blohm, S.; Knoop, C. A.; Muralikrishnan, R.; Fink, L.; Scharinger, M.; Wagner, V.; Thiele, D.; Ghitza, O.; Ding, N.; and others.
bioRxiv. 2020.
\n\n\n\n \n \n \"Neural link\n  \n \n\n \n\n \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@misc{tavano2020neural,\n  title={Neural harmonics of syntactic structure},\n  author={Tavano, Alessandro and Blohm, Stefan and Knoop, Christine A. and Muralikrishnan, R and Fink, Lauren and Scharinger, Mathias and Wagner, Valentin and Thiele, Dominik and Ghitza, Oded and Ding, Nai and others},\n  journal={bioRxiv},\n  number={https://doi.org/10.1101/2020.04.08.03157},\n  year={2020},\n  url_Link={https://www.biorxiv.org/content/10.1101/2020.04.08.031575v3},\n  abstract={Can neural activity reveal syntactic structure building processes and their violations? To verify this, we recorded electroencephalographic and behavioral data as participants discriminated concatenated isochronous sentence chains containing only grammatical sentences (regular trials) from those containing ungrammatical sentences (irregular trials). We found that the repetition of abstract syntactic categories generates a harmonic structure of their period independently of stimulus rate, thereby separating endogenous from exogenous neural rhythms. Behavioral analyses confirmed this dissociation. Internal neural harmonics extracted from regular trials predicted participants’ grammatical sensitivity better than harmonics extracted from irregular trials, suggesting a direct reflection of grammatical sensitivity. Instead, entraining to external stimulus rate scaled with task sensitivity only when extracted from irregular trials, reflecting attention-capture processing. Neural harmonics to repeated syntactic categories constitute the first behaviorally relevant, purely internal index of syntactic competence.}\n}\n\n
\n
\n\n\n
\n Can neural activity reveal syntactic structure building processes and their violations? To verify this, we recorded electroencephalographic and behavioral data as participants discriminated concatenated isochronous sentence chains containing only grammatical sentences (regular trials) from those containing ungrammatical sentences (irregular trials). We found that the repetition of abstract syntactic categories generates a harmonic structure of their period independently of stimulus rate, thereby separating endogenous from exogenous neural rhythms. Behavioral analyses confirmed this dissociation. Internal neural harmonics extracted from regular trials predicted participants’ grammatical sensitivity better than harmonics extracted from irregular trials, suggesting a direct reflection of grammatical sensitivity. Instead, entraining to external stimulus rate scaled with task sensitivity only when extracted from irregular trials, reflecting attention-capture processing. Neural harmonics to repeated syntactic categories constitute the first behaviorally relevant, purely internal index of syntactic competence.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n Acoustic and linguistic features influence talker change detection.\n \n \n\n\n \n Sharma, N. K.; Krishnamohan, V.; Ganapathy, S.; Gangopadhayay, A.; and Fink, L.\n\n\n \n\n\n\n The Journal of the Acoustical Society of America, 148(5): EL414–EL419. 2020.\n \n\n\n\n
\n\n\n\n \n \n \"Acoustic link\n  \n \n\n \n\n \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{sharma2020acoustic,\n  title={Acoustic and linguistic features influence talker change detection},\n  author={Sharma, Neeraj Kumar and Krishnamohan, Venkat and Ganapathy, Sriram and Gangopadhayay, Ahana and Fink, Lauren},\n  journal={The Journal of the Acoustical Society of America},\n  volume={148},\n  number={5},\n  pages={EL414--EL419},\n  year={2020},\n  publisher={Acoustical Society of America},\n  url_Link={https://doi.org/10.1121/10.0002462},\n  abstract={A listening test is proposed in which human participants detect talker changes in two natural, multi-talker speech stimuli sets—a familiar language (English) and an unfamiliar language (Chinese). Miss rate, false-alarm rate, and response times (RT) showed a significant dependence on language familiarity. Linear regression modeling of RTs using diverse acoustic features derived from the stimuli showed recruitment of a pool of acoustic features for the talker change detection task. Further, benchmarking the same task against the state-of-the-art machine diarization system showed that the machine system achieves human parity for the familiar language but not for the unfamiliar language.}\n}\n\n
\n
\n\n\n
\n A listening test is proposed in which human participants detect talker changes in two natural, multi-talker speech stimuli sets—a familiar language (English) and an unfamiliar language (Chinese). Miss rate, false-alarm rate, and response times (RT) showed a significant dependence on language familiarity. Linear regression modeling of RTs using diverse acoustic features derived from the stimuli showed recruitment of a pool of acoustic features for the talker change detection task. Further, benchmarking the same task against the state-of-the-art machine diarization system showed that the machine system achieves human parity for the familiar language but not for the unfamiliar language.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n On The Impact of Language Familiarity in Talker Change Detection.\n \n \n\n\n \n Sharma, N.; Krishnamohan, V.; Ganapathy, S.; Gangopadhayay, A.; and Fink, L.\n\n\n \n\n\n\n In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6249–6253, 2020. IEEE\n \n\n\n\n
\n\n\n\n \n \n \"On link\n  \n \n\n \n\n \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{sharma2020impact,\n  title={On The Impact of Language Familiarity in Talker Change Detection},\n  author={Sharma, Neeraj and Krishnamohan, Venkat and Ganapathy, Sriram and Gangopadhayay, Ahana and Fink, Lauren},\n  booktitle={ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},\n  pages={6249--6253},\n  year={2020},\n  organization={IEEE},\n  url_Link={https://doi.org/10.1109/ICASSP40776.2020.9054294},\n  abstract={The ability to detect talker changes when listening to conversational speech is fundamental to perception and understanding of multitalker speech. In this paper, we propose an experimental paradigm to provide insights on the impact of language familiarity on talker change detection. Two multi-talker speech stimulus sets, one in a language familiar to the listeners (English) and the other unfamiliar (Chinese), are created. A listening test is performed in which listeners indicate the number of talkers in the presented stimuli. Analysis of human performance shows statistically significant results for: (a) lower miss (and a higher false alarm) rate in familiar versus unfamiliar language, and (b) longer response time in familiar versus unfamiliar language. These results signify a link between perception of talker attributes and language proficiency. Subsequently, a machine system is designed to perform the same task. The system makes use of the current state-of-the-art diarization approach with x-vector embeddings. A performance comparison on the same stimulus set indicates that the machine system falls short of human performance by a huge margin, for both languages.}\n}\n\n\n\n\n
\n
\n\n\n
\n The ability to detect talker changes when listening to conversational speech is fundamental to perception and understanding of multitalker speech. In this paper, we propose an experimental paradigm to provide insights on the impact of language familiarity on talker change detection. Two multi-talker speech stimulus sets, one in a language familiar to the listeners (English) and the other unfamiliar (Chinese), are created. A listening test is performed in which listeners indicate the number of talkers in the presented stimuli. Analysis of human performance shows statistically significant results for: (a) lower miss (and a higher false alarm) rate in familiar versus unfamiliar language, and (b) longer response time in familiar versus unfamiliar language. These results signify a link between perception of talker attributes and language proficiency. Subsequently, a machine system is designed to perform the same task. The system makes use of the current state-of-the-art diarization approach with x-vector embeddings. A performance comparison on the same stimulus set indicates that the machine system falls short of human performance by a huge margin, for both languages.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2019\n \n \n (2)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n Predicting sensorimotor synchronization and attention to music using a linear oscillator model, eye-tracking, and electroencephalography.\n \n \n\n\n \n Fink, L. K.\n\n\n \n\n\n\n Ph.D. Thesis, University of California, Davis, 2019.\n \n\n\n\n
\n\n\n\n \n \n \"Predicting link\n  \n \n\n \n\n \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@phdthesis{fink2019predicting,\n  title={Predicting sensorimotor synchronization and attention to music using a linear oscillator model, eye-tracking, and electroencephalography},\n  author={Fink, Lauren Kathleen},\n  year={2019},\n  school={University of California, Davis},\n  url_Link={https://www.proquest.com/openview/4f395d7c6734b047d2ee4aa2fe7f6ac6/1.pdf?pq-origsite=gscholar&cbl=18750&diss=y},\n  abstract={Rhythm is a ubiquitous feature of music that induces specific neural modes of processing. In the following studies, we employ a computational model to predict fluctuations in attention as a function of temporal structure. We test our predictions against behavioral indices of attention, namely perceptual thresholds (Chs. 2 & 3) and subjective reports (Ch 4), as well as neural markers of attention – changes in pupil size (Chs. 2-4) and cortical activity recorded via electroencephalography (EEG; Ch 3). Chapter 1 highlights key theories regarding the cognitive and neurophysiological underpinnings of changes in pupil size to situate a discussion of the pupillary response to sound in future chapters. Chapters 2 and 3 detail the results of variations on an adaptive thresholding experiment in which participants detect deviants embedded into rhythmic patterns at multiple temporal locations. In both intensity increment (Ch 2) and decrement (Ch 3) versions of the experiment, we observed 1) perceptual thresholds vary as a function of output from the computational model, 2) a pupil dilation response to detected and missed (below perceptual threshold) deviants, with evoked pupil amplitude predicting participants’ responses. Chapter 3 discusses preliminary EEG results and the relationship between pupillary and cortical indices of auditory attention. In chapters 2 and 3 we also analyze the continuous pupillary response to the various rhythmic patterns and show entrainment to predicted prominent periodicities, as well as coherence between the pupil signal and the modelled temporal salience predictions. We extend these findings in Chapter 4, showing pupillary entrainment to complex, ‘real-world’ music that is predicted by participants’ absorption and familiarity ratings. We conclude that the model is relevant in predicting the temporal salience of complex stimuli and that the continuous pupillary signal can reveal psychologically relevant, fine-grained information about an attended auditory stimulus.}\n}\n\n
\n
\n\n\n
\n Rhythm is a ubiquitous feature of music that induces specific neural modes of processing. In the following studies, we employ a computational model to predict fluctuations in attention as a function of temporal structure. We test our predictions against behavioral indices of attention, namely perceptual thresholds (Chs. 2 & 3) and subjective reports (Ch 4), as well as neural markers of attention – changes in pupil size (Chs. 2-4) and cortical activity recorded via electroencephalography (EEG; Ch 3). Chapter 1 highlights key theories regarding the cognitive and neurophysiological underpinnings of changes in pupil size to situate a discussion of the pupillary response to sound in future chapters. Chapters 2 and 3 detail the results of variations on an adaptive thresholding experiment in which participants detect deviants embedded into rhythmic patterns at multiple temporal locations. In both intensity increment (Ch 2) and decrement (Ch 3) versions of the experiment, we observed 1) perceptual thresholds vary as a function of output from the computational model, 2) a pupil dilation response to detected and missed (below perceptual threshold) deviants, with evoked pupil amplitude predicting participants’ responses. Chapter 3 discusses preliminary EEG results and the relationship between pupillary and cortical indices of auditory attention. In chapters 2 and 3 we also analyze the continuous pupillary response to the various rhythmic patterns and show entrainment to predicted prominent periodicities, as well as coherence between the pupil signal and the modelled temporal salience predictions. We extend these findings in Chapter 4, showing pupillary entrainment to complex, ‘real-world’ music that is predicted by participants’ absorption and familiarity ratings. We conclude that the model is relevant in predicting the temporal salience of complex stimuli and that the continuous pupillary signal can reveal psychologically relevant, fine-grained information about an attended auditory stimulus. \n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n The application of eye-tracking in music research.\n \n \n\n\n \n Fink, L. K; Lange, E. B; and Groner, R.\n\n\n \n\n\n\n Journal of Eye Movement Research, 11(2): 1. 2019.\n \n\n\n\n
\n\n\n\n \n \n \"The link\n  \n \n\n \n\n \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{fink2019application,\n  title={The application of eye-tracking in music research},\n  author={Fink, Lauren K and Lange, Elke B and Groner, Rudolf},\n  journal={Journal of Eye Movement Research},\n  volume={11},\n  number={2},\n  pages={1},\n  year={2019},\n  url_Link={https://doi.org/10.16910/jemr.11.2.1},\n  abstract={Though eye-tracking is typically a methodology applied in the visual research domain, recent studies suggest its relevance in the context of music research. There exists a community of researchers interested in this kind of research from varied disciplinary backgrounds scattered across the globe. Therefore, in August 2017, an international conference was held at the Max Planck Institute for Empirical Aesthetics in Frankfurt, Germany, to bring this research community together. The conference was dedicated to the topic of music and eye-tracking, asking the question: what do eye movements, pupil dilation, and blinking activity tell us about musical processing? This special issue is constituted of top-scoring research from the conference and spans a range of music-related topics. From tracking the gaze of performers in musical trios to basic research on how eye movements are affected by background music, the contents of this special issue highlight a variety of experimental approaches and possible applications of eye-tracking in music research.}\n}\n\n\n\n
\n
\n\n\n
\n Though eye-tracking is typically a methodology applied in the visual research domain, recent studies suggest its relevance in the context of music research. There exists a community of researchers interested in this kind of research from varied disciplinary backgrounds scattered across the globe. Therefore, in August 2017, an international conference was held at the Max Planck Institute for Empirical Aesthetics in Frankfurt, Germany, to bring this research community together. The conference was dedicated to the topic of music and eye-tracking, asking the question: what do eye movements, pupil dilation, and blinking activity tell us about musical processing? This special issue is constituted of top-scoring research from the conference and spans a range of music-related topics. From tracking the gaze of performers in musical trios to basic research on how eye movements are affected by background music, the contents of this special issue highlight a variety of experimental approaches and possible applications of eye-tracking in music research.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2018\n \n \n (2)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n A linear oscillator model predicts dynamic temporal attention and pupillary entrainment to rhythmic patterns.\n \n \n\n\n \n Fink, L. K; Hurley, B. K; Geng, J. J; and Janata, P.\n\n\n \n\n\n\n Journal of Eye Movement Research, 11(2): 12. 2018.\n \n\n\n\n
\n\n\n\n \n \n \"A link\n  \n \n\n \n\n \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{fink2018linear,\n  title={A linear oscillator model predicts dynamic temporal attention and pupillary entrainment to rhythmic patterns},\n  author={Fink, Lauren K and Hurley, Brian K and Geng, Joy J and Janata, Petr},\n  journal={Journal of Eye Movement Research},\n  volume={11},\n  number={2},\n  pages={12},\n  year={2018},\n  url_Link={https://doi.org/10.16910/jemr.11.2.12},\n  abstract={Rhythm is a ubiquitous feature of music that induces specific neural modes of processing. In this paper, we assess the potential of a stimulus-driven linear oscillator model to predict dynamic attention to complex musical rhythms on an instant-by-instant basis. We use perceptual thresholds and pupillometry as attentional indices against which to test our model predictions. During a deviance detection task, participants listened to continuously looping, multi-instrument, rhythmic patterns, while being eye-tracked. Their task was to respond anytime they heard an increase in intensity (dB SPL). An adaptive thresholding algorithm adjusted deviant intensity at multiple probed temporal locations throughout each rhythmic stimulus. The oscillator model predicted participants’ perceptual thresholds for detecting deviants at probed locations, with a low temporal salience prediction corresponding to a high perceptual threshold and vice versa. A pupil dilation response was observed for all deviants. Notably, the pupil dilated even when participants did not report hearing a deviant. Maximum pupil size and resonator model output were significant predictors of whether a deviant was detected or missed on any given trial. Besides the evoked pupillary response to deviants, we also assessed the continuous pupillary signal to the rhythmic patterns. The pupil exhibited entrainment at prominent periodicities present in the stimuli and followed each of the different rhythmic patterns in a unique way. Overall, these results replicate previous studies using the linear oscillator model to predict dynamic attention to complex auditory scenes and extend the utility of the model to the prediction of neurophysiological signals, in this case the pupillary time course; however, we note that the amplitude envelope of the acoustic patterns may serve as a similarly useful predictor. To our knowledge, this is the first paper to show entrainment of pupil dynamics by demonstrating a phase relationship between musical stimuli and the pupillary signal.}\n}\n\n
\n
\n\n\n
\n Rhythm is a ubiquitous feature of music that induces specific neural modes of processing. In this paper, we assess the potential of a stimulus-driven linear oscillator model to predict dynamic attention to complex musical rhythms on an instant-by-instant basis. We use perceptual thresholds and pupillometry as attentional indices against which to test our model predictions. During a deviance detection task, participants listened to continuously looping, multi-instrument, rhythmic patterns, while being eye-tracked. Their task was to respond anytime they heard an increase in intensity (dB SPL). An adaptive thresholding algorithm adjusted deviant intensity at multiple probed temporal locations throughout each rhythmic stimulus. The oscillator model predicted participants’ perceptual thresholds for detecting deviants at probed locations, with a low temporal salience prediction corresponding to a high perceptual threshold and vice versa. A pupil dilation response was observed for all deviants. Notably, the pupil dilated even when participants did not report hearing a deviant. Maximum pupil size and resonator model output were significant predictors of whether a deviant was detected or missed on any given trial. Besides the evoked pupillary response to deviants, we also assessed the continuous pupillary signal to the rhythmic patterns. The pupil exhibited entrainment at prominent periodicities present in the stimuli and followed each of the different rhythmic patterns in a unique way. Overall, these results replicate previous studies using the linear oscillator model to predict dynamic attention to complex auditory scenes and extend the utility of the model to the prediction of neurophysiological signals, in this case the pupillary time course; however, we note that the amplitude envelope of the acoustic patterns may serve as a similarly useful predictor. To our knowledge, this is the first paper to show entrainment of pupil dynamics by demonstrating a phase relationship between musical stimuli and the pupillary signal.\n
\n\n\n
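The stimulus-driven damped linear oscillator ("resonator") model summarized in the abstract above can be sketched in a few lines of Python. This is a minimal illustrative toy under assumed parameters, not the published implementation: the example rhythm, sample rate, damping constant, and candidate periodicities are all arbitrary choices, and summed oscillator energy merely stands in for the model's temporal-salience output.

import numpy as np

# Illustrative toy only (not the published model): drive a small bank of damped
# linear oscillators, x'' + 2*beta*x' + w0^2*x = s(t), with a binary onset train
# and read out summed oscillator energy as a rough moment-by-moment salience proxy.
# All parameter values below are arbitrary assumptions for this sketch.

fs = 100                                   # model sample rate (Hz)
pattern = [1, 0, 0, 1, 0, 1, 0, 0]         # one cycle of a toy rhythm (1 = onset)
ioi = 0.25                                 # grid spacing between pattern slots (s)
n_loops = 8                                # loop the pattern, as with looping stimuli

# Binary onset train s(t)
onsets = np.tile(pattern, n_loops)
s = np.zeros(int(len(onsets) * ioi * fs))
s[(np.arange(len(onsets)) * ioi * fs).astype(int)] = onsets

periods = np.array([0.25, 0.5, 1.0, 2.0])  # candidate periodicities (s)
w0 = 2 * np.pi / periods                   # natural frequencies (rad/s)
beta = 2.0                                 # damping constant (1/s)
dt = 1.0 / fs

x = np.zeros(len(periods))                 # oscillator positions
v = np.zeros(len(periods))                 # oscillator velocities
energy = np.zeros((len(s), len(periods)))

for t in range(len(s)):
    a = s[t] - 2 * beta * v - (w0 ** 2) * x   # acceleration from the driven ODE
    v = v + a * dt                            # semi-implicit Euler: velocity first,
    x = x + v * dt                            # then position with the new velocity
    energy[t] = 0.5 * (v ** 2 + (w0 * x) ** 2)

salience = energy.sum(axis=1)              # candidate "temporal salience" time course

In the studies listed here, a salience time course of this kind would then be compared against listeners' detection thresholds and pupillary measures at the probed time points.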
\n\n\n
\n \n\n \n \n \n \n Mapping the dynamic allocation of temporal attention in musical patterns.\n \n \n\n\n \n Hurley, B. K; Fink, L. K; and Janata, P.\n\n\n \n\n\n\n Journal of Experimental Psychology: Human Perception and Performance, 44(11): 1694. 2018.\n \n\n\n\n
\n\n\n\n \n \n \"Mapping link\n  \n \n\n \n\n \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{hurley2018mapping,\n  title={Mapping the dynamic allocation of temporal attention in musical patterns.},\n  author={Hurley, Brian K and Fink, Lauren K and Janata, Petr},\n  journal={Journal of Experimental Psychology: Human Perception and Performance},\n  volume={44},\n  number={11},\n  pages={1694},\n  year={2018},\n  url_Link={https://doi.org/10.1037/xhp0000563},\n  publisher={American Psychological Association},\n  abstract={Many environmental sounds, such as music or speech, are patterned in time. Dynamic attending theory, and supporting empirical evidence, suggests that a stimulus’s temporal structure serves to orient attention to specific moments in time. One instantiation of this theory posits that attention synchronizes to the temporal structure of a stimulus in an oscillatory fashion, with optimal perception at salient time points or oscillation peaks. We examined whether a model consisting of damped linear oscillators succeeds at predicting temporal attention behavior in rhythmic multi-instrumental music. We conducted 3 experiments in which we mapped listeners’ perceptual sensitivity by estimating detection thresholds for intensity deviants embedded at multiple time points within a stimulus pattern. We compared participants’ thresholds for detecting intensity changes at various time points with the modeled salience prediction at each of those time points. Across all experiments, results showed that the resonator model predicted listener thresholds, such that listeners were more sensitive to probes at time points corresponding to greater model-predicted salience. This effect held for both intensity increment and decrement probes and for metrically simple and complex stimuli. Moreover, the resonator model explained the data better than did predictions based on canonical metric hierarchy or auditory scene density. Our results offer new insight into the temporal orienting of attention in complex auditory scenes using a parsimonious computational model for predicting attentional dynamics.}\n}\n\n\n\n\n\n\n\n
\n
\n\n\n
\n Many environmental sounds, such as music or speech, are patterned in time. Dynamic attending theory, and supporting empirical evidence, suggests that a stimulus’s temporal structure serves to orient attention to specific moments in time. One instantiation of this theory posits that attention synchronizes to the temporal structure of a stimulus in an oscillatory fashion, with optimal perception at salient time points or oscillation peaks. We examined whether a model consisting of damped linear oscillators succeeds at predicting temporal attention behavior in rhythmic multi-instrumental music. We conducted 3 experiments in which we mapped listeners’ perceptual sensitivity by estimating detection thresholds for intensity deviants embedded at multiple time points within a stimulus pattern. We compared participants’ thresholds for detecting intensity changes at various time points with the modeled salience prediction at each of those time points. Across all experiments, results showed that the resonator model predicted listener thresholds, such that listeners were more sensitive to probes at time points corresponding to greater model-predicted salience. This effect held for both intensity increment and decrement probes and for metrically simple and complex stimuli. Moreover, the resonator model explained the data better than did predictions based on canonical metric hierarchy or auditory scene density. Our results offer new insight into the temporal orienting of attention in complex auditory scenes using a parsimonious computational model for predicting attentional dynamics.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2017\n \n \n (2)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n Allen Otte Folio.\n \n \n\n\n \n Lane, J.; and Fink, L.\n\n\n \n\n\n\n 2017.\n \n\n\n\n
\n\n\n\n \n \n \"Allen link\n  \n \n\n \n\n \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@misc{lane2017allen,\n  title={Allen Otte Folio},\n  author={Lane, John and Fink, Lauren},\n  year={2017},\n  url_Link={https://mediapressmusic.com/allen-otte-folio-various/},\n  publisher={Media Press}\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n Chance operations in neuroscience.\n \n \n\n\n \n Fink, L. K.\n\n\n \n\n\n\n In Allen Otte Folio, volume 1, pages 17–20. Media Press, 2017.\n \n\n\n\n
\n\n\n\n \n \n \"Chance link\n  \n \n\n \n\n \n \n\n bibtex\n \n\n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@incollection{fink2017chance,\n  title={Chance operations in neuroscience},\n  author={Fink, Lauren K.},\n  booktitle={Allen Otte Folio},\n  volume={1},\n  pages={17--20},\n  year={2017},\n  url_Link={https://mediapressmusic.com/allen-otte-folio-various/},\n  publisher={Media Press}\n}\n\n\n\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2016\n \n \n (1)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n The greatest.\n \n \n\n\n \n Fink, L. K.\n\n\n \n\n\n\n Ethnomusicology Review, 22(2). 2016.\n \n\n\n\n
\n\n\n\n \n \n \"The link\n  \n \n\n \n\n \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{fink2016greatest,\n  title={The greatest},\n  author={Fink, Lauren K.},\n  journal={Ethnomusicology Review},\n  volume={22},\n  number={2},\n  year={2016},\n  url_Link={https://ethnomusicologyreview.ucla.edu/content/greatest}\n}\n\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2014\n \n \n (1)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n Music modulates eyeblinks: An examination of temporal coordination.\n \n \n\n\n \n Fink, L.\n\n\n \n\n\n\n Ph.D. Thesis, University of Cambridge, 2014.\n \n\n\n\n
\n\n\n\n \n \n \"Music link\n  \n \n\n \n\n \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@phdthesis{fink2014music,\n  title={Music modulates eyeblinks: An examination of temporal coordination},\n  author={Fink, Lauren},\n  year={2014},\n  school={University of Cambridge},\n  url_Link={https://doi.org/10.13140/RG.2.2.10645.65766},\n  abstract={Eyeblinks have yet to attract significant attention in music cognition research, though they have been studied extensively in other domains. Rather than an artifact to be removed in eye tracking or EEG data, eyeblinks, and their connection with musical behaviors, warrant proper exploration. \nBackground: Eyeblinks tend to occur at structurally salient breaks during both reading and speech; they are likely to occur at the ends of sentences and paragraphs in a text, or at pauses and turns in speech (Orchard & Stern, 1991; Cummins, 2012). Interestingly, blinks are often synchronized, or temporally coordinated, between speakers (Nakano & Kitazawa, 2010); however, individuals with autism spectrum disorders fail to show such synchrony, perhaps indicating that temporal coordination is at the root of social communication impairments (Nakano et al., 2011). Further, eyeblinks can be read as indicators of a variety of psychological and clinical states (Oh et al., 2012). Mirroring attention/arousal and modulated by dopamine (DA), eyeblinks reveal information about sleepiness, attentiveness, and the difficulty of a task (Ponder & Kennedy, 1927; Schleicher et al., 2008). Blink rate (BR) is directly proportional to DA levels, with Parkinson’s patients (low DA/low BR) and schizophrenics (high DA/high BR) at opposite ends of the dopamine/blinking spectrum (Barbato et al., 2012; Colzato et al., 2009; Esteban et al., 2004). Such dopamine-linked disorders typically involve disruptions in timing and/or motor processes, mediated by brainstem structures like the basal ganglia and cerebellum. Eyeblink analysis is an established neuropsychological tool – used to evaluate dopamine function, cognitive load, and both temporal and social coordination. Such analysis can reasonably be expected to be relevant in the scientific study of music.\nPresent Aims: Because eyeblinks have clear social and clinical implications, the goal of this thesis is to examine the role eyeblinks might play in music cognition and to discuss the results of a sightreading experiment conducted at the Conservatorium van Amsterdam. Results of the experiment suggest that, in general, eyeblinks are suppressed while sight-reading; however, blinks that do occur tend to be at musical phrase transitions or at other structurally relevant musical instances. While there is variability across participants in average number of blinks per reading, there is an incredible amount of consistency on an individual basis in average number of blinks, as well as musical/temporal location of blinks across readings. Overall, it seems that eyeblinks provide insights into an individual’s chunking of musical information and are likely to be a particularly useful evaluative tool in pedagogical and/or therapeutic settings, in addition to experimental ones.}\n}\n\n\n\n
\n
\n\n\n
\n Eyeblinks have yet to attract significant attention in music cognition research, though they have been studied extensively in other domains. Rather than an artifact to be removed in eye tracking or EEG data, eyeblinks, and their connection with musical behaviors, warrant proper exploration. Background: Eyeblinks tend to occur at structurally salient breaks during both reading and speech; they are likely to occur at the ends of sentences and paragraphs in a text, or at pauses and turns in speech (Orchard & Stern, 1991; Cummins, 2012). Interestingly, blinks are often synchronized, or temporally coordinated, between speakers (Nakano & Kitazawa, 2010); however, individuals with autism spectrum disorders fail to show such synchrony, perhaps indicating that temporal coordination is at the root of social communication impairments (Nakano et al., 2011). Further, eyeblinks can be read as indicators of a variety of psychological and clinical states (Oh et al., 2012). Mirroring attention/arousal and modulated by dopamine (DA), eyeblinks reveal information about sleepiness, attentiveness, and the difficulty of a task (Ponder & Kennedy, 1927; Schleicher et al., 2008). Blink rate (BR) is directly proportional to DA levels, with Parkinson’s patients (low DA/low BR) and schizophrenics (high DA/high BR) at opposite ends of the dopamine/blinking spectrum (Barbato et al., 2012; Colzato et al., 2009; Esteban et al., 2004). Such dopamine-linked disorders typically involve disruptions in timing and/or motor processes, mediated by brainstem structures like the basal ganglia and cerebellum. Eyeblink analysis is an established neuropsychological tool – used to evaluate dopamine function, cognitive load, and both temporal and social coordination. Such analysis can reasonably be expected to be relevant in the scientific study of music. Present Aims: Because eyeblinks have clear social and clinical implications, the goal of this thesis is to examine the role eyeblinks might play in music cognition and to discuss the results of a sightreading experiment conducted at the Conservatorium van Amsterdam. Results of the experiment suggest that, in general, eyeblinks are suppressed while sight-reading; however, blinks that do occur tend to be at musical phrase transitions or at other structurally relevant musical instances. While there is variability across participants in average number of blinks per reading, there is an incredible amount of consistency on an individual basis in average number of blinks, as well as musical/temporal location of blinks across readings. Overall, it seems that eyeblinks provide insights into an individual’s chunking of musical information and are likely to be a particularly useful evaluative tool in pedagogical and/or therapeutic settings, in addition to experimental ones.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2013\n \n \n (1)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n Lauren K. Fink, percussion, Thursday, April 4, 2013.\n \n \n\n\n \n Fink, L. K\n\n\n \n\n\n\n CCM Programs. 2013.\n \n\n\n\n
\n\n\n\n \n \n \"Lauren link\n  \n \n\n \n\n \n \n\n bibtex\n \n\n \n\n \n  \n \n 2 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{fink2013lauren,\n  title={Lauren K. Fink, percussion, Thursday, April 4, 2013},\n  author={Fink, Lauren K},\n  journal={CCM Programs},\n  year={2013},\n  url_Link={https://lkfink.github.io/assets/Fink_RecitalProgramNotes.pdf},\n  publisher={University of Cincinnati. College-Conservatory of Music}\n}\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n\n\n\n
\n\n\n \n\n \n \n \n \n\n
\n"}; document.write(bibbase_data.data);