Moving hand reveals dynamics of thought. Magnuson, J. S.
Perception, cognition, and action occur over time. An organism must continuously and rapidly integrate sensory data with prior knowledge and potential actions at multiple timescales. This makes time of central importance in cognitive science. Crucial questions hinge on time, from the class of systems that may underlie cognition to debates about constraints on the functional organization of the brain and theoretical disputes in specialized domains. Often, the fine-grained time-course predictions that distinguish theories exceed the temporal resolution of available measures. In this issue of PNAS, Spivey et al. (1) introduce a method ("mouse tracking") that provides a continuous measure of underlying perception and cognition in online language processing, promising badly needed leverage for addressing theoretical impasses, narrow and broad. I will describe examples of theoretical debates that hinge on time course, the difficulties in assessing time, and how the strengths and limitations of the new method complement current techniques for estimating time course. I conclude with a discussion of the potential of the new method to extend the tools and implications of dynamical systems theory (DST) to higher-level cognition.

The Time-Course Quandary

Precious little of perception and cognition can be observed directly; it must instead be inferred from relationships between input and behavior, often from single postperceptual responses. A typical word-recognition paradigm is lexical decision. A subject sees or hears words and nonwords and hits keys indicating whether the stimulus was a word. The more frequently a word is used, the more quickly subjects respond yes; however, the more words that sound similar to it, the slower the response. So reaction times tell us a complex process of activation and competition underlies word recognition but indicate little beyond the approximate number of activated words and some of their characteristics.
Theories make conflicting predictions about precisely which words are activated, and how strongly each competes over time as a word is heard (2–4). The temporal resolution needed to test these conflicting predictions far exceeds that of methods like lexical decision.

A similar stalemate occurred in sentence processing. Theories agree that as each word is recognized, its grammatical category constrains the assembly of syntactic structures that in turn constrain semantics. The central theoretical debate is between syntax-first (5) and constraint-based (6, 7) theories. Syntax-first parsers initially construct the simplest syntactic structure without consideration of semantics and revise it later if syntax and semantics cannot be integrated. In constraint-based theories, semantics continuously constrain syntax. Distinguishing the theories requires testing how quickly the semantics of specific words influence syntactic parsing. Given the temporal resolution of conventional techniques, cases where the semantics of individual words appear to constrain syntactic parsing (supporting constraint-based views) can be accommodated by extending syntax-first to generate multiple possible parses in parallel, ordered by simplicity, with semantic evaluation lagging a few hundred milliseconds behind (8). Resolving this debate requires measures of precisely how early semantics influences syntax.

Tanenhaus et al. (9) provided a dramatic step forward when they estimated time course by tracking eye movements as subjects followed spoken instructions to perform visually guided movements. Fixations lagged ≈200 ms behind disambiguating acoustic information, which is remarkably close, because saccades require 150 ms to plan and launch (10). Their most striking result supported constraint-based theories. An ambiguous instruction that would require a syntax-first parser to
