Extraction of linguistic information from successive words during reading: evidence for spatially distributed lexical processing.

J Exp Psychol Hum Percept Perform

Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada.

Published: June 2013

Two experiments examined whether word recognition progresses strictly from one word to the next during reading, as maintained by sequential attention shift models such as the E-Z Reader model. The boundary technique was used to control the visibility of short, to-be-identified target words, so that they were either previewed in the parafovea or masked. The eyes skipped a masked target on more than a quarter of the trials; if word recognition and saccade targeting proceeded strictly from one word to the next, the fixation following such a skip must have been mislocated. Readers responded to skipped, parafoveally masked target words either with relatively long viewing durations on the following posttarget word or with corrective saccades that returned the eyes from the posttarget word to the target. Experiment 2 manipulated the timing of posttarget onset after target skipping, so that the posttarget word was visible either immediately upon fixation or only after a short delay. The delay influenced posttarget viewing even when, according to E-Z Reader 10 simulations, attention should have been focused at the target location. These findings favor theoretical conceptions in which lexical processing can encompass more than one word at a time.
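For readers unfamiliar with the paradigm, the boundary technique is a gaze-contingent display-change method: an invisible boundary sits just before the target word, a preview or mask occupies the target's slot while the eyes remain left of the boundary, and the display changes the moment the gaze crosses it. The Python sketch below illustrates only this control loop; FakeTracker, run_boundary_trial, the 10 px-per-character font metric, and the 20 px gaze step are all illustrative inventions, not any real eye-tracking API, and the actual experiments additionally delayed posttarget onset after skips.

class FakeTracker:
    """Stand-in eye tracker: gaze drifts rightward 20 px per sample."""
    def __init__(self):
        self.x = 0

    def get_gaze_x(self):
        self.x += 20
        return self.x


def run_boundary_trial(tracker, sentence, target_index, mask="xxxxx"):
    """Show `mask` in the target slot until gaze crosses the invisible
    boundary just left of the target, then swap in the real word."""
    words = sentence.split()
    target = words[target_index]

    # Invisible boundary: pixel offset of the target's left edge,
    # assuming a fixed-width font of 10 px per character (a
    # sketch-only simplification).
    boundary_x = 10 * len(" ".join(words[:target_index]) + " ")

    words[target_index] = mask
    print("display:", " ".join(words))   # parafoveal mask on screen

    # Poll gaze until it crosses the boundary; a real experiment would
    # sample at the tracker's rate and complete the swap mid-saccade.
    while tracker.get_gaze_x() < boundary_x:
        pass

    words[target_index] = target         # display change: reveal target
    print("display:", " ".join(words))


run_boundary_trial(FakeTracker(), "the boy saw the clown at the fair", 4)

Experiment 2's posttarget-onset manipulation would correspond to inserting a short, tracker-timed delay between the boundary crossing and the redraw.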

Source: http://dx.doi.org/10.1037/a0030242

Publication Analysis

Top Keywords

posttarget word (12); lexical processing (8); word (8); word recognition (8); progressed word (8); e-z reader (8); masked target (8); target (6); posttarget (5); extraction linguistic (4)

Similar Publications

Previous research suggests that unexpected (deviant) sounds negatively affect reading performance by inhibiting saccadic planning, which models of reading agree takes place simultaneously with parafoveal processing. This study examined the effect of deviant sounds on foveal and parafoveal processing. Participants read single sentences in quiet, in a standard condition (repeated sounds), or in a deviant condition (a new sound embedded within a repeated sound sequence).

The eyes speak when the mouth cannot: Using eye movements to interpret omissions in primary progressive aphasia.

Neuropsychologia

June 2023

Mesulam Center for Cognitive Neurology and Alzheimer's Disease, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA; Department of Psychology, Cleveland State University, Cleveland, OH 44115, USA.

Though it may seem simple, object naming is a complex, multistage process that can be impaired by lesions at various sites of the language network. Individuals with neurodegenerative disorders of language, known as primary progressive aphasias (PPA), have difficulty naming objects and instead frequently say "I don't know" or fail to give any vocal response at all, a behavior known as an omission. Whereas other types of naming errors (paraphasias) give clues as to which aspects of the language network have been compromised, the mechanisms underlying omissions remain largely unknown.

Inefficient speech-motor control affects predictive speech comprehension: atypical electrophysiological correlates in stuttering.

Cereb Cortex

May 2023

Dipartimento di Psicologia dello Sviluppo e della Socializzazione (DPSS), University of Padova, Via Venezia 8, Padova (PD) 35131, Italy.

Listeners predict upcoming information during language comprehension, but how this ability is implemented is still largely unknown. Here, we tested the hypothesis that language production mechanisms play a role in prediction.

Can several words be read in parallel, and if so, how is information about word order encoded under such circumstances? Here we focused on the bottom-up mechanisms involved in word-order encoding under the hypothesis of parallel word processing. We recorded EEG while participants performed a visual same-different matching task with sequences of five words (a reference sequence followed by a target sequence, each presented for 400 ms). The reference sequence could be grammatically correct or an ungrammatical scrambling of the same words.

Pre- and post-target cortical processes predict speech-in-noise performance.

Neuroimage

March 2021

Department of Otolaryngology - Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA; Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA 52242, USA.

Understanding speech in noise (SiN) is a complex task that recruits multiple cortical subsystems. Individuals vary in their ability to understand SiN in ways that cannot be explained by simple hearing profiles, suggesting that central factors underlie this variance. Here, we elucidated several cortical functions engaged during a SiN task and their contributions to individual variance, using both within- and across-subject approaches.
