Two experiments examined whether word recognition progresses from one word to the next during reading, as maintained by sequential attention shift models such as the E-Z Reader model. The boundary technique was used to control the visibility of to-be-identified short target words, so that they were either previewed in the parafovea or masked. The eyes skipped a masked target on more than a quarter of the trials; if word recognition and saccade targeting progressed from one word to the next, the following fixation must have been mislocated. Readers responded to the skipping of parafoveally masked target words with relatively long viewing durations on the following posttarget word or with corrective saccades that returned the eyes from the posttarget word to the target. Experiment 2 manipulated the timing of posttarget onset after target skipping, so that the posttarget word was visible either immediately upon fixation or after a short delay. The delay influenced posttarget viewing even when, according to E-Z Reader 10 simulations, attention should have been focused at the target location. These findings favor theoretical conceptions in which lexical processing can encompass more than one word at a time.
DOI: http://dx.doi.org/10.1037/a0030242
Q J Exp Psychol (Hove)
August 2024
Department of Psychology, Bournemouth University, Poole, UK.
Previous research suggests that unexpected (deviant) sounds negatively affect reading performance by inhibiting saccadic planning, which models of reading agree takes place simultaneously with parafoveal processing. This study examined the effect of deviant sounds on foveal and parafoveal processing. Participants read single sentences in quiet, standard (repeated sounds), or deviant sound conditions (a new sound within a repeated sound sequence).
Neuropsychologia
June 2023
Mesulam Center for Cognitive Neurology and Alzheimer's Disease, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA; Department of Psychology, Cleveland State University, Cleveland, OH 44115, USA.
Though it may seem simple, object naming is a complex multistage process that can be impaired by lesions at various sites of the language network. Individuals with neurodegenerative disorders of language, known as primary progressive aphasias (PPA), have difficulty naming objects and instead frequently say "I don't know" or fail to give a vocal response at all, known as an omission. Whereas other types of naming errors (paraphasias) give clues as to which aspects of the language network have been compromised, the mechanisms underlying omissions remain largely unknown.
Cereb Cortex
May 2023
Dipartimento di Psicologia dello Sviluppo e della Socializzazione (DPSS), University of Padova, Via Venezia 8, Padova (PD) 35131, Italy.
Listeners predict upcoming information during language comprehension. However, how this ability is implemented is still largely unknown. Here, we tested the hypothesis that language production mechanisms play a role in prediction.
Neuropsychologia
March 2021
Laboratoire de Psychologie Cognitive, CNRS & Aix-Marseille University, Marseille, France.
Can several words be read in parallel, and if so, how is information about word order encoded under such circumstances? Here we focused on the bottom-up mechanisms involved in word-order encoding under the hypothesis of parallel word processing. We recorded EEG while participants performed a visual same-different matching task with sequences of five words (a reference sequence followed by a target sequence, each presented for 400 ms). The reference sequence could be grammatically correct or an ungrammatical scrambling of the same words.
Neuroimage
March 2021
Department of Otolaryngology - Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA; Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA 52242, USA.
Understanding speech in noise (SiN) is a complex task that recruits multiple cortical subsystems. There is variance in individuals' ability to understand SiN that cannot be explained by simple hearing profiles, which suggests that central factors may underlie the variance in SiN ability. Here, we elucidated several cortical functions involved during a SiN task and their contributions to individual variance using both within- and across-subject approaches.