Fluent reading is characterized by speed and accuracy in the decoding and comprehension of connected text. Although a variety of measures are available for the assessment of reading skills, most tests do not evaluate the rate of text recognition reflected in fluent reading. Here we evaluate FastaReada, a customized computer-generated task developed to address some of the limitations of currently available measures of reading skills. FastaReada provides a rapid assessment of reading fluency, quantified as words read per minute for connected, meaningful text. To test the criterion validity of FastaReada, 124 mainstream school children with typical sensory, mental, and motor development were assessed. Performance on FastaReada was correlated with the established Neale Analysis of Reading Ability (NARA) measures of text reading accuracy, rate, and comprehension, and with common single-word measures of pseudoword (non-word) reading, phonetic decoding, phonological awareness (PA), and mode of word decoding (i.e., visual or eidetic versus auditory or phonetic). The results demonstrated strong positive correlations between FastaReada performance and NARA reading rate (r = 0.75), accuracy (r = 0.83), and comprehension (r = 0.63) scores, providing evidence for criterion-related validity. Additional evidence for criterion validity came from strong positive correlations between FastaReada and both single-word eidetic (r = 0.81) and phonetic decoding skills (r = 0.68). The results also showed FastaReada to be a stronger predictor of eidetic decoding than the NARA rate measure, with FastaReada predicting 14.4% of the variance compared with 2.6% predicted by NARA rate. FastaReada was therefore deemed a valid tool for educational, clinical, and research-related assessment of reading accuracy and rate. As expected, hierarchical regression analyses also highlighted the closer relationship of fluent reading to rapid visual word recognition than to phonologically based skills. Eidetic decoding was the strongest predictor of FastaReada performance (16.8%), followed by phonetic decoding skill (1.7%). PA did not make a unique contribution after eidetic and phonetic decoding skills were accounted for.
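As an illustration only, the sketch below shows how the kind of analysis summarized above (zero-order Pearson correlations followed by a hierarchical regression reporting the change in R-squared at each step) could be reproduced in Python. The file name and column names (scores.csv, fastareada_wpm, nara_rate, eidetic_decoding, phonetic_decoding, phon_awareness) are hypothetical placeholders rather than the study's actual variables; the predictor order simply mirrors the entry order described in the abstract.

    # Illustrative sketch only: hypothetical file and column names, not the
    # study's data. Requires pandas, scipy, and statsmodels.
    import pandas as pd
    import statsmodels.api as sm
    from scipy.stats import pearsonr

    df = pd.read_csv("scores.csv")  # one row per child (hypothetical file)

    # Zero-order correlation, e.g. FastaReada words-per-minute vs. NARA rate
    r, p = pearsonr(df["fastareada_wpm"], df["nara_rate"])
    print(f"FastaReada vs NARA rate: r = {r:.2f}, p = {p:.4f}")

    def r_squared(predictors):
        """Fit an ordinary least squares model and return its R-squared."""
        X = sm.add_constant(df[predictors])
        return sm.OLS(df["fastareada_wpm"], X).fit().rsquared

    # Hierarchical entry: each step's unique contribution is the change in
    # R-squared when a predictor is added after those already in the model.
    steps = [["eidetic_decoding"],
             ["eidetic_decoding", "phonetic_decoding"],
             ["eidetic_decoding", "phonetic_decoding", "phon_awareness"]]
    previous = 0.0
    for predictors in steps:
        r2 = r_squared(predictors)
        print(f"{predictors[-1]:>18}: delta R^2 = {r2 - previous:.3f}")
        previous = r2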

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4621297
DOI: http://dx.doi.org/10.3389/fpsyg.2015.01634

Similar Publications

Acoustic Exaggeration Enhances Speech Discrimination in Young Autistic Children.

Autism Res

December 2024

Psychiatry and Addictology Department, CIUSSS-NIM Research Center, University of Montreal, Montreal, Quebec, Canada.

Child-directed speech (CDS), which amplifies acoustic and social features of speech during interactions with young children, promotes typical phonetic and language development. In autism, both behavioral and brain data indicate reduced sensitivity to human speech, which predicts absent, decreased, or atypical benefits from exaggerated speech signals such as CDS. This study investigates the impact of exaggerated fundamental frequency (F0) and voice-onset time on the neural processing of speech sounds in 22 Chinese-speaking autistic children aged 2-7 years with a history of speech delays, compared with 25 typically developing (TD) peers.

Mapping the spectrotemporal regions influencing perception of French stop consonants in noise.

Sci Rep

November 2024

Laboratoire des systèmes perceptifs, Département d'études cognitives, École normale supérieure, PSL University, CNRS, 75005, Paris, France.

Article Synopsis
  • This study investigates how listeners decode French stop consonants amid background noise, using a reverse-correlation approach for detailed analysis.
  • Thirty-two participants completed a discrimination task, allowing researchers to map the specific acoustic cues they relied on, such as formant transitions and voicing cues.
  • The findings highlight the complexity of speech perception, revealing that listeners rely on a variety of cues and differ markedly in how they process the same sounds.

Continuous and discrete decoding of overt speech with scalp electroencephalography (EEG).

J Neural Eng

October 2024

Electrical and Computer Engineering, University of Houston, N308 Engineering Building I, Houston, Texas, 77204-4005, United States.

Article Synopsis
  • This research investigates the use of non-invasive EEG to develop speech Brain-Computer Interfaces (BCIs) that decode speech features directly, aiming for a more natural communication method.
  • Deep learning models, such as CNNs and RNNs, were tested for speech decoding tasks, showing significant success in distinguishing both discrete and continuous speech elements, while also indicating the importance of specific EEG frequency bands for performance.

A model synthesizing average frequency components from selected sentences in an electromagnetic articulography database was developed. It revealed the dual roles of the tongue: the dorsum acts like a carrier wave, while the tip acts as a modulation signal within the articulatory domain. The model sheds light on the subtleties of anticipatory coarticulation during speech planning.

Objective: Speech brain-computer interfaces (speech BCIs), which convert brain signals into spoken words or sentences, have demonstrated great potential for high-performance BCI communication. Phonemes are the basic units of pronunciation. For monosyllabic languages such as Mandarin Chinese, where a word usually contains fewer than three phonemes, accurate phoneme decoding plays a vital role.
