Fluent reading is characterized by speed and accuracy in the decoding and comprehension of connected text. Although a variety of measures are available for the assessment of reading skills, most tests do not evaluate rate of text recognition as reflected in fluent reading. Here we evaluate FastaReada, a customized computer-generated task that was developed to address some of the limitations of currently available measures of reading skills. FastaReada provides a rapid assessment of reading fluency, quantified as words read per minute for connected, meaningful text. To test the criterion validity of FastaReada, 124 mainstream school children with typical sensory, mental, and motor development were assessed. Performance on FastaReada was correlated with the established Neale Analysis of Reading Ability (NARA) measures of text reading accuracy, rate, and comprehension, and with common single-word measures of pseudoword (non-word) reading, phonetic decoding, phonological awareness (PA), and mode of word decoding (i.e., visual or eidetic versus auditory or phonetic). The results demonstrated strong positive correlations between FastaReada performance and NARA reading rate (r = 0.75), accuracy (r = 0.83), and comprehension (r = 0.63) scores, providing evidence for criterion-related validity. Additional evidence for criterion validity was demonstrated through strong positive correlations between FastaReada and both single-word eidetic (r = 0.81) and phonetic decoding skills (r = 0.68). The results also demonstrated FastaReada to be a stronger predictor of eidetic decoding than the NARA rate measure, with FastaReada predicting 14.4% of the variance compared to 2.6% predicted by NARA rate. FastaReada was therefore deemed a valid tool for educators, clinicians, and researchers assessing reading accuracy and rate.
As expected, analysis with hierarchical regressions also highlighted the closer relationship of fluent reading to rapid visual word recognition than to phonology-based skills. Eidetic decoding was the strongest predictor of FastaReada performance (16.8% of unique variance), followed by phonetic decoding skill (1.7%). PA did not make a unique contribution after eidetic and phonetic decoding skills were accounted for.
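The "unique variance" figures above come from hierarchical regression, where predictors are entered in steps and each step's increase in R² (ΔR²) is that predictor's unique contribution over the earlier steps. A minimal sketch of that logic, using synthetic data and illustrative variable names (not the study's actual measures or code):

```python
# Hedged illustration of hierarchical regression Delta R^2.
# All data below are synthetic; variable names (eidetic, phonetic, pa,
# fluency) are stand-ins for the study's measures, not its dataset.
import numpy as np

rng = np.random.default_rng(0)
n = 124  # sample size matching the study's cohort

# Synthetic, correlated predictors and outcome.
eidetic = rng.normal(size=n)                      # eidetic decoding score
phonetic = 0.5 * eidetic + rng.normal(size=n)     # phonetic decoding score
pa = 0.4 * phonetic + rng.normal(size=n)          # phonological awareness
fluency = 0.8 * eidetic + 0.2 * phonetic + rng.normal(size=n)  # words/min

def r2(X, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Enter predictors stepwise; each step's Delta R^2 is that predictor's
# unique contribution after the variables already in the model.
X1 = eidetic.reshape(-1, 1)
X2 = np.column_stack([eidetic, phonetic])
X3 = np.column_stack([eidetic, phonetic, pa])

r2_1, r2_2, r2_3 = r2(X1, fluency), r2(X2, fluency), r2(X3, fluency)
print(f"step 1 (eidetic):          R^2 = {r2_1:.3f}")
print(f"step 2 (+ phonetic): Delta R^2 = {r2_2 - r2_1:.3f}")
print(f"step 3 (+ PA):       Delta R^2 = {r2_3 - r2_2:.3f}")
```

Because the models are nested, each step's R² can only stay flat or rise; a near-zero ΔR² at the final step is what "no unique contribution of PA" means in the abstract.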
Full text (PMC): http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4621297
DOI: http://dx.doi.org/10.3389/fpsyg.2015.01634
Autism Res
December 2024
Psychiatry and Addictology Department, CIUSSS-NIM Research Center, University of Montreal, Montreal, Quebec, Canada.
Child-directed speech (CDS), which amplifies acoustic and social features of speech during interactions with young children, promotes typical phonetic and language development. In autism, both behavioral and brain data indicate reduced sensitivity to human speech, which predicts absent, decreased, or atypical benefits of exaggerated speech signals such as CDS. This study investigates the impact of exaggerated fundamental frequency (F0) and voice-onset time on the neural processing of speech sounds in 22 Chinese-speaking autistic children aged 2-7 years with a history of speech delays, compared with 25 typically developing (TD) peers.
Sci Rep
November 2024
Laboratoire des systèmes perceptifs, Département d'études cognitives, École normale supérieure, PSL University, CNRS, 75005, Paris, France.
J Neural Eng
October 2024
Electrical and Computer Engineering, University of Houston, N308 Engineering Building I, Houston, Texas, 77204-4005, United States.
J Acoust Soc Am
October 2024
Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518000, China.
A model was developed that synthesizes average frequency components from selected sentences in an electromagnetic articulography database. It revealed the dual roles of the tongue: the dorsum acts like a carrier wave, while the tip acts as a modulating signal within the articulatory domain. The model also illuminates the subtleties of anticipatory coarticulation during speech planning.
IEEE Trans Neural Syst Rehabil Eng
September 2024
Objective: Speech brain-computer interfaces (speech BCIs), which convert brain signals into spoken words or sentences, have demonstrated great potential for high-performance BCI communication. Phonemes are the basic pronunciation units. For monosyllabic languages such as Mandarin Chinese, where a word usually contains fewer than three phonemes, accurate decoding of phonemes plays a vital role.