Purpose: Successful sentence production requires encoding lexical items and ordering them into a correct syntactic structure. It remains unclear how the different processes involved in sentence production are affected by healthy aging. We investigated (a) if and how aging affects lexical encoding and syntactic formulation during sentence production, using auditory lexical priming and eye-tracking-while-speaking paradigms, and (b) if and how verbal working memory contributes to age-related changes in sentence production.
Methods: Twenty older and 20 younger adults described transitive and dative action pictures following auditory lexical primes, by which the relative ease of encoding the agent or theme nouns (for transitive pictures) and the theme and goal nouns (for dative pictures) was manipulated. The effects of lexical priming on off-line syntactic production and real-time eye fixations to the primed character were measured.
Results: In offline production, older adults showed priming effects comparable to younger adults', using the syntactic structure that allowed earlier mention of the primed lexical item in both transitive and dative sentences. However, older adults showed longer-lasting lexical priming effects on eye fixations to the primed character during the early stages of sentence planning. Preliminary analyses indicated that reduced verbal working memory may partly account for prolonged lexical encoding, particularly in older adults.
Conclusion: These findings indicate that syntactic flexibility for formulating different grammatical structures remains largely robust with aging. However, lexical encoding processes are more susceptible to age-related changes, possibly due to changes in verbal working memory.
Download full-text PDF:
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11381281
DOI: http://dx.doi.org/10.3389/fpsyg.2024.1304517
J Neural Eng
December 2024
Trinity College Dublin, College Green, Dublin 2, Dublin, D02 PN40, Ireland.
Speech comprehension involves detecting words and interpreting their meaning according to the preceding semantic context. This process is thought to be underpinned by a predictive neural system that uses that context to anticipate upcoming words. Recent work demonstrated that such a predictive process can be probed from neural signals recorded during ecologically valid speech listening tasks by using linear lagged models, such as the temporal response function.
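The linear lagged modeling mentioned above can be illustrated with a minimal sketch: ridge regression of time-lagged copies of a stimulus feature (e.g., the speech envelope) onto a neural signal, whose fitted weights form the temporal response function. This is a simplified illustration under stated assumptions, not the article's actual pipeline; all function names and parameters are hypothetical.

```python
import numpy as np

def lagged_design(stim, lags):
    """Build a design matrix of time-lagged copies of a 1-D stimulus feature."""
    n = len(stim)
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stim[:n - lag]   # stimulus precedes response
        else:
            X[:n + lag, j] = stim[-lag:]  # anti-causal lags, if requested
    return X

def fit_trf(stim, eeg, lags, alpha=1.0):
    """Estimate TRF weights by ridge regression: w = (X'X + aI)^-1 X'y."""
    X = lagged_design(stim, lags)
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ eeg)
```

The ridge penalty `alpha` controls smoothing of the estimated response; in practice it would be chosen by cross-validation on held-out listening data.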
Top Cogn Sci
December 2024
Department of Linguistics, University of Massachusetts Amherst.
As they process complex linguistic input, language comprehenders must maintain a mapping between lexical items (e.g., morphemes) and their syntactic position in the sentence.
Encoding and establishing a new second-language (L2) phonological category is notoriously difficult. This is particularly true for phonological contrasts that do not exist in the learners' native language (L1). Phonological categories that also exist in the L1 do not seem to pose any problems.
PeerJ Comput Sci
October 2024
Department of Basic, Xi'an Research Institute of High-Tech, Xi'an, Shaanxi, China.
Lexicon Enhanced Bidirectional Encoder Representations from Transformers (LEBERT) has achieved great success in Chinese Named Entity Recognition (NER). LEBERT performs lexical enhancement with a Lexicon Adapter layer, which facilitates deep lexicon knowledge fusion at the lower layers of BERT. However, this method is likely to introduce noise words and does not consider the possible conflicts between words when fusing lexicon information.
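As a rough sketch of the lexicon-fusion idea (not LEBERT's actual Lexicon Adapter implementation), fusion at a lower layer can be pictured as attention over the lexicon words matched at a character position, with the attention-weighted word vector projected back and added to the character's hidden state; all names and dimensions below are hypothetical.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def lexicon_adapter(char_h, word_embs, W):
    """Fuse matched lexicon-word embeddings into one character's hidden state.
    char_h:    (d,)     character representation from a lower encoder layer
    word_embs: (k, d_w) embeddings of the k lexicon words matching this position
    W:         (d, d_w) learned projection between character and word spaces"""
    scores = word_embs @ (W.T @ char_h)   # relevance of each candidate word
    attn = softmax(scores)                # attention can down-weight noise words
    return char_h + W @ (attn @ word_embs)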
PeerJ Comput Sci
November 2024
Department of Informatics, Constantine the Philosopher University in Nitra, Nitra, Slovak Republic.
This study introduces a new approach to text tokenization, SlovaK Morphological Tokenizer (SKMT), which integrates the morphology of the Slovak language into the training process using the Byte-Pair Encoding (BPE) algorithm. Unlike conventional tokenizers, SKMT focuses on preserving the integrity of word roots in individual tokens, crucial for maintaining lexical meaning. The methodology involves segmenting and extracting word roots from morphological dictionaries and databases, followed by preprocessing and training SKMT alongside a traditional BPE tokenizer.
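A toy illustration of the root-preserving idea (a simplified sketch, not SKMT's actual implementation): if extracted roots are treated as atomic symbols before merge learning, no learned BPE merge can ever split a root across tokens. English stand-in words are used for readability; all names below are illustrative.

```python
from collections import Counter

def learn_bpe(words, roots, num_merges):
    """Toy BPE trainer in which known word roots start as single atomic
    symbols, so merges can extend but never split a root."""
    def segment(w):
        # Naive greedy match: take the first listed root found in the word.
        for r in roots:
            if r in w:
                i = w.index(r)
                return list(w[:i]) + [r] + list(w[i + len(r):])
        return list(w)

    corpus = Counter(tuple(segment(w)) for w in words)
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for seq, freq in corpus.items():
            for a, b in zip(seq, seq[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # most frequent adjacent pair
        merges.append(best)
        new_corpus = Counter()
        for seq, freq in corpus.items():
            out, i = [], 0
            while i < len(seq):
                if i + 1 < len(seq) and (seq[i], seq[i + 1]) == best:
                    out.append(seq[i] + seq[i + 1])
                    i += 2
                else:
                    out.append(seq[i])
                    i += 1
            new_corpus[tuple(out)] += freq
        corpus = new_corpus
    return merges, corpus
```

Because the root enters the corpus as one symbol, every token that contains any part of it contains all of it, which is the integrity property the tokenizer aims to preserve.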