Language production involves a complex set of computations, from conceptualization to articulation, which are thought to engage cascading neural events in the language network. However, recent neuromagnetic evidence suggests simultaneous meaning-to-speech mapping in picture naming tasks, as indexed by early parallel activation of frontotemporal regions to lexical-semantic, phonological, and articulatory information. Here we investigate the time course of word production, asking to what extent such "earliness" is a distinctive property of the associated spatiotemporal dynamics. Using MEG, we recorded the neural signals of 34 human subjects (26 males) overtly naming 134 images from four semantic object categories (animals, foods, tools, clothes). Within each category, we covaried word length, quantified as the number of syllables in a word, and phonological neighborhood density to target lexical and post-lexical phonological/phonetic processes. Searchlight multivariate pattern analyses in sensor space distinguished the stimulus-locked spatiotemporal responses to object categories early on, from 150 to 250 ms after picture onset, whereas word length was decoded in left frontotemporal sensors at 250-350 ms, followed by phonological neighborhood density at 350-450 ms. Our results suggest a progression of neural activity from posterior to anterior language regions for the semantic and phonological/phonetic computations preparing overt speech, thus supporting serial cascading models of word production.

Current psycholinguistic models make divergent predictions on how a preverbal message is mapped onto articulatory output during language planning. Serial models predict a cascading sequence of hierarchically organized neural computations from conceptualization to articulation. In contrast, parallel models posit early simultaneous activation of conceptual, phonological, and articulatory information in the language system. Here we asked whether such earliness is a distinctive property of the neural dynamics of word production. Combining the millisecond precision of MEG with multivariate pattern analyses revealed successive onset times for the neural events supporting semantic and phonological/phonetic operations, progressing from posterior occipitotemporal to frontal sensor areas. These findings offer new insights for refining current theories of language production.
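To make the decoding approach concrete, the following is a minimal sketch of time-resolved multivariate decoding of semantic object category from stimulus-locked MEG sensor data, using MNE-Python and scikit-learn. The file name, event coding, and classifier settings are illustrative assumptions, not the authors' pipeline; the study itself used searchlight analyses across sensor space in successive time windows rather than this exact procedure.

```python
# Minimal sketch: time-resolved decoding of object category from MEG sensors.
# All file names, event codes, and parameters are hypothetical placeholders.
import numpy as np
import mne
from mne.decoding import SlidingEstimator, cross_val_multiscore
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Stimulus-locked epochs around picture onset (hypothetical file)
epochs = mne.read_epochs("sub-01_picture-naming-epo.fif")
X = epochs.get_data(picks="meg")   # (n_trials, n_sensors, n_times)
y = epochs.events[:, 2]            # category labels, e.g., 1-4 for the four categories

# One linear classifier fit independently at every time sample
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
time_decoder = SlidingEstimator(clf, scoring="accuracy", n_jobs=-1)

# Five-fold cross-validated decoding accuracy as a function of time
scores = cross_val_multiscore(time_decoder, X, y, cv=5, n_jobs=-1).mean(axis=0)

# The paper reports category information peaking around 150-250 ms post onset
peak_time = epochs.times[np.argmax(scores)]
print(f"Peak decoding accuracy {scores.max():.2f} at {peak_time * 1000:.0f} ms")
```

The same scheme can be repeated with word length or phonological neighborhood density as the target variable to compare the onset latencies of the corresponding decodable information, which is the logic behind the latency comparisons reported above.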

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9302460
DOI: http://dx.doi.org/10.1523/JNEUROSCI.1923-21.2022
