Speech comprehension involves processing at different levels of analysis, such as acoustic, phonetic, and lexical. We investigated neural responses to manipulating the difficulty of processing at two of these levels. Twelve subjects underwent positron emission tomographic scanning while making decisions based upon the semantic relatedness between heard nouns. We manipulated perceptual difficulty by presenting either clear or acoustically degraded speech, and semantic difficulty by varying the degree of semantic relatedness between words. Increasing perceptual difficulty was associated with greater activation of the left superior temporal gyrus, an auditory-perceptual region involved in speech processing. Increasing semantic difficulty was associated with reduced activity in both superior temporal gyri and increased activity within the left angular gyrus, a heteromodal region involved in accessing word meaning. Comparing across all the conditions, we also observed increased activation within the left inferior prefrontal cortex as the complexity of language processing increased. These results demonstrate a flexible system for language processing, where activity within distinct parts of the network is modulated as processing demands change.

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6870623
DOI: http://dx.doi.org/10.1002/hbm.20871

Publication Analysis

Top Keywords

language processing (12)
complexity language (8)
processing levels (8)
semantic relatedness (8)
perceptual difficulty (8)
semantic difficulty (8)
difficulty associated (8)
activation left (8)
superior temporal (8)
region involved (8)

Similar Publications

Learning the language of antibody hypervariability.

Proc Natl Acad Sci U S A

January 2025

Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139.

Protein language models (PLMs) have demonstrated impressive success in modeling proteins. However, general-purpose "foundational" PLMs have limited performance in modeling antibodies due to the latter's hypervariable regions, which do not conform to the evolutionary conservation principles that such models rely on. In this study, we propose a transfer learning framework called Antibody Mutagenesis-Augmented Processing (AbMAP), which fine-tunes foundational models for antibody-sequence inputs by supervising on antibody structure and binding specificity examples.
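The abstract describes a general transfer-learning pattern: keep a foundational PLM frozen and train a small supervised module on its embeddings. The sketch below illustrates that pattern only; the module name, dimensions, and mean pooling are illustrative assumptions, not the published AbMAP architecture.

```python
import torch
import torch.nn as nn

class SupervisedHead(nn.Module):
    """Hypothetical head trained on frozen foundational-PLM embeddings.
    All sizes here are invented for illustration."""
    def __init__(self, plm_dim=1280, proj_dim=256):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(plm_dim, 512),
            nn.ReLU(),
            nn.Linear(512, proj_dim),
        )

    def forward(self, residue_embeddings):
        # residue_embeddings: (batch, seq_len, plm_dim) from a frozen PLM
        pooled = residue_embeddings.mean(dim=1)  # simple mean pooling
        return self.proj(pooled)                 # (batch, proj_dim)

# Toy usage with random tensors standing in for real PLM output
head = SupervisedHead()
emb = torch.randn(2, 120, 1280)
print(head(emb).shape)  # torch.Size([2, 256])
```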

The role of chromatin state in intron retention: A case study in leveraging large scale deep learning models.

PLoS Comput Biol

January 2025

Department of Computer Science, Colorado State University, Fort Collins, Colorado, United States of America.

Complex deep learning models trained on very large datasets have become key enabling tools for current research in natural language processing and computer vision. By providing pre-trained models that can be fine-tuned for specific applications, they enable researchers to create accurate models with minimal effort and computational resources. Large-scale genomics deep learning models come in two flavors: the first consists of large language models of DNA sequences trained in a self-supervised fashion, similar to the corresponding natural language models; the second consists of supervised learning models that leverage large-scale genomics datasets from ENCODE and other sources.

Semantic text understanding holds significant importance in natural language processing (NLP). Numerous datasets, such as Quora Question Pairs (QQP), have been devised for this purpose. In our previous study, we developed a Siamese Convolutional Neural Network (S-CNN) that achieved an F1 score of 82.
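For readers unfamiliar with this architecture class, the minimal sketch below shows the core Siamese idea: both inputs pass through one shared CNN encoder and are compared by cosine similarity. The vocabulary size, embedding width, and filter settings are illustrative assumptions, not the parameters of the authors' S-CNN.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseCNN(nn.Module):
    """Minimal Siamese CNN sketch for sentence-pair similarity;
    all hyperparameters are invented for illustration."""
    def __init__(self, vocab_size=20000, emb_dim=100, n_filters=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)

    def encode(self, token_ids):
        x = self.embed(token_ids).transpose(1, 2)  # (batch, emb_dim, seq)
        x = F.relu(self.conv(x))
        return x.max(dim=2).values                 # global max pooling

    def forward(self, q1, q2):
        # Cosine similarity between the two shared-weight encodings
        return F.cosine_similarity(self.encode(q1), self.encode(q2))

model = SiameseCNN()
q1 = torch.randint(1, 20000, (4, 30))  # a batch of tokenized question pairs
q2 = torch.randint(1, 20000, (4, 30))
print(model(q1, q2))  # 4 similarity scores in [-1, 1]
```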

The advantages of lexicon-based sentiment analysis in an age of machine learning.

PLoS One

January 2025

Department of Political Science, Middlebury College, Middlebury, Vermont, United States of America.

Assessing whether texts are positive or negative, known as sentiment analysis, has wide-ranging applications across many disciplines. Automated approaches make it possible to code near-unlimited quantities of text rapidly, replicably, and with high accuracy. Compared to machine learning and large language model (LLM) approaches, lexicon-based methods may sacrifice some performance, but in exchange they provide generalizability and domain independence, while crucially offering the ability to identify gradations in sentiment.
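To make the approach concrete, here is a toy lexicon-based scorer; the word weights and single-token negation rule are invented for illustration and are far simpler than real lexicons such as VADER or LIWC. Note how the summed weights yield a graded score rather than a binary label.

```python
# Toy sentiment lexicon: invented weights, not from any published resource.
LEXICON = {
    "good": 1.0, "great": 2.0, "excellent": 3.0,
    "bad": -1.0, "awful": -2.0, "terrible": -3.0,
}

def sentiment_score(text: str) -> float:
    """Sum word weights, flipping the sign of a word that follows
    a simple negator; the graded output preserves intensity."""
    score, negate = 0.0, False
    for tok in text.lower().split():
        if tok in ("not", "never", "no"):
            negate = True
            continue
        weight = LEXICON.get(tok.strip(".,!?"), 0.0)
        score += -weight if negate else weight
        negate = False
    return score

print(sentiment_score("The plot was great but the acting was awful"))  # 0.0
print(sentiment_score("not bad, actually excellent"))                  # 4.0
```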

Data-driven models of neurons and circuits are important for understanding how the properties of membrane conductances, synapses, dendrites, and the anatomical connectivity between neurons generate the complex dynamical behaviors of brain circuits in health and disease. However, the inherent complexity of these biological processes makes the construction and reuse of biologically detailed models challenging. A wide range of tools have been developed to aid their construction and simulation, but differences in design and internal representation act as technical barriers to those who wish to use data-driven models in their research workflows.
