Compositionality in a Parallel Architecture for Language Processing.

Cogn Sci

Language Acquisition and Language Processing Lab, Department of Language and Literature, Faculty of Humanities, Norwegian University of Science and Technology.

Published: May 2021

Compositionality has been a central concept in linguistics and philosophy for decades, and it is increasingly prominent in many other areas of cognitive science. Its status, however, remains contentious. Here, I reassess the nature and scope of the principle of compositionality (Partee, 1995) from the perspective of psycholinguistics and cognitive neuroscience. First, I review classic arguments for compositionality and conclude that they fail to establish compositionality as a property of human language. Next, I state a new competence argument, acknowledging the fact that any competent user of a language L can assign to most expressions in L at least one meaning which is a function only of the meanings of the expression's parts and of its syntactic structure. I then discuss selected results from cognitive neuroscience, indicating that the human brain possesses the processing capacities presupposed by the competence argument. Finally, I outline a language processing architecture consistent with the neuroscience results, where semantic representations may be generated by a syntax-driven stream and by an "asyntactic" processing stream, jointly or independently. Compositionality is viewed as a constraint on computation in the former stream only.
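
To make the principle concrete, the sketch below (illustrative only, not taken from the paper) shows syntax-driven composition in miniature: each node's meaning is computed solely from the meanings of its daughters and the rule that combines them, which is the constraint placed on the syntax-driven stream. The lexicon, the tree encoding, and the single function-application rule are all simplifying assumptions.

```python
# Minimal illustration of the principle of compositionality:
# the meaning of a complex expression is a function only of the
# meanings of its parts and of its syntactic structure.
# Illustrative sketch; not the model proposed in the paper.

from dataclasses import dataclass
from typing import Optional

# Lexicon: proper names denote individuals (strings, for simplicity),
# intransitive verbs denote one-place predicates.
LEXICON = {
    "Anna": "anna",
    "sleeps": lambda x: f"sleep({x})",
}

@dataclass
class Node:
    label: str                     # syntactic category, e.g. "S", "NP", "VP"
    children: tuple = ()           # subtrees; empty for lexical leaves
    word: Optional[str] = None     # filled only for leaves

def interpret(node: Node):
    """Compute meaning bottom-up: leaves consult the lexicon; binary
    branching nodes combine daughter meanings by function application."""
    if node.word is not None:
        return LEXICON[node.word]
    meanings = [interpret(child) for child in node.children]
    # Whichever daughter denotes a function applies to the other:
    # this is the (assumed) syntax-driven composition rule.
    f, a = meanings if callable(meanings[0]) else reversed(meanings)
    return f(a)

# "Anna sleeps": S -> NP VP
tree = Node("S", (Node("NP", word="Anna"), Node("VP", word="sleeps")))
print(interpret(tree))  # -> sleep(anna)
```

Because interpret consults only daughter meanings and the combination rule, any difference in meaning is traceable to the parts or the structure; an "asyntactic" stream, by contrast, would be free to generate semantic representations not derived this way.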

Source: http://dx.doi.org/10.1111/cogs.12949

Publication Analysis

Top Keywords

language processing (8)
cognitive neuroscience (8)
competence argument (8)
compositionality (6)
compositionality parallel (4)
parallel architecture (4)
language (4)
architecture language (4)
processing (4)
processing compositionality (4)

Similar Publications

With breakthroughs in Natural Language Processing and Artificial Intelligence (AI), the use of Large Language Models (LLMs) in academic research has increased tremendously. Models such as Generative Pre-trained Transformer (GPT) are used by researchers for literature review, abstract screening, and manuscript drafting. However, these models also pose the attendant risk of producing ethically questionable scientific information.

Deep neural networks drive the success of natural language processing. A fundamental property of language is its compositional structure, allowing humans to systematically produce forms for new meanings. For humans, languages with more compositional and transparent structures are typically easier to learn than those with opaque and irregular structures.

Malate initiates a proton-sensing pathway essential for pH regulation of inflammation.

Signal Transduct Target Ther

December 2024

Department of Orthopedic Surgery/Sports Medicine Center, Southwest Hospital, Army Medical University, Chongqing, 400038, China.

Metabolites can double as a signaling modality that initiates physiological adaptations. Metabolism, a chemical language encoding biological information, has been recognized as a powerful principle directing inflammatory responses. Cytosolic pH is a regulator of the inflammatory response in macrophages.

Modelling the dynamics of biological processes is ubiquitous across the ecological and evolutionary disciplines. However, the increasing complexity of these models poses a challenge to the dissemination of model-derived results. Often, only a small subset of model results is made available to the scientific community, with further exploration of the parameter space relying on local deployment of code supplied by the authors.

Objective: To characterize the public conversations around long COVID, as expressed through X (formerly Twitter) posts from May 2020 to April 2023.

Methods: Using X as the data source, we extracted tweets containing #long-covid, #long_covid, or "long covid," posted from May 2020 to April 2023. We then conducted an unsupervised deep learning analysis using Bidirectional Encoder Representations from Transformers (BERT).
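
As a rough illustration of this kind of pipeline (not the authors' actual code), the sketch below filters posts by the study's search terms, encodes them with a BERT-family sentence encoder, and clusters the embeddings into topics. The model name, the clustering algorithm, and the number of clusters are assumptions made for the sake of the example.

```python
# Hypothetical sketch of the pipeline described above: keyword filtering,
# BERT-family sentence embeddings, and unsupervised topic grouping.
# Model choice and clustering setup are assumptions, not the study's method.

from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

posts = [
    "Six months in and still exhausted #long_covid",
    "Brain fog is the worst part of long covid for me",
    "New study on long covid and smell loss",
]

# Keep only posts matching the search terms used in the study.
terms = ("#long-covid", "#long_covid", "long covid")
matching = [p for p in posts if any(t in p.lower() for t in terms)]

# Encode posts as dense vectors with a BERT-family model (assumed choice).
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(matching)

# Group posts into topics without supervision; k = 2 is an assumption.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(embeddings)

for post, label in zip(matching, labels):
    print(label, post)
```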
