We introduce a novel computer implementation of the Unification-Space parser (Vosse and Kempen in Cognition 75:105-143, 2000) in the form of a localist neural network whose dynamics are based on interactive activation and inhibition. The wiring of the network is determined by Performance Grammar (Kempen and Harbusch in Verb constructions in German and Dutch. Benjamins, Amsterdam, 2003), a lexicalist formalism with feature unification as its binding operation. While the network processes input word strings incrementally, the evolving shape of parse trees is represented as changing patterns of activation in nodes that code for syntactic properties of words and phrases, and for the grammatical functions they fulfill. The system is capable, at least qualitatively and in rudimentary form, of simulating several important dynamic aspects of human syntactic parsing, including garden-path phenomena and reanalysis, effects of complexity (various types of clause embedding), fault tolerance in the face of unification failures and unknown words, and predictive parsing (expectation-based analysis, surprisal effects). English is the target language of the parser described.
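The competition dynamics described in the abstract can be illustrated with a generic interactive activation and inhibition (IAC) update rule of the McClelland-and-Rumelhart type. The sketch below is not the authors' Unification-Space implementation; the node labels, weights, and parameter values are illustrative assumptions, chosen only to show how mutual inhibition between localist nodes lets the better-supported attachment hypothesis win the competition.

```python
import numpy as np

# Minimal sketch of interactive activation and inhibition (IAC) dynamics in a
# localist network. Illustrative only: node labels, weights, and parameters are
# assumptions, not taken from the Unification-Space parser itself.

# Localist nodes: an input word and two competing attachment (grammatical-function) hypotheses.
nodes = ["word:that", "attach:complementizer", "attach:relative-pronoun"]
n = len(nodes)

# Weight matrix: positive = excitation, negative = inhibition.
W = np.zeros((n, n))
W[0, 1] = W[1, 0] = 0.4   # the word supports both readings ...
W[0, 2] = W[2, 0] = 0.3   # ... one slightly more strongly than the other
W[1, 2] = W[2, 1] = -0.6  # rival attachments inhibit each other

# Standard IAC parameters (assumed values).
A_MAX, A_MIN, A_REST, DECAY, STEP = 1.0, -0.2, -0.1, 0.1, 0.1

a = np.full(n, A_REST)

def update(a, ext):
    """One IAC step: net input from positively active neighbours plus external (bottom-up) input."""
    net = W @ np.clip(a, 0.0, None) + ext
    delta = np.where(net > 0,
                     net * (A_MAX - a),
                     net * (a - A_MIN)) - DECAY * (a - A_REST)
    return np.clip(a + STEP * delta, A_MIN, A_MAX)

ext = np.array([0.5, 0.0, 0.0])  # bottom-up support clamped on the word node
for t in range(200):
    a = update(a, ext)

for label, act in zip(nodes, a):
    print(f"{label:28s} {act:+.3f}")  # the better-supported attachment ends up active, its rival suppressed
```

Run to convergence, the more strongly supported attachment node settles at a high activation while its competitor is pushed below resting level, a toy analogue of the winner-take-all settling that the parser uses to select among candidate parse-tree fragments.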
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2777195 | PMC
http://dx.doi.org/10.1007/s11571-009-9094-0 | DOI Listing
Trends Cogn Sci
October 2024
Institute of Philosophy, School of Advanced Study, University of London, London, UK; Faculty of Philosophy, University of Oxford, Oxford, UK.
The quality space hypothesis about conscious experience proposes that conscious sensory states are experienced in relation to other possible sensory states. For instance, the colour red is experienced as being more like orange, and less like green or blue. Recent empirical findings suggest that subjective similarity space can be explained in terms of similarities in neural activation patterns.
Cogn Neurodyn
April 2024
Institute for Neural Computation, Ruhr-University Bochum, Bochum, Germany.
Because cognitive competences emerge in evolution and development from the sensory-motor domain, we seek a neural process account for higher cognition in which all representations are necessarily grounded in perception and action. The challenge is to understand how the hallmarks of higher cognition (productivity, systematicity, and compositionality) may emerge from such a bottom-up approach. To address this challenge, we present key ideas from Dynamic Field Theory, which postulates that neural populations are organized by recurrent connectivity to create stable localist representations.
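As a rough illustration of the kind of recurrent dynamics Dynamic Field Theory appeals to, the sketch below simulates a one-dimensional neural field in which local excitation and surround inhibition sustain a localized activation peak even after the driving input is removed. The kernel shape and all parameter values are assumptions made for illustration, not taken from the cited article.

```python
import numpy as np

# Minimal sketch of a one-dimensional dynamic neural field (Amari-style).
# Illustrative assumptions only: field size, kernel, and parameters are not from the cited work.

N = 101                           # field sites along some feature dimension
x = np.arange(N)
tau, h, dt = 10.0, -1.0, 1.0      # time constant, resting level, Euler step

def kernel(d, a_exc=1.4, s_exc=3.0, a_inh=0.6, s_inh=8.0):
    """Local excitation with broader surround inhibition ('Mexican hat')."""
    return a_exc * np.exp(-d**2 / (2 * s_exc**2)) - a_inh * np.exp(-d**2 / (2 * s_inh**2))

W = kernel(x[:, None] - x[None, :])           # recurrent interaction matrix
f = lambda u: 1.0 / (1.0 + np.exp(-4.0 * u))  # sigmoidal output nonlinearity

u = np.full(N, h)                                  # field starts at resting level
stim = 3.0 * np.exp(-(x - 50)**2 / (2 * 4.0**2))   # transient localized input at site 50

for t in range(400):
    s = stim if t < 150 else 0.0               # input is switched off halfway through
    du = (-u + h + s + W @ f(u)) / tau
    u = u + dt * du

# After the input is gone, recurrent interaction keeps a self-stabilized peak alive.
print(f"peak location: {u.argmax()}, peak activation: {u.max():.2f}")
```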
Sci Rep
January 2023
Department of Psychological and Brain Sciences, Indiana University, Bloomington, USA.
Despite great strides in both machine learning and neuroscience, we do not know how the human brain solves problems in the general sense. We approach this question by drawing on the framework of engineering control theory. We demonstrate a computational neural model with only localist learning laws that is able to find solutions to arbitrary problems.
Netw Neurosci
October 2022
Institute for Neuromodulation and Neurotechnology, University Hospital and University of Tübingen, Tübingen, Germany.
Recently, neuroscience has seen a shift from localist approaches to network-wide investigations of brain function. Neurophysiological signals across different spatial and temporal scales provide insight into neural communication. However, additional methodological considerations arise when investigating network-wide brain dynamics rather than local effects.
Nat Comput Sci
September 2021
Green Center for Systems Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA.
The success of deep neural networks suggests that cognition may emerge from indecipherable patterns of distributed neural activity. Yet these networks are pattern-matching black boxes that cannot simulate higher cognitive functions and lack numerous neurobiological features. Accordingly, they are currently insufficient computational models for understanding neural information processing.