Favor referential representations.

Brain Lang

Department of Linguistics, University of Massachusetts, USA.

Published: June 1995

Avrutin and Hickok (1993) argue that agrammatic patients have the ability to represent nonreferential or "government" chains ("who ... e") but not referential or "binding" chains ("which girl ... e"). By contrast, we propose the "referential representation hypothesis," which suggests that agrammatics attempt to cope with their well-known capacity limitations by favoring referential or content-based representations. This predicts that agrammatic patients' performance should degrade noticeably as task demands increase, and that referential demands should take priority over computational ones. In a semantic task, referential phrases should lead to better or more accurate performance. In syntactic tasks, the availability of a referential or content-based representation will interfere with the development of a syntactic representation, resulting in worse syntactic performance on referential phrases than on nonreferential ones. This predicts that agrammatic patients should incorrectly accept (resumptive) pronoun sentences with a referential wh-phrase, because the pronoun will find the semantic or discourse referent of the referential wh-phrase and take it as an antecedent. However, they should reject a (resumptive) pronoun in a sentence with the nonreferential question constituent "who" or "what." "Who" and "what" will remain in syntactic form, since they have only grammatical content and therefore will have only a "nonreferential" syntactic representation. Consequently, they cannot serve as the antecedent of the pronoun. These predictions were largely confirmed by the results of a grammaticality judgment study. Agrammatics performed well on questions with pragmatic biases but failed to distinguish reliably between grammatical and ungrammatical questions where pragmatic biases were neutralized. As predicted, they assigned especially low ratings to object gap sentences with referential wh-constituents. They assigned relatively high ratings to ungrammatical subject pronoun sentences with either type of wh-constituent. The agrammatics also accepted ungrammatical reflexive sentences even though syntactic number and gender features alone could have been used to judge the sentences correctly. We attribute this, too, to the unavailability of a reliable syntactic representation of phrases with referential or extragrammatical semantic content.


DOI: 10.1006/brln.1995.1031


