Word learning tasks as a window into the triggering problem for presuppositions.

Nat Lang Semant

Laboratoire de Sciences Cognitives et Psycholinguistique (EHESS, CNRS), Département d'Etudes Cognitives, Ecole Normale Supérieure, PSL University, 29 Rue d'Ulm, Paris, 75005 France.

Published: October 2024

In this paper, we show that native speakers spontaneously divide the complex meaning of a new word into a presuppositional component and an assertive component. These results argue for the existence of a productive triggering algorithm for presuppositions, one that is based neither on alternative lexical items nor on contextual salience. On a methodological level, the proposed learning paradigm can be used to test further theories concerned with the interaction of lexical properties and conceptual biases.

Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11538235 (PMC)
http://dx.doi.org/10.1007/s11050-024-09224-5 (DOI)

Similar Publications

A corpus of Chinese word segmentation agreement.

Behav Res Methods

December 2024

Department of Education Studies, Hong Kong Baptist University, Kowloon Tong, Kowloon, Hong Kong.

The absence of explicit word boundaries is a distinctive characteristic of Chinese script that sets it apart from most alphabetic scripts and leads to word boundary disagreement among readers. Previous studies have examined how this feature may influence reading performance. However, further investigations are required to generate more ecologically valid and generalizable findings.
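
One way to quantify the agreement such a corpus documents is boundary-level agreement between two readers' segmentations of the same sentence. The sketch below is illustrative only, not the authors' procedure: segmentations are given as lists of words, and agreement is the proportion of between-character slots on which both readers make the same boundary decision.

```python
# Illustrative sketch: boundary-level agreement between two segmentations of
# the same Chinese sentence, each given as a list of words.

def boundary_set(words):
    """Character indices after which a word boundary falls (sentence-final one excluded)."""
    boundaries, pos = set(), 0
    for word in words[:-1]:
        pos += len(word)
        boundaries.add(pos)
    return boundaries

def boundary_agreement(seg_a, seg_b):
    """Proportion of between-character slots on which two readers agree."""
    assert "".join(seg_a) == "".join(seg_b), "segmentations must cover the same text"
    n_slots = len("".join(seg_a)) - 1
    if n_slots == 0:
        return 1.0
    a, b = boundary_set(seg_a), boundary_set(seg_b)
    agree = sum(1 for i in range(1, n_slots + 1) if (i in a) == (i in b))
    return agree / n_slots

# Two readers segment the same six-character string differently.
reader_1 = ["我们", "喜欢", "读书"]
reader_2 = ["我们", "喜欢", "读", "书"]
print(boundary_agreement(reader_1, reader_2))  # 0.8: they disagree on one of five slots
```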

Moving beyond word frequency based on tally counting: AI-generated familiarity estimates of words and phrases are an interesting additional index of language knowledge.

Behav Res Methods

December 2024

ETSI de Telecomunicación, Universidad Politécnica de Madrid, Avenida Complutense, 30, 28040, Madrid, Spain.

This study investigates the potential of large language models (LLMs) to estimate the familiarity of words and multi-word expressions (MWEs). We validated LLM estimates for isolated words using existing human familiarity ratings and found strong correlations. LLM familiarity estimates performed even better in predicting lexical decision and naming performance in megastudies than the best available word frequency measures.
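
The validation step described above amounts to correlating the two sets of estimates item by item. A minimal sketch follows, assuming a hypothetical CSV with one row per word and columns for the human ratings and the LLM estimates; the file and column names are placeholders, not from the study.

```python
# Hedged sketch: correlate LLM familiarity estimates with human ratings.
# "familiarity.csv" and its column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import pearsonr, spearmanr

ratings = pd.read_csv("familiarity.csv")  # columns: word, human_familiarity, llm_familiarity
ratings = ratings.dropna(subset=["human_familiarity", "llm_familiarity"])

r, p = pearsonr(ratings["human_familiarity"], ratings["llm_familiarity"])
rho, _ = spearmanr(ratings["human_familiarity"], ratings["llm_familiarity"])
print(f"Pearson r = {r:.2f} (p = {p:.3g}), Spearman rho = {rho:.2f}, n = {len(ratings)}")
```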

Despite being widely spoken and extensively studied by language and cognitive scientists, Italian lacks large resources of language processing data. The Italian Crowdsourcing Project (ICP) is a dataset of word recognition times and accuracy including responses to 130,465 words, making it the largest dataset of its kind in terms of items. The data were collected in an online word knowledge task in which over 156,000 native speakers of Italian took part.
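
Datasets of this kind are typically analyzed at the item level. The sketch below shows how trial-level crowdsourced responses could be aggregated into per-word accuracy and mean recognition time; the file and column names are assumptions for illustration, not the ICP's actual schema.

```python
# Hedged sketch: aggregate trial-level responses into item-level measures.
# "icp_trials.csv" and its columns (word, rt_ms, correct) are hypothetical.
import pandas as pd

trials = pd.read_csv("icp_trials.csv")

accuracy = trials.groupby("word")["correct"].mean().rename("accuracy")
mean_rt = (trials[trials["correct"] == 1]          # average RTs over correct responses only
           .groupby("word")["rt_ms"].mean().rename("mean_rt_ms"))
n_resp = trials.groupby("word").size().rename("n_responses")

item_stats = pd.concat([accuracy, mean_rt, n_resp], axis=1)
print(item_stats.head())
```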

Objective: To detect and classify features of stigmatizing and biased language in intensive care electronic health records (EHRs) using natural language processing techniques.

Materials And Methods: We first created a lexicon and regular expression lists from literature-driven stem words for linguistic features of stigmatizing patient labels, doubt markers, and scare quotes within EHRs. The lexicon was further extended using Word2Vec and GPT-3.
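
The lexicon-and-regex step lends itself to a compact illustration. The sketch below is a toy version under assumed stem words and categories; it is not the authors' lexicon or pipeline, only a demonstration of matching doubt markers and scare quotes in free-text notes.

```python
# Toy sketch of a lexicon/regex classifier for two of the feature types named
# above. The stems and the example note are illustrative, not from the study.
import re

LEXICON = {
    "doubt_marker": [r"\bclaim(s|ed|ing)?\b", r"\binsist(s|ed|ing)?\b",
                     r"\balleged(ly)?\b"],
    "scare_quotes": [r'"[a-z]+( [a-z]+)?"'],   # quoted one- or two-word phrases
}

PATTERNS = {label: re.compile("|".join(pats), re.IGNORECASE)
            for label, pats in LEXICON.items()}

def classify_note(text):
    """Return the feature categories whose patterns match the note."""
    return [label for label, pattern in PATTERNS.items() if pattern.search(text)]

note = 'Patient claims he took his medication but was "confused" on arrival.'
print(classify_note(note))  # ['doubt_marker', 'scare_quotes']
```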

Purpose: Information and communication technologies are crucial for social and professional integration, but access to technology can be difficult for people with physical impairments. Text entry, in particular, can be slow and tiring. We developed a free and open-source module for use with AAC (augmentative/alternative communication) software in French.
