Does saying a novel word aloud help a learner to recognize it later? Previous research on the effect of production on this aspect of word learning is inconclusive, as both facilitatory and detrimental effects of production have been reported. In a set of three experiments, we sought to reconcile these seemingly contradictory findings by disentangling the effect of production from other factors. In Experiment 1, participants learned eight new words and their visual referents. On each trial, participants encountered a novel word twice: either (a) by hearing the same speaker produce it twice (Perception-Only condition) or (b) by first hearing the speaker once and then producing it themselves (Production condition). At test, participants saw two pictures while hearing a novel word and were asked to choose its correct referent. Experiment 2 was identical to Experiment 1, except that in the Perception-Only condition each word was spoken by two different speakers, equalizing talker variability between conditions. Experiment 3 was identical to Experiment 2, but at test the words were spoken by a novel speaker to assess the generalizability of the effect. Accuracy, reaction time, and eye movements to the target image were collected. Production had a facilitatory effect during early stages of learning (after short training), but its effect became detrimental after additional training. The results help to reconcile conflicting findings regarding the role of production in word learning. This work is relevant to a wide range of research on human learning in showing that the same factor may play a different role at different stages of learning. (PsycInfo Database Record (c) 2022 APA, all rights reserved.)
DOI: http://dx.doi.org/10.1037/xlm0001129
BMC Med Inform Decis Mak
January 2025
Institute of Mathematical Sciences, Centre for Health Analytics and Modelling (CHaM), Strathmore University, Nairobi, Kenya.
Background: Measures of diagnostic test accuracy provide evidence of how well a test correctly identifies or rules out disease. Commonly used diagnostic accuracy measures (DAMs) include sensitivity and specificity, predictive values, likelihood ratios, area under the receiver operating characteristic curve (AUROC), area under the precision-recall curve (AUPRC), diagnostic effectiveness (accuracy), disease prevalence, and the diagnostic odds ratio (DOR). Most available analysis tools perform accuracy testing for a single diagnostic test using summarized data.
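As a rough illustration of the measures listed in this entry (and not the summarized-data analysis tool the abstract describes), the short Python sketch below computes several DAMs from a single 2x2 table; the tp/fp/fn/tn counts are invented for the example.

```python
# Illustrative only: common diagnostic accuracy measures from one 2x2 table.
# The counts below are made up; they do not come from the cited study.
tp, fp, fn, tn = 90, 15, 10, 85   # true/false positives, false/true negatives

sensitivity = tp / (tp + fn)                   # true positive rate
specificity = tn / (tn + fp)                   # true negative rate
ppv = tp / (tp + fp)                           # positive predictive value
npv = tn / (tn + fn)                           # negative predictive value
lr_pos = sensitivity / (1 - specificity)       # positive likelihood ratio
lr_neg = (1 - sensitivity) / specificity       # negative likelihood ratio
accuracy = (tp + tn) / (tp + fp + fn + tn)     # diagnostic effectiveness
prevalence = (tp + fn) / (tp + fp + fn + tn)   # disease prevalence in the sample
dor = lr_pos / lr_neg                          # diagnostic odds ratio

print(f"Se={sensitivity:.2f} Sp={specificity:.2f} PPV={ppv:.2f} NPV={npv:.2f}")
print(f"LR+={lr_pos:.2f} LR-={lr_neg:.2f} Acc={accuracy:.2f} "
      f"Prev={prevalence:.2f} DOR={dor:.1f}")
```

AUROC and AUPRC are omitted here because they require per-threshold scores rather than a single summarized 2x2 table.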
J Exp Psychol Learn Mem Cogn
December 2024
Basque Center on Cognition, Brain and Language.
The present study uses event-related potentials (ERPs) to investigate lexicosemantic prediction in native speakers (L1) of English and advanced second language (L2) learners of English with Swedish as their L1. The main goal of the study was to examine whether learners recruit predictive mechanisms to the same extent as L1 speakers when a change in the linguistic environment renders prediction a useful strategy to pursue. The study, which uses a relatedness proportion paradigm adapted from Lau et al. …
J Exp Psychol Learn Mem Cogn
December 2024
University of Massachusetts-Amherst, Department of Psychological and Brain Sciences.
Listeners can use both lexical context (i.e., lexical knowledge activated by the word itself) and lexical predictions based on the content of a preceding sentence to adjust their phonetic categories to speaker idiosyncrasies.
J Cogn
January 2025
Department of Humanities, University of Trento, via Tommaso Gar 14, 38122, Trento, Italy.
The productive use of morphological information is considered one of the possible ways in which speakers of a language understand and learn unknown words. In the present study, we investigate whether, and how, adult L2 learners also exploit morphological information when processing unknown words, by analyzing the impact of language proficiency on the processing of novel derivations. Italian L2 learners, divided into three proficiency groups, participated in a lexical decision task in which pseudo-words could embed existing stems …
JAMIA Open
February 2025
Department of Medicine, University of Wisconsin-Madison, Madison, WI 53792, United States.
Objective: To evaluate large language models (LLMs) for pre-test diagnostic probability estimation and to compare their uncertainty estimation performance with that of a traditional machine learning classifier.
Materials and Methods: We assessed 2 instruction-tuned LLMs, Mistral-7B-Instruct and Llama3-70B-chat-hf, on predicting binary outcomes for Sepsis, Arrhythmia, and Congestive Heart Failure (CHF) using electronic health record (EHR) data from 660 patients. Three uncertainty estimation methods (Verbalized Confidence, Token Logits, and LLM Embedding+XGB) were compared against an eXtreme Gradient Boosting (XGB) classifier trained on raw EHR data.
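A minimal sketch of the kind of comparison this entry describes, assuming only that each method yields per-patient predicted probabilities that are evaluated on held-out data. The synthetic features, the embed_notes placeholder, and all hyperparameters are assumptions for illustration, not the paper's pipeline (which derived embeddings from Mistral-7B-Instruct and Llama3-70B-chat-hf over real EHR data).

```python
# Illustrative comparison: XGB on raw tabular features vs. XGB on (placeholder)
# LLM-style note embeddings, scored with AUROC and the Brier score.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, brier_score_loss

def embed_notes(notes):
    """Hypothetical stand-in: map each free-text note to a fixed-size vector.
    In the study this role is played by embeddings from an instruction-tuned LLM."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(notes), 64))

# Toy cohort standing in for the 660-patient EHR sample (binary label, e.g. sepsis).
n = 200
rng = np.random.default_rng(1)
X_raw = rng.normal(size=(n, 20))                      # raw structured EHR features
notes = [f"clinical note {i}" for i in range(n)]      # free-text notes (placeholder)
y = (X_raw[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

for name, X in [("raw-EHR XGB", X_raw), ("embedding XGB", embed_notes(notes))]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = XGBClassifier(n_estimators=100, max_depth=3)
    clf.fit(X_tr, y_tr)
    p = clf.predict_proba(X_te)[:, 1]                 # pre-test probability estimates
    print(name, "AUROC:", round(roc_auc_score(y_te, p), 3),
          "Brier:", round(brier_score_loss(y_te, p), 3))
```

Verbalized Confidence and Token Logits are not sketched here because they depend on prompting a specific LLM; only the downstream evaluation of probability estimates is shown.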