A fundamental assumption regarding spoken language is that the relationship between sound and meaning is essentially arbitrary. The present investigation questioned this arbitrariness assumption by examining the influence of potential non-arbitrary mappings between sound and meaning on word learning in adults. Native English-speaking monolinguals learned meanings for Japanese words in a vocabulary-learning task. Spoken Japanese words were paired with English meanings that: (1) matched the actual meaning of the Japanese word (e.g., "hayai" paired with fast); (2) were antonyms of the actual meaning (e.g., "hayai" paired with slow); or (3) were randomly selected from the set of antonyms (e.g., "hayai" paired with blunt). The results showed that participants learned the actual English equivalents and the antonyms of Japanese words more accurately, and responded to them more quickly, than the randomly paired meanings. These findings suggest that natural languages contain non-arbitrary links between sound structure and meaning and, further, that learners are sensitive to these non-arbitrary relationships within spoken language.
DOI: http://dx.doi.org/10.1016/j.cognition.2009.04.001
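To make the three pairing conditions concrete, the sketch below shows one way such word-meaning lists could be assembled. The word entries, function name, and condition labels are illustrative assumptions, not the authors' actual materials or procedure.

```python
import random

# Hypothetical stimulus set: each spoken Japanese word with its actual English
# meaning and that meaning's antonym (illustrative entries only).
STIMULI = {
    "hayai":   {"meaning": "fast",  "antonym": "slow"},
    "osoi":    {"meaning": "slow",  "antonym": "fast"},
    "surudoi": {"meaning": "sharp", "antonym": "blunt"},
}

def build_pairs(stimuli, condition, rng=random):
    """Pair each spoken word with an English label under one learning condition."""
    antonym_pool = [entry["antonym"] for entry in stimuli.values()]
    pairs = {}
    for word, entry in stimuli.items():
        if condition == "match":        # (1) actual meaning, e.g. "hayai" -> fast
            pairs[word] = entry["meaning"]
        elif condition == "antonym":    # (2) opposite meaning, e.g. "hayai" -> slow
            pairs[word] = entry["antonym"]
        elif condition == "random":     # (3) unrelated draw from the antonym set
            choices = [a for a in antonym_pool
                       if a not in (entry["meaning"], entry["antonym"])]
            pairs[word] = rng.choice(choices)   # e.g. "hayai" -> blunt
        else:
            raise ValueError(f"unknown condition: {condition}")
    return pairs

print(build_pairs(STIMULI, "random"))
```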
Annu Rev Biomed Eng
January 2025
Department of Neurological Surgery, University of California, Davis, California, USA.
People who have lost the ability to speak due to neurological injuries would greatly benefit from assistive technology that provides a fast, intuitive, and naturalistic means of communication. This need can be met with brain-computer interfaces (BCIs): medical devices that bypass injured parts of the nervous system and directly transform neural activity into outputs such as text or sound. BCIs for restoring movement and typing have progressed rapidly in recent clinical trials; speech BCIs are the next frontier.
J Otol
July 2024
Department of Otolaryngology Head and Neck Surgery, Chinese PLA General Hospital, Chinese PLA Medical School, Beijing, 100853, China.
Purpose: To analyze the effect of right versus left long-term single-sided deafness (SSD) on sound source localization (SSL), discuss the necessity of intervention and treatment for SSD patients, and analyze the therapeutic effect of long-term unilateral cochlear implantation (UCI) from the perspective of SSL.
Methods: This study included 25 patients with SSD, 11 patients with UCI, and 30 participants with normal hearing (NH). Their SSL ability was tested by obtaining their average root-mean-square (RMS) error values on the SSL test.
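For reference, the reported localization metric is a root-mean-square error between presented and perceived source positions. The following is a minimal sketch under assumed conditions (azimuths in degrees, hypothetical trial data); the study's exact test setup is not reproduced here.

```python
import numpy as np

def rms_localization_error(presented_deg, perceived_deg):
    """Root-mean-square error between presented and perceived azimuths, in degrees."""
    presented = np.asarray(presented_deg, dtype=float)
    perceived = np.asarray(perceived_deg, dtype=float)
    return float(np.sqrt(np.mean((perceived - presented) ** 2)))

# Hypothetical trials: loudspeaker azimuths and one listener's pointing responses.
presented = [-60, -30, 0, 30, 60]
perceived = [-45, -30, 15, 45, 60]
print(f"RMS error: {rms_localization_error(presented, perceived):.1f} deg")
```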
Front Psychiatry
December 2024
Department for Child and Adolescent Psychiatry/Psychotherapy, University Clinic for Psychosomatic Medicine and Psychotherapy Ulm, Ulm, Germany.
Background: The Patient Health Questionnaire (PHQ-9) is a popular tool for assessing depressive symptoms in both general and clinical populations. The present study used a large representative sample of the German adult population to confirm desired psychometric functioning and to provide updated population norms.
Methods: The following psychometric properties were assessed: (i) item characteristics (item means, standard deviations, and inter-item correlations); (ii) construct validity (correlations of the PHQ-9 sum score with scores from instruments assessing depression, anxiety, and somatization: GAD-7, BSI-18); (iii) internal consistency (coefficient omega); (iv) factorial validity (via confirmatory factor analysis of the assumed one-factor model); and (v) measurement invariance (via multi-group confirmatory factor analyses across gender, age, income, and education).
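As a rough illustration of properties (i) and (iii), the sketch below computes item statistics and a coefficient-omega estimate from a one-factor solution. It assumes the third-party factor_analyzer package and the standard omega-total formula for standardized items; the study's actual estimation software and model details may differ.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # third-party: pip install factor_analyzer

def item_statistics(items: pd.DataFrame):
    """Item means, standard deviations, and inter-item correlations (property i)."""
    return items.mean(), items.std(ddof=1), items.corr()

def omega_total(items: pd.DataFrame) -> float:
    """Omega total from standardized loadings of a one-factor model (property iii)."""
    fa = FactorAnalyzer(n_factors=1, rotation=None)
    fa.fit(items)
    loadings = fa.loadings_.ravel()
    common = loadings.sum() ** 2              # variance explained by the common factor
    unique = np.sum(1.0 - loadings ** 2)      # summed unique variances
    return common / (common + unique)

# Simulated 0-3 item scores driven by one latent factor (hypothetical data only).
rng = np.random.default_rng(0)
latent = rng.standard_normal(500)
raw = 0.7 * latent[:, None] + rng.standard_normal((500, 9))
items = pd.DataFrame(np.clip(np.round(raw + 1.5), 0, 3).astype(int),
                     columns=[f"phq{i}" for i in range(1, 10)])
means, sds, corrs = item_statistics(items)
print(f"omega total = {omega_total(items):.2f}")
```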
J Neural Eng
December 2024
Trinity College Dublin, College Green, Dublin 2, Dublin, D02 PN40, Ireland.
Speech comprehension involves detecting words and interpreting their meaning according to the preceding semantic context. This process is thought to be underpinned by a predictive neural system that uses that context to anticipate upcoming words. Recent work has demonstrated that such a predictive process can be probed from neural signals recorded during ecologically valid speech-listening tasks using linear lagged models, such as the temporal response function (TRF).
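A temporal response function of this kind is typically estimated as a regularized linear mapping from time-lagged stimulus features (e.g., the speech envelope) to the recorded neural signal. The sketch below is a minimal ridge-regression version with assumed parameters (sampling rate, lag window, regularization strength) and simulated data; it is not the authors' analysis pipeline.

```python
import numpy as np

def lagged_design(stimulus, lags):
    """Design matrix whose columns are time-shifted copies of the stimulus."""
    n = len(stimulus)
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stimulus[:n - lag]
        else:
            X[:lag, j] = stimulus[-lag:]
    return X

def fit_trf(stimulus, response, fs=64, tmin=0.0, tmax=0.4, alpha=1.0):
    """Ridge-regression estimate of a temporal response function (TRF)."""
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    X = lagged_design(stimulus, lags)
    # Ridge solution: w = (X'X + alpha * I)^-1 X'y
    weights = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ response)
    return lags / fs, weights

# Simulated data: a speech-envelope-like regressor and one noisy EEG channel.
rng = np.random.default_rng(1)
envelope = np.abs(rng.standard_normal(64 * 60))   # 60 s at 64 Hz
eeg = np.convolve(envelope, np.hanning(16), mode="same") + rng.standard_normal(envelope.size)
times, trf = fit_trf(envelope, eeg)               # TRF weights over 0-400 ms lags
```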
Cureus
November 2024
Biological Sciences, Ridge High School, Basking Ridge, USA.
Objectives: Emotional intelligence (EI) refers to the ability to perceive, understand, and manage emotions effectively, a skill essential in the high-stress environment of healthcare. Research suggests that healthcare professionals with higher EI are better equipped to handle stress, maintain resilience, and make sound judgments under pressure, ultimately enhancing job performance. This paper examines EI's predictive role in managing job performance and resistance to stress among healthcare professionals, aiming to explore how elevated EI may strengthen their coping abilities and contribute to improved stress management, professional judgment, and resilience in challenging work settings.