A long tradition in sound symbolism describes a host of sound-meaning linkages, or associations between individual speech sounds and concepts or object properties. Might sound symbolism extend beyond sound-meaning relationships to linkages between sounds and modes of thinking? Integrating sound symbolism with construal level theory, we investigate whether vowel sounds influence the mental level at which people represent and evaluate targets. We propose that back vowels evoke abstract, high-level construal, while front vowels induce concrete, low-level construal. Two initial studies link front vowels to greater visual and conceptual precision, consistent with a construal account. Three subsequent studies explore construal-dependent tradeoffs as a function of the vowel sound contained in the target's name. Evaluation of objects named with back vowels was driven by their high- over low-level features; front vowels reduced or reversed this differentiation. Thus, subtle linguistic cues appear capable of influencing the very nature of mental representation.
DOI: http://dx.doi.org/10.1037/a0035543
Data Brief
February 2025
Department of Electrical, Electronic and Communication Engineering, Military Institute of Science and Technology (MIST), Dhaka 1216, Bangladesh.
The dataset represents a significant advancement in Bengali lip-reading and visual speech recognition research, poised to drive future applications and technological progress. Although Bengali is the seventh most spoken language globally, with approximately 265 million speakers, it has been largely overlooked by the visual speech recognition research community. The dataset fills this gap by offering a pioneering resource tailored for Bengali lip-reading, comprising visual data from 150 speakers across 54 classes, encompassing Bengali phonemes, alphabets, and symbols.
J Acoust Soc Am
January 2025
Department of Apparel and Space Design, Kyoto Women's University, Kyoto, Kyoto 605-8501, Japan.
Ever since de Saussure [Course in General Linguistics (Columbia University Press, 1916)], theorists of language have assumed that the relation between the form and meaning of words is arbitrary. Recently, however, a body of empirical research has established that language is embodied and contains iconicity. Sound symbolism, an intrinsic link language users perceive between word sound and properties of referents, is a representative example of iconicity in language and has offered profound insights into theories of language pertaining to language processing, language acquisition, and evolution.
J Exp Child Psychol
January 2025
Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, LMU University Hospital, LMU Munich, 80336 München, Germany.
Early spelling depends on the ability to understand the alphabetic principle and to translate speech sounds into visual symbols (letters). Thus, the ability to associate sound-symbol pairs might be an important predictor of spelling development. Here, we examined the relation between sound-symbol learning (SSL) and early spelling skills.
Noise Health
January 2025
Department of Neurology, Faculty of Medicine, Ondokuz Mayis University, Samsun, Turkey.
Background: Patients with multiple sclerosis (MS) experience difficulties in understanding speech in noise despite having normal hearing.
Aim: This study aimed to determine the relationship between speech discrimination in noise (SDN) and medial olivocochlear reflex levels and to compare MS patients with a control group.
Material And Methods: Sixty participants with normal hearing, comprising 30 MS patients and 30 healthy controls, were included.
Entropy (Basel)
December 2024
Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
Can we turn AI black boxes into code? Although this mission sounds extremely challenging, we show that it is not entirely impossible by presenting a proof-of-concept method, MIPS, that can synthesize programs based on the automated mechanistic interpretability of neural networks trained to perform the desired task, auto-distilling the learned algorithm into Python code. We test MIPS on a benchmark of 62 algorithmic tasks that can be learned by an RNN and find it highly complementary to GPT-4: MIPS solves 32 of them, including 13 that are not solved by GPT-4 (which also solves 30). MIPS uses an integer autoencoder to convert the RNN into a finite state machine, then applies Boolean or integer symbolic regression to capture the learned algorithm.
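The FSM-extraction step described above can be illustrated with a minimal sketch: discretize the hidden states of a recurrent cell and enumerate an explicit transition table. All names below (a toy parity-counting cell, `extract_fsm`) are illustrative assumptions, not the paper's actual MIPS implementation.

```python
# Hedged sketch of distilling a recurrent cell into a finite state machine.
# The "trained" cell here is a toy that computes running parity of its inputs;
# its hidden state is already near-discrete, as assumed by this approach.

def rnn_step(h, x):
    """Toy recurrent cell: hidden state tracks parity of inputs seen so far."""
    return (h + x) % 2

def extract_fsm(step, states, alphabet):
    """Enumerate an explicit (state, symbol) -> state transition table."""
    return {(s, a): step(s, a) for s in states for a in alphabet}

def run(fsm, inputs, start=0):
    """Execute the extracted FSM on an input sequence."""
    s = start
    for x in inputs:
        s = fsm[(s, x)]
    return s

fsm = extract_fsm(rnn_step, states=[0, 1], alphabet=[0, 1])
print(run(fsm, [1, 0, 1, 1]))  # three ones -> odd parity -> 1
```

Once the transition table is explicit, the learned algorithm can be read off symbolically (here, addition mod 2), which is the spirit of the auto-distillation into Python code that the abstract describes.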