Classically, in the bouba-kiki association task, a subject is asked to find the best association between one of two shapes (a round one and a spiky one) and one of two pseudowords (bouba and kiki). Numerous studies report that spiky shapes are associated with kiki, and round shapes with bouba. This task is likely the most prevalent in the study of non-conventional relationships between linguistic forms and meanings, also known as sound symbolism. However, associative tasks are explicit in the sense that they highlight phonetic and visual contrasts and require subjects to establish a crossmodal link between stimuli of different natures. Additionally, recent studies have raised the question of whether visual resemblances between the target shapes and the letters explain the pattern of association, at least in literate subjects. In this paper, we report a more implicit testing paradigm for the bouba-kiki effect, using a lexical decision task with character strings presented in round or spiky frames. Pseudowords and words are, furthermore, displayed in either an angular or a curvy font to investigate a possible graphemic bias. Innovative analyses of response times are performed with GAMLSS models, which offer a large range of possible distributions for the error terms, and a generalized Gamma distribution is found to be the most appropriate. No sound-symbolic effect appears to be significant, but an interaction effect is observed, in particular, between spiky shapes and angular letters, leading to faster response times. We discuss these results with respect to the visual saliency of angular shapes, priming, brain activation, synaesthesia and ideasthesia.
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6303039 | PMC |
| http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0208874 | PLOS |
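As a hedged illustration of the distributional modelling mentioned in the abstract: GAMLSS fitting is normally done with the R gamlss package, and the full analysis places covariates (frame shape, font, and their interaction) on the distribution parameters. The Python sketch below only compares candidate marginal distributions of response times by maximum likelihood and AIC; the simulated data `rt_ms`, the candidate list, and all names are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (not the authors' R/gamlss analysis): compare candidate
# response-time distributions by maximum likelihood and AIC. The real
# GAMLSS analysis also regresses the distribution parameters on frame
# shape, font, and their interaction; here only the marginal fit is shown.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rt_ms = rng.gamma(4.0, 150.0, size=2000)  # simulated placeholder RTs (ms)

candidates = {
    "generalized gamma": stats.gengamma,
    "gamma": stats.gamma,
    "lognormal": stats.lognorm,
    "inverse Gaussian": stats.invgauss,
}

def aic(dist, data):
    """Fit `dist` by maximum likelihood and return its AIC on `data`."""
    params = dist.fit(data)
    loglik = np.sum(dist.logpdf(data, *params))
    return 2 * len(params) - 2 * loglik

for name, dist in candidates.items():
    print(f"{name:>18}: AIC = {aic(dist, rt_ms):.1f}")
```

On real trial-level data, the same AIC comparison would indicate whether the generalized Gamma is preferred, as the abstract reports; on simulated data the comparison is only illustrative.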
Data Brief
February 2025
Department of Electrical, Electronic and Communication Engineering, Military Institute of Science and Technology (MIST), Dhaka 1216, Bangladesh.
The dataset represents a significant advancement in Bengali lip-reading and visual speech recognition research, poised to drive future applications and technological progress. Although Bengali is the seventh most spoken language in the world, with approximately 265 million speakers, it has been largely overlooked by the research community. The dataset fills this gap by offering a pioneering resource tailored for Bengali lip-reading, comprising visual data from 150 speakers across 54 classes that encompass Bengali phonemes, alphabets, and symbols.
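Purely as a hypothetical sketch (the abstract does not specify the dataset's file layout), the snippet below assumes a directory tree of the form `<root>/<speaker_id>/<class_label>/*.mp4` for the 150 speakers and 54 classes, and enumerates (path, speaker, label) triples that a lip-reading pipeline could consume.

```python
# Hypothetical layout only: the abstract does not describe the dataset's
# on-disk structure. Assumed tree: <root>/<speaker_id>/<class_label>/*.mp4
# for 150 speakers and 54 classes (phonemes, alphabets, symbols).
from pathlib import Path
from typing import Iterator, NamedTuple

class Clip(NamedTuple):
    path: Path
    speaker: str
    label: str

def iter_clips(root: str) -> Iterator[Clip]:
    """Yield (video path, speaker id, class label) for every clip found."""
    for speaker_dir in sorted(p for p in Path(root).iterdir() if p.is_dir()):
        for class_dir in sorted(p for p in speaker_dir.iterdir() if p.is_dir()):
            for clip in sorted(class_dir.glob("*.mp4")):
                yield Clip(clip, speaker_dir.name, class_dir.name)

# Example: check class coverage (should approach 54 distinct labels).
# from collections import Counter
# print(Counter(c.label for c in iter_clips("bengali_lipreading_root")))
```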
J Acoust Soc Am
January 2025
Department of Apparel and Space Design, Kyoto Women's University, Kyoto, Kyoto 605-8501, Japan.
Ever since de Saussure [Course in General Linguistics (Columbia University Press, 1916)], theorists of language have assumed that the relation between the form and the meaning of words is arbitrary. However, a recent body of empirical research has established that language is embodied and contains iconicity. Sound symbolism, an intrinsic link that language users perceive between word sound and properties of referents, is a representative example of iconicity in language and has offered profound insights into theories of language processing, language acquisition, and language evolution.
J Exp Child Psychol
January 2025
Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, LMU University Hospital, LMU Munich, 80336 München, Germany.
Early spelling depends on the ability to understand the alphabetic principle and to translate speech sounds into visual symbols (letters). Thus, the ability to associate sound-symbol pairs might be an important predictor of spelling development. Here, we examined the relation between sound-symbol learning (SSL) and early spelling skills.
Noise Health
January 2025
Department of Neurology, Faculty of Medicine, Ondokuz Mayis University, Samsun, Turkey.
Background: Patients with multiple sclerosis (MS) experience difficulties in understanding speech in noise despite having normal hearing.
Aim: This study aimed to determine the relationship between speech discrimination in noise (SDN) and medial olivocochlear reflex levels and to compare MS patients with a control group.
Materials and Methods: Sixty participants with normal hearing, comprising 30 MS patients and 30 healthy controls, were included.
Entropy (Basel)
December 2024
Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
Can we turn AI black boxes into code? Although this mission sounds extremely challenging, we show that it is not entirely impossible by presenting a proof-of-concept method, MIPS, that can synthesize programs based on the automated mechanistic interpretability of neural networks trained to perform the desired task, auto-distilling the learned algorithm into Python code. We test MIPS on a benchmark of 62 algorithmic tasks that can be learned by an RNN and find it highly complementary to GPT-4: MIPS solves 32 of them, including 13 that are not solved by GPT-4 (which also solves 30). MIPS uses an integer autoencoder to convert the RNN into a finite state machine, then applies Boolean or integer symbolic regression to capture the learned algorithm.
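As a rough, hedged illustration of the pipeline described above (not the MIPS implementation): MIPS uses an integer autoencoder to discretize RNN hidden states; in the sketch below, k-means clustering stands in for that step, and the resulting state labels are tabulated into a finite-state-machine transition table. The arrays `hidden_states` and `inputs`, the cluster count, and the function name are all illustrative assumptions.

```python
# Illustrative sketch, not the MIPS implementation: MIPS uses an integer
# autoencoder to discretize RNN hidden states; here k-means clustering is a
# crude stand-in. The discrete state labels are then tabulated into a
# finite-state-machine transition table (state, input symbol) -> next state.
import numpy as np
from sklearn.cluster import KMeans

def extract_fsm(hidden_states: np.ndarray, inputs: np.ndarray, n_states: int = 8):
    """hidden_states: (T+1, d) hidden-vector trajectory from a trained RNN;
    inputs: (T,) integer input symbols fed at each step."""
    labels = KMeans(n_clusters=n_states, n_init=10, random_state=0).fit_predict(hidden_states)
    table = {}
    for t, sym in enumerate(inputs):
        table[(int(labels[t]), int(sym))] = int(labels[t + 1])
    return table

# A Boolean/integer symbolic-regression step would then search for a compact
# formula reproducing this table; that stage is omitted from the sketch.
```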