Sharp and round shapes of seen objects have distinct influences on vowel and consonant articulation.

Psychol Res

Institute of Behavioural Sciences, Phonetics and Speech Synthesis Research Group, University of Helsinki, Siltavuorenpenger 5 A, PL 9, 00014 Helsinki, Finland.

Published: July 2017

Shape- and size-related sound symbolism phenomena assume that, for example, the vowel [i] and the consonant [t] are associated with sharp-shaped and small-sized objects, whereas [ɑ] and [m] are associated with round and large objects. It has been proposed that these phenomena are mostly based on the involvement of articulatory processes in representing the shape and size properties of objects. For example, [i] might be associated with sharp and small objects because it is produced with a specific front-close configuration of the articulators. Nevertheless, very little work has examined whether these object properties indeed have an impact on speech sound vocalization. In the present study, participants were presented with a sharp- or round-shaped object in a small or large size. They were required to pronounce one of two meaningless speech units (e.g., [i] or [ɑ]) according to the size or shape of the object. We investigated how a task-irrelevant object property (e.g., the shape when responses are made according to size) influences reaction times, accuracy, intensity, fundamental frequency, and the first and second formants (F1, F2) of the vocalizations. Size did not influence the vocal responses, but shape did. Specifically, the vowel [i] and the consonant [t] were vocalized relatively rapidly when the object was sharp-shaped, whereas [u] and [m] were vocalized relatively rapidly when the object was round-shaped. The study supports the view that shape-related sound symbolism phenomena might reflect a mapping of the perceived shape onto the corresponding articulatory gestures.
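For readers who want to compute the same acoustic measures (intensity, fundamental frequency, and the first two formants) from a recorded vocal response, the following is a minimal sketch using the parselmouth Python bindings for Praat; the file name and analysis settings are illustrative assumptions, not the authors' actual analysis pipeline.

```python
# Minimal sketch: extracting intensity, F0, and formants F1/F2 from one recorded
# vocal response with parselmouth (Python bindings for Praat).
# "response.wav" is a hypothetical file name; analysis parameters are Praat defaults.
import numpy as np
import parselmouth

snd = parselmouth.Sound("response.wav")

# Mean intensity (dB), ignoring undefined frames.
intensity_vals = snd.to_intensity().values.flatten()
mean_intensity_db = intensity_vals[np.isfinite(intensity_vals)].mean()

# Mean fundamental frequency (Hz) over voiced frames (unvoiced frames are 0).
f0 = snd.to_pitch().selected_array["frequency"]
mean_f0_hz = f0[f0 > 0].mean()

# First and second formants (Hz) sampled at the temporal midpoint of the response.
formants = snd.to_formant_burg()
midpoint = snd.duration / 2
f1_hz = formants.get_value_at_time(1, midpoint)
f2_hz = formants.get_value_at_time(2, midpoint)

print(f"intensity {mean_intensity_db:.1f} dB, F0 {mean_f0_hz:.1f} Hz, "
      f"F1 {f1_hz:.0f} Hz, F2 {f2_hz:.0f} Hz")
```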


Source: http://dx.doi.org/10.1007/s00426-016-0778-x

Publication Analysis

Top Keywords: sound symbolism (8), symbolism phenomena (8), vowel [i] (8), [i] consonant (8), consonant [t] (8), vocalized rapidly (8), rapidly object (8), shape (7), object (6), objects (5)

Similar Publications

The dataset represents a significant advancement in Bengali lip-reading and visual speech recognition research, poised to drive future applications and technological progress. Despite Bengali's status as the world's seventh most spoken language, with approximately 265 million speakers, linguistically rich and widely spoken languages like Bengali have been largely overlooked by the research community. The dataset fills this gap, offering pioneering visual data for Bengali lip-reading from 150 speakers across 54 classes encompassing Bengali phonemes, alphabets, and symbols.


Ever since de Saussure [Course in General Linguistics (Columbia University Press, 1916)], theorists of language have assumed that the relation between the form and the meaning of words is arbitrary. However, a recent body of empirical research has established that language is embodied and contains iconicity. Sound symbolism, an intrinsic link that language users perceive between the sound of a word and properties of its referent, is a representative example of iconicity in language and has offered profound insights into theories of language processing, language acquisition, and evolution.


Sound-symbol learning and the relationship to spelling in first-grade children.

J Exp Child Psychol

January 2025

Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, LMU University Hospital, LMU Munich, 80336 München, Germany.

Early spelling depends on the ability to understand the alphabetic principle and to translate speech sounds into visual symbols (letters). Thus, the ability to associate sound-symbol pairs might be an important predictor of spelling development. Here, we examined the relation between sound-symbol learning (SSL) and early spelling skills.


Background: Patients with multiple sclerosis (MS) experience difficulties in understanding speech in noise despite having normal hearing.

Aim: This study aimed to determine the relationship between speech discrimination in noise (SDN) and medial olivocochlear reflex levels and to compare MS patients with a control group.

Material And Methods: Sixty participants with normal hearing, comprising 30 MS patients and 30 healthy controls, were included.


Can we turn AI black boxes into code? Although this mission sounds extremely challenging, we show that it is not entirely impossible by presenting a proof-of-concept method, MIPS, that can synthesize programs based on the automated mechanistic interpretability of neural networks trained to perform the desired task, auto-distilling the learned algorithm into Python code. We test MIPS on a benchmark of 62 algorithmic tasks that can be learned by an RNN and find it highly complementary to GPT-4: MIPS solves 32 of them, including 13 that are not solved by GPT-4 (which also solves 30). MIPS uses an integer autoencoder to convert the RNN into a finite state machine, then applies Boolean or integer symbolic regression to capture the learned algorithm.
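As a rough illustration of the first step described above (this is not the MIPS code, only a toy sketch with made-up sizes and a randomly weighted RNN), one can quantize an RNN's continuous hidden states into a handful of discrete states and read off a finite-state transition table:

```python
# Toy sketch: discretizing an RNN's hidden states into a finite state machine.
# All dimensions, weights, and data here are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# A tiny random RNN: h_{t+1} = tanh(W h_t + U x_t).
hidden_dim, n_states = 8, 4
W = rng.normal(scale=0.5, size=(hidden_dim, hidden_dim))
U = rng.normal(scale=0.5, size=(hidden_dim, 1))

def run_rnn(bits):
    """Return the sequence of hidden states for a binary input sequence."""
    h = np.zeros(hidden_dim)
    states = []
    for x in bits:
        h = np.tanh(W @ h + U[:, 0] * x)
        states.append(h.copy())
    return np.array(states)

# Collect hidden states from many random binary input sequences.
sequences = [rng.integers(0, 2, size=20) for _ in range(200)]
all_states = np.vstack([run_rnn(seq) for seq in sequences])

# Step 1: quantize continuous hidden states into a few discrete "machine states".
kmeans = KMeans(n_clusters=n_states, n_init=10, random_state=0).fit(all_states)

# Step 2: count (state, input) -> next-state transitions to build an FSM table.
transitions = np.zeros((n_states, 2, n_states), dtype=int)
for seq in sequences:
    labels = kmeans.predict(run_rnn(seq))
    for t in range(len(seq) - 1):
        transitions[labels[t], seq[t + 1], labels[t + 1]] += 1

fsm = transitions.argmax(axis=2)  # most likely next state per (state, input)
print("FSM transition table (rows: state, columns: input bit):\n", fsm)
```

The symbolic-regression step that MIPS applies on top of such a discretized machine is not shown here; the sketch only conveys the idea of turning continuous recurrent dynamics into a finite, inspectable transition table.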
