This study compares affective piano performance with speech production from the perspective of dynamics. Unlike previous research, it uses finger force and articulatory effort as indices of the dynamics of affective piano performance and speech production, respectively. Moreover, for the first time, physical constraints such as piano fingerings and speech articulatory constraints are included because of their potential contribution to different patterns of dynamics. A piano performance experiment and a speech production experiment were conducted in four emotions: anger, fear, happiness, and sadness. The results show that in both piano performance and speech production, anger and happiness generally have high dynamics, while sadness has the lowest. Fingerings interact with fear in the piano experiment, and articulatory constraints interact with anger in the speech experiment: under fear in piano performance and under anger in speech production, large physical constraints produce significantly higher dynamics than small physical constraints. Using production experiments, this study is the first to support previous perception studies on the relations between affective music and speech. Moreover, it is the first to provide quantitative evidence for the importance of considering motor aspects such as dynamics when comparing music performance and speech production, in which motor mechanisms play a crucial role.


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4495307
DOI: http://dx.doi.org/10.3389/fpsyg.2015.00886

Publication Analysis

Top Keywords

speech production (28)
piano performance (24)
performance speech (20)
affective piano (12)
physical constraints (12)
speech (11)
relations affective (8)
affective music (8)
music speech (8)
dynamics (8)

Similar Publications

Simulating Early Phonetic and Word Learning Without Linguistic Categories.

Dev Sci

March 2025

Laboratoire de Sciences Cognitives et de Psycholinguistique, Département d'Études Cognitives, ENS, EHESS, CNRS, PSL University, Paris, France.

Before they even talk, infants become sensitive to the speech sounds of their native language and recognize the auditory form of an increasing number of words. Traditionally, these early perceptual changes are attributed to an emerging knowledge of linguistic categories such as phonemes or words. However, there is growing skepticism surrounding this interpretation due to limited evidence of category knowledge in infants.


Perception and production of music and speech rely on auditory-motor coupling, a mechanism which has been linked to temporally precise oscillatory coupling between auditory and motor regions of the human brain, particularly in the beta frequency band. Recently, brain imaging studies using magnetoencephalography (MEG) have also shown that accurate auditory temporal predictions specifically depend on phase coherence between auditory and motor cortical regions. However, it is not yet clear whether this tight oscillatory phase coupling is an intrinsic feature of the auditory-motor loop, or whether it is only elicited by task demands.


Purpose: Research on vestibular function tests has advanced significantly over the past century. This study aims to evaluate research productivity, identify top contributors, and assess global collaboration to provide a comprehensive overview of trends and advancements in the field.

Method: A scientometric analysis was conducted using publications from the Scopus database, retrieved on January 5, 2024.


Introduction: Laryngeal muscle physiology is integral to many speech, voice, swallowing, and respiratory functions. A key determinant of a muscle's contractile properties, including its fatigue profile and capacity for force production, is the myosin heavy chain (MyHC) isoform that predominates in the muscle. This study surveys literature on the MyHC compositions of mammalian intrinsic laryngeal skeletal muscle to illustrate trends and gaps in laryngeal muscle fiber typing techniques, models, and concepts.


Unveiling Schizophrenia: a study with generalized functional linear mixed model via the investigation of functional random effects.

Biostatistics

December 2024

Center for Applied Statistics, School of Statistics, Renmin University of China, No. 59 Zhongguancun Street, Beijing, 100872, P.R. China.

Previous studies have identified attenuated pre-speech activity and speech sound suppression in individuals with schizophrenia, with similar patterns observed in basic tasks entailing button-pressing to perceive a tone. However, it remains unclear whether these patterns are uniform across individuals or vary from person to person. Motivated by electroencephalographic (EEG) data from a schizophrenia study, we develop a generalized functional linear mixed model (GFLMM) for repeated measurements by incorporating subject-specific functional random effects associated with multiple functional predictors.

