We tested the hypothesis that languages can be classified by their degree of tonal rhythm (Jun, 2014). The tonal rhythms of English and Italian were quantified using the following parameters: (a) regularity of tonal alternations in time, measured as durational variability in peak-to-peak and valley-to-valley intervals; (b) magnitude of F0 excursions, measured as the range of frequencies covered by the speaker between consecutive F0 maxima and minima; (c) number of tonal target points per intonational unit; and (d) similarity of F0 rising and falling contours within intonational units. The results show that, as predicted by Jun's prosodic typology (2014), Italian has a stronger tonal rhythm than English, expressed by higher regularity in the distribution of F0 minima turning points, larger F0 excursions, and more frequent tonal targets, indicating alternating phonological H and L tones. This cross-language difference can be explained by the relative load of F0 and durational ratios on the perception and production of speech rhythm and prominence. We suggest that research on the role of speech rhythm in speech processing and language acquisition should not be restricted to syllabic rhythm, but should also examine the role of cross-language differences in tonal rhythm.
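The four parameters above can be computed directly from the time-stamped F0 maxima and minima of a pitch track. A minimal sketch follows; note that the specific operationalizations chosen here (coefficient of variation for (a), a rise/fall magnitude ratio as a proxy for (d)) are illustrative assumptions, not the paper's exact measures.

```python
import numpy as np

def tonal_rhythm_metrics(peak_times, peak_f0, valley_times, valley_f0):
    """Sketch of the four tonal-rhythm parameters, assuming F0 maxima
    (peaks) and minima (valleys) have already been extracted from the
    pitch track of one intonational unit."""
    # (a) Regularity of tonal alternations: durational variability of
    # peak-to-peak and valley-to-valley intervals, here as a coefficient
    # of variation (lower = more regular = stronger tonal rhythm).
    cv = lambda x: float(np.std(x) / np.mean(x))
    regularity = {"peak_cv": cv(np.diff(peak_times)),
                  "valley_cv": cv(np.diff(valley_times))}

    # Interleave maxima and minima in time order.
    t = np.concatenate([peak_times, valley_times])
    f0 = np.concatenate([peak_f0, valley_f0])
    steps = np.diff(f0[np.argsort(t)])

    # (b) Magnitude of F0 excursions: mean range covered between
    # consecutive F0 maxima and minima (in Hz here; semitones are
    # another plausible unit).
    mean_excursion = float(np.abs(steps).mean())

    # (c) Number of tonal target points in the intonational unit.
    n_targets = len(t)

    # (d) Crude proxy for similarity of rising and falling contours:
    # mean rise magnitude over mean fall magnitude (1.0 = symmetric).
    rises, falls = steps[steps > 0], -steps[steps < 0]
    symmetry = float(rises.mean() / falls.mean())

    return {"regularity": regularity,
            "mean_excursion_hz": mean_excursion,
            "n_targets": n_targets,
            "rise_fall_symmetry": symmetry}
```

On this sketch, a perfectly regular H-L alternation (equal intervals, equal rise and fall sizes) yields zero interval variability and a symmetry ratio of 1.0, which is the direction in which the abstract's results place Italian relative to English.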
DOI: http://dx.doi.org/10.1177/0023830919835849
Existing emotion-driven music generation models heavily rely on labeled data and lack interpretability and controllability of emotions. To address these limitations, a semi-supervised emotion-driven music generation model based on category-dispersed Gaussian mixture variational autoencoders is proposed. Initially, a controllable music generation model is introduced, which disentangles and manipulates rhythm and tonal features, enabling controlled music generation.
J Acoust Soc Am
November 2024
Department of Information Engineering and Computer Science, Feng Chia University, Taichung 407, Taiwan.
Neurocase
February 2024
Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia.
A 62-year-old musician (MM) developed amusia after a right middle-cerebral-artery infarction. Initially, MM showed melodic deficits while discriminating pitch-related differences in melodies, musical memory problems, and impaired sensitivity to tonal structures, but normal pitch discrimination and spectral resolution thresholds, and normal cognitive and language abilities. His rhythmic processing was intact when pitch variations were removed.
Psychol Music
May 2024
Music, Ageing and Rehabilitation Team, Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland.
Music that evokes strong emotional responses is often experienced as autobiographically salient. Through emotional experience, the musical features of songs could also contribute to their subjective autobiographical saliency. Songs that were popular during adolescence or young adulthood (ages 10-30) are more likely to evoke strong memories, a phenomenon known as the reminiscence bump.
Behav Neurosci
October 2024
Departamento de Investigacion Musical, Centro Morelense de las Artes.
Several studies in the last 40 years have used electroencephalography (EEG) to recognize patterns of brain electrical activity correlated with emotions evoked by various stimuli. Examples include the frontal alpha and theta asymmetry models, used to distinguish musical emotions and musical pleasure, respectively. Since these studies have used mainly tonal music, in this study we incorporated both tonal (n = 8) and atonal (n = 8) musical stimuli to observe the subjective and electrophysiological responses associated with valence, arousal, pleasure, and familiarity in 25 nonmusician Mexican adults (10 females, 15 males; mean age = 37).
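The frontal alpha asymmetry index referenced above is conventionally computed as the difference in log-transformed alpha-band power between homologous right and left frontal electrodes (e.g. F4 and F3). A minimal sketch, assuming artifact-free single-channel segments and an 8-13 Hz alpha band (the electrode pairing and band limits are illustrative conventions, not this study's reported pipeline):

```python
import numpy as np

def frontal_alpha_asymmetry(left, right, fs, band=(8.0, 13.0)):
    """Frontal alpha asymmetry: ln(right alpha power) - ln(left alpha power).
    Positive values indicate relatively greater right-hemisphere alpha power.
    `left` and `right` are 1-D EEG segments from homologous frontal
    electrodes (e.g. F3/F4), sampled at `fs` Hz."""
    def band_power(x):
        # Periodogram via FFT; sum power over the alpha band.
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return psd[mask].sum()
    return float(np.log(band_power(right)) - np.log(band_power(left)))
```

In practice, published pipelines typically estimate power with Welch averaging over multiple epochs rather than a single periodogram; the raw FFT here keeps the sketch self-contained.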