Capturing Cross-linguistic Differences in Macro-rhythm: The Case of Italian and English.

Lang Speech

Basque Centre on Cognition, Brain and Language, Spain; Ikerbasque-Basque Foundation for Science, Spain.

Published: June 2020

We tested the hypothesis that languages can be classified by their degree of tonal rhythm (Jun, 2014). The tonal rhythms of English and Italian were quantified using the following parameters: (a) regularity of tonal alternations in time, measured as durational variability in peak-to-peak and valley-to-valley intervals; (b) magnitude of F0 excursions, measured as the range of frequencies covered by the speaker between consecutive F0 maxima and minima; (c) number of tonal target points per intonational unit; and (d) similarity of F0 rising and falling contours within intonational units. The results show that, as predicted by Jun's prosodic typology (2014), Italian has a stronger tonal rhythm than English, expressed by higher regularity in the distribution of F0 minima turning points, larger F0 excursions, and more frequent tonal targets, indicating alternating phonological H and L tones. This cross-language difference can be explained by the relative load of F0 and durational ratios on the perception and production of speech rhythm and prominence. We suggest that research on the role of speech rhythm in speech processing and language acquisition should not be restricted to syllabic rhythm, but should also examine the role of cross-language differences in tonal rhythm.
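As a rough illustration of how parameters (a) through (c) might be computed from F0 turning points, here is a minimal Python sketch; the function names and example values are hypothetical, and the paper's exact variability measure (e.g., nPVI versus a coefficient of variation) is not specified here, so the coefficient of variation stands in for it:

    import numpy as np

    def peak_interval_variability(peak_times_s):
        # Parameter (a): durational variability of peak-to-peak intervals,
        # here as the coefficient of variation (one possible measure).
        intervals = np.diff(np.sort(peak_times_s))
        return np.std(intervals) / np.mean(intervals)

    def mean_excursion_hz(extrema_hz):
        # Parameter (b): mean F0 range between consecutive maxima and minima.
        return np.mean(np.abs(np.diff(extrema_hz)))

    def targets_per_unit(n_targets, n_units):
        # Parameter (c): tonal target points per intonational unit.
        return n_targets / n_units

    # Hypothetical turning points for one intonational phrase:
    peaks_s = [0.31, 0.74, 1.18, 1.66]          # times of F0 maxima (s)
    extrema_hz = [210, 140, 205, 138, 200]      # alternating H/L targets (Hz)
    print(peak_interval_variability(peaks_s))   # lower = more regular rhythm
    print(mean_excursion_hz(extrema_hz))        # larger = bigger excursions
    print(targets_per_unit(len(extrema_hz), 1)) # more = denser tonal rhythm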


Source
DOI: http://dx.doi.org/10.1177/0023830919835849

Publication Analysis

Top Keywords

tonal rhythm (12)
speech rhythm (8)
tonal (7)
rhythm (6)
capturing cross-linguistic (4)
cross-linguistic differences (4)
differences macro-rhythm (4)
macro-rhythm case (4)
case italian (4)
italian english (4)

Similar Publications

Existing emotion-driven music generation models rely heavily on labeled data and offer limited interpretability and controllability of emotions. To address these limitations, a semi-supervised emotion-driven music generation model based on category-dispersed Gaussian mixture variational autoencoders is proposed. First, a controllable music generation model is introduced that disentangles and manipulates rhythm and tonal features.
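As a rough illustration of the Gaussian-mixture latent idea (not the cited model's actual parameterization; all names and values below are hypothetical), each emotion category can be given its own latent Gaussian, so that sampling conditioned on a chosen category steers generation:

    import numpy as np

    rng = np.random.default_rng(0)
    n_emotions, latent_dim = 4, 16
    mu = rng.normal(size=(n_emotions, latent_dim))  # per-emotion means
    log_sigma = rng.normal(scale=0.1, size=(n_emotions, latent_dim))

    def sample_latent(emotion):
        # Draw z ~ N(mu_k, diag(sigma_k^2)) for emotion category k;
        # z would then be fed to a decoder to generate music.
        eps = rng.standard_normal(latent_dim)
        return mu[emotion] + np.exp(log_sigma[emotion]) * eps

    z = sample_latent(0)  # latent code for the first emotion category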

Article Synopsis
  • The study focuses on a new transformer-based architecture, TNet-Full, designed for classifying Mandarin tones using speech characteristics like fundamental frequency (F0) values and syllable/word boundaries.
  • Key components of TNet-Full include a contour encoder, rhythm encoder, and cross-attention mechanisms that enhance the interaction between tone contours and rhythmic information (sketched below).
  • Compared with a simpler baseline, the model improves accuracy by 24.4% for read speech and 6.3% for conversational speech, indicating better tone recognition through stable temporal organization of syllables.
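As a rough sketch of the cross-attention step referenced in the synopsis (the layer sizes, tensor shapes, and query/key assignment are assumptions, not TNet-Full's actual configuration), contour frames can attend to rhythm features using PyTorch's built-in multi-head attention:

    import torch
    import torch.nn as nn

    d_model = 64
    contour = torch.randn(1, 20, d_model)  # per-frame F0 contour features
    rhythm = torch.randn(1, 5, d_model)    # per-syllable boundary/rhythm features

    attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=4, batch_first=True)
    # Contour frames query rhythmic structure (query=contour, key/value=rhythm).
    fused, weights = attn(query=contour, key=rhythm, value=rhythm)
    print(fused.shape)  # torch.Size([1, 20, 64])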

A 62-year-old musician, MM, developed amusia after a right middle cerebral artery infarction. Initially, MM showed deficits in discriminating pitch-related differences in melodies, impaired musical memory, and reduced sensitivity to tonal structure, but normal pitch-discrimination and spectral-resolution thresholds and intact cognitive and language abilities. His rhythmic processing was preserved when pitch variations were removed.


Emotional and musical factors combined with song-specific age predict the subjective autobiographical saliency of music in older adults.

Psychol Music

May 2024

Music, Ageing and Rehabilitation Team, Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland.

Music that evokes strong emotional responses is often experienced as autobiographically salient. Through emotional experience, the musical features of songs could also contribute to their subjective autobiographical saliency. Songs that were popular during adolescence or young adulthood (ages 10-30) are more likely to evoke strong memories, a phenomenon known as the reminiscence bump.


Several studies in the last 40 years have used electroencephalography (EEG) to recognize patterns of brain electrical activity correlated with emotions evoked by various stimuli; for example, frontal alpha and theta asymmetry models have been used to distinguish musical emotions and musical pleasure, respectively. Since these studies have mainly used tonal music, this study incorporated both tonal (n = 8) and atonal (n = 8) musical stimuli to observe the subjective and electrophysiological responses associated with valence, arousal, pleasure, and familiarity in 25 nonmusician Mexican adults (10 females, 15 males; mean age = 37).
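For context, the frontal alpha asymmetry index mentioned above is conventionally computed as the difference in log-transformed alpha power between right and left frontal electrodes (typically F4 and F3). A minimal sketch with hypothetical power values:

    import numpy as np

    def frontal_alpha_asymmetry(alpha_power_f4, alpha_power_f3):
        # ln(right alpha power) - ln(left alpha power); positive values
        # indicate relatively greater right-frontal alpha power.
        return np.log(alpha_power_f4) - np.log(alpha_power_f3)

    print(frontal_alpha_asymmetry(4.2, 3.5))  # e.g., during a tonal excerpt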

