Vocal music and spoken language both play important roles in human communication, but it is unclear why these two different modes of vocal communication exist. Although similar, speech and song differ in certain design features. One interesting difference is in the pitch intonation contour, which consists of discrete tones in song vs. gliding intonation contours in speech. Here, we investigated whether vocal phrases consisting of discrete pitches (song-like) or gliding pitches (speech-like) are remembered better, conducting three studies implementing auditory same-different tasks at three levels of difficulty. We tested two hypotheses: that discrete pitch contours aid auditory memory, independent of musical experience ("song memory advantage hypothesis"), or that greater everyday experience perceiving and producing speech makes speech intonation easier to remember ("experience advantage hypothesis"). We used closely matched stimuli, controlling for rhythm and timbre, and we included a stimulus intermediate between song-like and speech-like pitch contours (with partially gliding and partially discrete pitches). We also assessed participants' musicality to evaluate experience-dependent effects. We found that song-like vocal phrases are remembered better than speech-like vocal phrases, and that intermediate vocal phrases evoked a similar advantage to song-like vocal phrases. Participants with more musical experience were better at remembering all three types of vocal phrases. The precise roles of absolute and relative pitch perception and the influence of top-down vs. bottom-up processing should be clarified in future studies. However, our results suggest that one potential reason for the emergence of discrete pitch, a feature that characterises music across cultures, might be that it enhances auditory memory.
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7758421
DOI: http://dx.doi.org/10.3389/fpsyg.2020.586723
J Voice
January 2025
Department of Communication Sciences and Disorders, Bowling Green State University, Bowling Green, OH.
Objectives: This study aimed to identify voice instabilities across registration shifts produced by untrained female singers and describe them relative to changes in fundamental frequency, airflow, intensity, inferred adduction, and acoustic spectra.
Study Design: Multisignal descriptive study.
Methods: Five untrained female singers sang up to 30 repetitions of octave scales.
Elife
December 2024
Center for Neural Science, New York University, New York, United States.
In nature, animal vocalizations can provide crucial information about identity, including kinship and hierarchy. However, lab-based vocal behavior is typically studied during brief interactions between animals with no prior social relationship, and under environmental conditions with limited ethological relevance. Here, we address this gap by establishing long-term acoustic recordings from Mongolian gerbil families, a core social group that uses an array of sonic and ultrasonic vocalizations.
J Voice
November 2024
Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts 02215; Department of Otolaryngology-Head and Neck Surgery, Boston University School of Medicine, Boston, Massachusetts 02118; Department of Biomedical Engineering, Boston University, Boston, Massachusetts 02215.
Objective: Creak is an acoustic feature found to discriminate speakers with adductor laryngeal dystonia (AdLD) from typical speakers with outstanding diagnostic accuracy. Yet creak is also used by typical speakers as a phrase-boundary marker. This study aims to compare the prevalence of creak across estimated breath groups in speakers with AdLD and controls to delineate physiological mechanisms underlying creak in AdLD.
Cogn Neurodyn
October 2024
Srishti Special School, Raipur, Chhattisgarh 492001 India.
Neurodevelopmental disorders (NDs) often impair multiple functional aspects of a child's brain. Despite several studies on their neural and speech responses, multimodal research on NDs is extremely rare. The present work examined the electroencephalography (EEG) and speech signals of ND and control children, who performed "Hindi language" vocal tasks (V) of seven different categories, viz.
Folia Phoniatr Logop
October 2024
Department of Computer and Information Sciences, Temple University, Philadelphia, Pennsylvania, USA.
Introduction: Social participation for emerging symbolic communicators on the autism spectrum is often restricted. This is due in part to the time and effort required for both children and partners to use traditional augmentative and alternative communication (AAC) technologies during fast-paced social routines. Innovations in artificial intelligence offer the potential for context-aware AAC technology that provides just-in-time communication options based on linguistic input from partners, minimizing the time and effort needed to use AAC technologies for social participation.