Vocal directivity refers to how directional the sound emerging from a singer's mouth is: whether it is focused into a narrow beam projecting in front of the singer or spread out all around the singer. This study investigates the long-term vocal directivity and acoustic power of professional opera singers and how these vary among subjects, among singing projections, and among vastly different acoustic environments. The vocal sound of eight professional opera singers (six female, two male) was measured in anechoic and reverberant rooms and in a recital hall. Subjects sang in four different ways: (1) paying great attention to intonation; (2) singing as in performance, with all the emotional connection intended by the composer; (3) imagining a large auditorium; and (4) imagining a small theatre. The same song was sung by all singers in all conditions. A head and torso simulator (HATS), radiating sound from its mouth, was used for comparison in all situations. Results show that individual singers have quite consistent long-term average directivity, even across conditions, while directivity varies substantially among singers. Singers are more directional than the standard HATS, which is a physical model of a talking person. The singer's-formant region of the spectrum exhibits greater directivity than the lower-frequency range, and the results indicate that singers control directivity, at least incidentally, across singing conditions as they adjust the spectral emphasis of their voices through their formants.
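The band-limited directivity comparison described above can be sketched numerically: compute a long-term average spectrum for a frontal microphone and for surrounding microphone positions, then take the level difference in a chosen band. This is an illustrative sketch only, not the study's analysis method; the function names, framing parameters, and the 2-4 kHz band (a common approximation of the singer's-formant region) are assumptions.

```python
import numpy as np

def ltas_power(signal, fs, nfft=4096, hop=2048):
    """Long-term average power spectrum: mean of windowed FFT power frames."""
    window = np.hanning(nfft)
    frames = [np.abs(np.fft.rfft(signal[i:i + nfft] * window)) ** 2
              for i in range(0, len(signal) - nfft + 1, hop)]
    return np.fft.rfftfreq(nfft, 1.0 / fs), np.mean(frames, axis=0)

def band_directivity_db(front, surround, fs, band=(2000.0, 4000.0)):
    """Frontal band level minus the mean band level over the surrounding
    microphone positions, in dB; the default band targets the region
    often associated with the singer's formant (assumption)."""
    freqs, front_pow = ltas_power(front, fs)
    surr_pow = np.mean([ltas_power(s, fs)[1] for s in surround], axis=0)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return 10.0 * np.log10(front_pow[mask].sum() / surr_pow[mask].sum())
```

As a sanity check, a 3 kHz tone recorded frontally at twice the amplitude of a surrounding position yields a band directivity of about 6 dB (10*log10(4)).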


Source
http://dx.doi.org/10.1016/j.jvoice.2010.03.001

Publication Analysis

Top Keywords

vocal directivity (12), opera singers (12), singers (9), professional opera (8), directivity (7), sound (5), long-term horizontal (4), vocal (4), horizontal vocal (4), directivity opera (4)

Similar Publications

Communication sound processing in mouse auditory cortex (AC) is lateralized. Both left and right AC are highly specialised and differ in auditory stimulus representation, functional connectivity, and field topography. Previous studies have highlighted intracortical functional circuits that explain hemispheric stimulus preference.


Several studies have demonstrated that the severity of social communication problems, a core symptom of Autism Spectrum Disorder (ASD), is correlated with specific speech characteristics of ASD individuals. This suggests that it may be possible to develop speech analysis algorithms that can quantify ASD symptom severity from speech recordings in a direct and objective manner. Here we demonstrate the utility of a new open-source AI algorithm, ASDSpeech, which can analyze speech recordings of ASD children and reliably quantify their social communication difficulties across multiple developmental timepoints.


Aims: We conducted this research motivated by the incomplete knowledge of the resonance and harmonic-filtering changes produced by articulatory gestures at the supralaryngeal level of the vocal tract. The goal of the study is to evaluate the adaptive changes taking place at the oropharyngeal isthmus during sustained phonation. Methods: We focused on exploring the dynamics of the oropharyngeal pavilion in voice professionals using Cone-Beam Computed Tomography (CBCT).


Effect of exogenous manipulation of glucocorticoid concentrations on meerkat heart rate, behaviour and vocal production.

Horm Behav

January 2025

Department of Evolutionary Biology and Environmental Studies, University of Zurich, Winterthurerstrasse 190, 8057 Zürich, Switzerland; Kalahari Meerkat Project, Kuruman River Reserve, Northern Cape, South Africa; Center for the Interdisciplinary Study of Language Evolution, ISLE, University of Zurich, Switzerland.

Encoding of emotional arousal in vocalisations is commonly observed in the animal kingdom, and provides a rapid means of information transfer about an individual's affective responses to internal and external stimuli. As a result, assessing affective arousal-related variation in the acoustic structure of vocalisations can provide insight into how animals perceive both internal and external stimuli, and how this is, in turn, communicated to con- or heterospecifics. However, the underlying physiological mechanisms driving arousal-related acoustic variation remain unclear.


Intermodulation frequencies reveal common neural assemblies integrating facial and vocal fearful expressions.

Cortex

December 2024

Institute of Research in Psychology (IPSY) & Institute of Neuroscience (IoNS), Louvain Bionics Center, University of Louvain (UCLouvain), Louvain-la-Neuve, Belgium; School of Health Sciences, HES-SO Valais-Wallis, The Sense Innovation and Research Center, Lausanne & Sion, Switzerland.

Effective social communication depends on the integration of emotional expressions coming from the face and the voice. Although there are consistent reports on how seeing and hearing emotion expressions can be automatically integrated, direct signatures of multisensory integration in the human brain remain elusive. Here we implemented a multi-input electroencephalographic (EEG) frequency tagging paradigm to investigate neural populations integrating facial and vocal fearful expressions.
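The frequency-tagging logic above relies on intermodulation components, which appear at sums and differences of integer multiples of the two tagging frequencies only when the tagged inputs are integrated by common neural populations. A minimal sketch of enumerating those components (the example frequencies are hypothetical, not those used in the study):

```python
def intermodulation_freqs(f1, f2, order=3):
    """All |n*f1 - m*f2| and n*f1 + m*f2 combinations with n, m >= 1
    and n + m <= order; these differ from the harmonics of either
    tagging frequency alone, so responses at them index integration."""
    ims = set()
    for n in range(1, order):
        for m in range(1, order):
            if n + m <= order:
                ims.add(round(abs(n * f1 - m * f2), 6))
                ims.add(round(n * f1 + m * f2, 6))
    return sorted(ims)
```

For hypothetical tagging frequencies of 1.2 Hz and 0.8 Hz, the third-order intermodulation set is [0.4, 1.6, 2.0, 2.8, 3.2] Hz, none of which coincides with a harmonic of either input.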

