Objectives: To determine the effect of maximally sustained phonation on efficacy of Vocal Function Exercises as measured by percent of maximum phonation time goal attained. The hypothesis was that maximally sustained phonation would result in greater improvements in percent of maximum phonation time goal attained.
Study Design: Randomized controlled trial.
Methods: A convenience sample of individuals with normal voice was recruited in a university academic clinic setting. Of 34 participants who volunteered for the study, 31 completed the baseline assessment and 23 completed all study procedures. Participants were randomized to complete Vocal Function Exercises (traditional group, TG), modified Vocal Function Exercises with a reduced requirement for maximally sustained phonation (midpoint group, MG), or modified Vocal Function Exercises with the requirement for maximally sustained phonation removed (baseline group, BG). The primary outcome measure was the percent of maximum phonation time goal attained during Vocal Function Exercises.
Results: The MG (p = 0.008) and TG (p = 0.001) groups significantly improved the percent of maximum phonation time goal attained after six weeks of exercise, while the BG group (p = 0.0202) did not (α = 0.0125). The difference among groups was not statistically significant (p = 0.67, α = 0.0125). Hedges' g effect sizes of 0.29 (-0.66, 1.25) and 0.51 (-0.57, 1.58) were obtained comparing the MG and TG groups and the BG and TG groups, respectively.
Conclusions: Greater requirements for maximally sustained phonation improved the efficacy of Vocal Function Exercises in enhancing normal voice as measured by the percent of maximum phonation time goal attained. Maximally sustained phonation may be modified to some extent while preserving the efficacy of Vocal Function Exercises; however, complete elimination of maximally sustained phonation may attenuate improvement. Additional research in a clinical population is warranted.
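The Results above report between-group differences as Hedges' g with confidence intervals. As a point of reference, here is a minimal sketch of how Hedges' g and a normal-approximation 95% CI can be computed from group summary statistics; the function and all example values are hypothetical and are not data from this study.

```python
import numpy as np

def hedges_g(mean1, mean2, sd1, sd2, n1, n2):
    """Hedges' g: Cohen's d with a small-sample bias correction (illustrative)."""
    # Pooled standard deviation
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp
    # Small-sample correction factor (approximation to the exact gamma form)
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    g = j * d
    # Approximate standard error and 95% normal-approximation CI
    se = np.sqrt((n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2)))
    return g, (g - 1.96 * se, g + 1.96 * se)

# Hypothetical group summaries (percent of MPT goal attained), not study data
g, ci = hedges_g(mean1=85.0, mean2=78.0, sd1=12.0, sd2=14.0, n1=8, n2=8)
print(f"Hedges' g = {g:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```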
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10175512 | PMC
http://dx.doi.org/10.1016/j.jvoice.2022.10.012 | DOI Listing
Biol Sex Differ
January 2025
Department of Psychology, Memorial University of Newfoundland and Labrador, St. John's, NL, Canada.
As the earliest measure of social communication in rodents, ultrasonic vocalizations (USVs) in response to maternal separation are critical in preclinical research on neurodevelopmental disorders (NDDs). While sex differences in both USV production and behavioral outcomes have been reported, many studies overlook sex as a biological variable in preclinical NDD models. We aimed to evaluate sex differences in USV call parameters and determine whether USVs are differentially affected by sex in the preclinical maternal immune activation (MIA) model.
Sensors (Basel)
January 2025
School of Computer Science and Informatics, Cardiff University, Cardiff CF24 3AA, UK.
Elephant sound identification is crucial in wildlife conservation and ecological research. Identifying elephant vocalizations provides insights into their behavior, social dynamics, and emotional expression, supporting conservation efforts. This study addresses elephant sound classification using raw audio processing.
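The abstract does not specify the classification pipeline, so the following is only an illustrative sketch of one common approach to classifying raw audio with a small 1D convolutional network; the architecture, input length, and class labels are assumptions rather than details from the paper.

```python
import torch
import torch.nn as nn

class RawAudioCNN(nn.Module):
    """Small 1D CNN operating directly on raw waveforms (illustrative only)."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=8), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=32, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),            # collapse the time axis
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                        # x: (batch, 1, samples)
        z = self.features(x).squeeze(-1)         # (batch, 32)
        return self.classifier(z)

# Hypothetical one-second clip at 16 kHz, e.g. "elephant call" vs "other"
model = RawAudioCNN(n_classes=2)
waveform = torch.randn(1, 1, 16000)
logits = model(waveform)
print(logits.shape)  # torch.Size([1, 2])
```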
Life (Basel)
December 2024
Neuromodulation Center and Center for Clinical Research Learning, Spaulding Rehabilitation Hospital, Massachusetts General Hospital, Harvard Medical School, 1575 Cambridge Street, Cambridge, MA 02115, USA.
Background: This study aimed to explore potential associations between voice metrics of patients with Parkinson's disease (PD) and their motor symptoms.
Methods: Motor and vocal data, including the Unified Parkinson's Disease Rating Scale part III (UPDRS-III), harmonics-to-noise ratio (HNR), jitter, shimmer, and smoothed cepstral peak prominence (CPPS), were analyzed through exploratory correlations followed by univariate linear regression analyses. We employed these four voice metrics as independent variables and the total score and sub-scores of the UPDRS-III as dependent variables.
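As described, each voice metric enters a separate univariate linear regression against the UPDRS-III. A minimal sketch of that analysis pattern using statsmodels is shown below; the column names and data values are hypothetical placeholders, not data from the study.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical dataset: one row per participant (placeholder values only)
df = pd.DataFrame({
    "HNR":       [21.3, 18.7, 24.1, 19.5, 22.8],
    "jitter":    [0.62, 0.95, 0.48, 0.81, 0.55],
    "shimmer":   [3.9, 5.2, 3.1, 4.6, 3.4],
    "CPPS":      [12.4, 10.1, 13.8, 11.0, 12.9],
    "UPDRS_III": [28, 41, 22, 35, 26],
})

# One univariate model per voice metric, UPDRS-III total as the outcome
for metric in ["HNR", "jitter", "shimmer", "CPPS"]:
    X = sm.add_constant(df[[metric]])
    fit = sm.OLS(df["UPDRS_III"], X).fit()
    print(metric, round(fit.params[metric], 3), round(fit.pvalues[metric], 3))
```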
Sci Rep
January 2025
Department of Ethology, Eötvös Loránd University, Budapest, Hungary.
Dogs engage in social interactions with robots, yet whether they perceive them as social agents remains uncertain. In jealousy-evoking contexts, specific behaviours were observed exclusively when dogs' owners interacted with social rather than non-social rivals. Here, we investigated whether a robot elicits jealous behaviour in dogs based on its level of animateness.
PLoS One
January 2025
College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, PR China.
This study develops an innovative method for analyzing and clustering tonal trends in Chinese Yue Opera to identify different vocal styles accurately. Linear interpolation is applied to process the time series data of vocal melodies, addressing inconsistent feature dimensions. The second-order difference method extracts tonal trend features.
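To make those two preprocessing steps concrete, here is a minimal NumPy sketch: linear interpolation resamples each melody's pitch contour to a fixed length so feature dimensions match, and a second-order difference is then taken as a tonal trend feature. The function name, target length, and contour values are assumptions, not taken from the paper.

```python
import numpy as np

def tonal_trend_features(contour, target_len=100):
    """Resample a pitch contour to a fixed length, then take 2nd-order differences."""
    contour = np.asarray(contour, dtype=float)
    # Linear interpolation onto a common grid fixes inconsistent feature dimensions
    old_x = np.linspace(0.0, 1.0, num=len(contour))
    new_x = np.linspace(0.0, 1.0, num=target_len)
    resampled = np.interp(new_x, old_x, contour)
    # Second-order difference captures local curvature, i.e. the tonal trend
    return np.diff(resampled, n=2)

# Two melodies of different lengths (hypothetical pitch values in Hz)
a = tonal_trend_features([220, 233, 247, 262, 247, 233])
b = tonal_trend_features([196, 220, 247, 262, 294, 262, 247, 220, 196])
print(a.shape, b.shape)  # both (98,) after resampling to 100 points
```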