Music can have a positive effect on runners' performance and motivation. However, practical implementations of music intervention during exercise are largely absent from the literature. This paper therefore designs a playback sequence system for joggers that considers both music emotion and physiological signals. The playback sequence is produced by a music selection module that combines artificial intelligence techniques with physiological data and the emotion of the music. To allow the system to operate for long periods, the model and the music selection module are refined to reduce energy consumption. The proposed model requires fewer FLOPs and parameters by using log-scaled Mel-spectrograms as input features. Accuracy, computational complexity, trainable parameters, and inference time are evaluated on the Bi-modal, 4Q emotion, and Soundtrack datasets. The experimental results show that the proposed model outperforms that of Sarkar et al. and achieves competitive performance on the Bi-modal (84.91%), 4Q emotion (92.04%), and Soundtrack (87.24%) datasets. More specifically, the proposed model reduces computational complexity and inference time while maintaining classification accuracy compared with other models. Moreover, the trained model is small, so it can be deployed on mobile phones and other devices with limited computing resources. The overall playback sequence system is designed around the relationship between music emotion and the user's physiological state during exercise, and it can be adopted directly during exercise to improve exercise efficiency.
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8839467 | PMC |
| http://dx.doi.org/10.3390/s22030777 | DOI Listing |
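As a rough illustration of the feature pipeline named in the abstract above, the sketch below computes a log-scaled Mel-spectrogram with librosa. The excerpt does not specify the sampling rate, FFT size, hop length, number of Mel bands, or file names, so those values are placeholder assumptions rather than the authors' actual configuration.

```python
# Minimal sketch: log-scaled Mel-spectrogram features for a music-emotion
# classifier, as described in the abstract above. All parameter values are
# assumptions; the paper's actual configuration is not given in this excerpt.
import librosa
import numpy as np


def log_mel_spectrogram(path, sr=22050, n_fft=2048, hop_length=512, n_mels=128):
    """Load an audio file and return a log-scaled (dB) Mel-spectrogram."""
    y, sr = librosa.load(path, sr=sr, mono=True)        # waveform
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels
    )                                                    # power Mel-spectrogram
    return librosa.power_to_db(mel, ref=np.max)          # logarithmic (dB) scale


if __name__ == "__main__":
    # "clip.wav" is a placeholder path, not a file from the study.
    features = log_mel_spectrogram("clip.wav")
    print(features.shape)  # (n_mels, n_frames) matrix fed to the classifier
```

Compact 2-D features of this kind are what allow a small classification network to keep FLOPs and parameter counts low enough for the on-device, low-energy use the abstract targets.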
PeerJ, January 2025
Department of Infectious Diseases and Public Health, Jockey Club College of Veterinary Medicine and Life Sciences, City University of Hong Kong, Hong Kong, Hong Kong SAR, China.
Recognition plays a key role in the social lives of gregarious species, enabling animals to distinguish among social partners and tailor their behaviour accordingly. As domesticated animals regularly interact with humans, as well as members of their own species, we might expect mechanisms used to discriminate between conspecifics to also apply to humans. Given that goats can combine visual and vocal cues to recognise one another, we investigated whether this cross-modal recognition extends to discriminating among familiar humans.
Laryngoscope, December 2024
Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, Ohio, U.S.A.
Objective: This pilot study was designed to test the tolerability of a lower scope position and the feasibility of a custom-designed MATLAB graphical user interface (GUI) for analyzing playback review of laryngeal high-speed videoendoscopy (laryngeal HSV) during healthy volitional dry swallows. We hypothesized that this method would provide greater time resolution for glottic closure events than standard recordings (30 frames per second, fps) and would enable measurement of the timing, sequence, and duration of laryngeal movements during swallowing that are not otherwise visualized.
Methods: A total of 14 healthy adults (4 male, 22-80 years) participated.
Elife, July 2024
Department of Anatomy and Neurobiology and Hearing Research Group, Northeast Ohio Medical University, Rootstown, United States.
The basolateral amygdala (BLA), a brain center of emotional expression, contributes to acoustic communication by first interpreting the meaning of social sounds in the context of the listener's internal state, then organizing the appropriate behavioral responses. We propose that modulatory neurochemicals such as acetylcholine (ACh) and dopamine (DA) provide internal-state signals to the BLA while an animal listens to social vocalizations. We tested this in a vocal playback experiment using highly affective vocal sequences associated with either mating or restraint; we then sampled fluids within the BLA, analyzed them for a broad range of neurochemicals, and observed the behavioral responses of adult male and female mice.
Anim Cogn, May 2024
Department of Biology, University of Naples Federico II, Naples, 80126, Italy.
This study investigates the musical perception skills of dogs through playback experiments. Dogs were trained to distinguish between two different target locations based on a sequence of four ascending or descending notes. A total of 16 dogs of different breeds, ages, and sexes, all with at least basic training, were recruited for the study.
J Neurophysiol, May 2024
Institute of Biology, Leiden University, Leiden, The Netherlands.
Rare disruptions of the transcription factor FOXP1 are implicated in a human neurodevelopmental disorder characterized by autism and/or intellectual disability with prominent problems in speech and language abilities. Avian orthologues of this transcription factor are evolutionarily conserved and highly expressed in specific regions of songbird brains, including areas associated with vocal production learning and auditory perception. Here, we investigated possible contributions of FoxP1 to song discrimination and auditory perception in juvenile and adult female zebra finches.