This study focuses on how music performances are perceived when contextual factors, such as room acoustics and the instrument, change. We propose to distinguish the concept of "performance" from that of "interpretation", which expresses the artistic intention. To assess this distinction, we carried out an experimental evaluation in which 91 subjects listened to various audio recordings created by resynthesizing MIDI data obtained through Automatic Music Transcription (AMT) systems and a sensorized acoustic piano. During resynthesis, we simulated different contexts and asked listeners to rate how much the interpretation changes when the context changes. Results show that: (1) the MIDI format alone cannot completely capture the artistic intention of a music performance; (2) the usual objective evaluation measures based on MIDI data correlate poorly with the average subjective evaluation. To bridge this gap, we propose a novel measure that is meaningfully correlated with the outcome of the tests. In addition, we investigate multimodal machine learning by providing a new score-informed AMT method, and we propose an approximation algorithm for the p-dispersion problem.
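As an illustration of the kind of "usual objective evaluation measure based on MIDI data" the abstract refers to (this sketch is not the paper's novel measure), a standard note-level onset F-measure compares reference and transcribed notes, counting a match when pitch is equal and the onset deviation falls within a tolerance; 50 ms is a common default in the AMT literature:

```python
# Hedged sketch: a standard note-level onset F-measure between reference and
# estimated MIDI notes. Notes are (onset_seconds, midi_pitch) pairs; a hit
# requires equal pitch and an onset within `tol` seconds of the reference.

def onset_f_measure(reference, estimate, tol=0.05):
    matched = set()          # indices of estimated notes already used
    hits = 0
    for r_onset, r_pitch in reference:
        for j, (e_onset, e_pitch) in enumerate(estimate):
            if j in matched:
                continue
            if e_pitch == r_pitch and abs(e_onset - r_onset) <= tol:
                matched.add(j)
                hits += 1
                break
    precision = hits / len(estimate) if estimate else 0.0
    recall = hits / len(reference) if reference else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

ref = [(0.00, 60), (0.50, 64), (1.00, 67)]
est = [(0.02, 60), (0.48, 64), (1.20, 67)]  # last onset is 200 ms late
print(round(onset_f_measure(ref, est), 3))  # two of three notes match
```

Measures of this type score only symbolic note accuracy, which is consistent with the finding that they miss perceptual aspects of interpretation that listeners judge from the rendered audio.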
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9007253
DOI: http://dx.doi.org/10.1007/s11042-022-12476-0
JMIR Rehabil Assist Technol
December 2024
Centre de recherche interdisciplinaire en réadaptation du Montréal métropolitain (CRIR) - Institut universitaire sur la réadaptation en déficience physique de Montréal (IURDPM) du Centre intégré universitaire de santé et de services sociaux du Centre-Sud-de-l'Île-de-Montréal (CCSMTL), Université de Montréal, Institut de Réadaptation Gingras Lindsay de Montréal, 6300 avenue de Darlington, Montréal, QC, H3S 2J4, Canada, 1 514-343-6111.
Background: Stationary bikes are used in numerous rehabilitation settings, with most offering limited functionalities and types of training. Smart technologies, such as artificial intelligence and robotics, bring new possibilities to achieve rehabilitation goals. However, it is important that these technologies meet the needs of users in order to improve their adoption in current practice.
Introduction: Older adults are a heterogeneous group, and their care experience preferences are likely to be diverse and individualized. Thus, the aim of this study was to identify categories of older adults' care experience preferences and to examine similarities and differences across different age groups.
Methods: The initial categories of older adults' care experience preferences were identified through a qualitative review of narrative text (n = 3134) in the ADVault data set.
Brain Struct Funct
December 2024
Brain and Language Lab, Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria.
Why is it that some people seem to learn new languages faster and more easily than others? The present study investigates the neuroanatomical basis of language learning aptitude, with a focus on the multiplication pattern of the transverse temporal gyrus/gyri (TTG/TTGs) of the auditory cortex. The size and multiplication pattern of the first TTG (i.e.
Front Robot AI
November 2024
Mechanical Engineering and Robotics Course, Faculty of Advanced Science and Technology, Ryukoku University, Otsu, Japan.
Recently, research on human-robot communication has attracted many researchers. We believe that music is an important channel between humans and robots, because it can convey emotional information. In this research, we focus on violin performance by a robot.
R Soc Open Sci
November 2024
Centre for Music and Science, University of Cambridge, 11 West Rd, Cambridge, UK.
Great musicians have a unique style and, with training, humans can learn to distinguish between these styles. What differences between performers enable us to make such judgements? We investigate this question by building a machine learning model that predicts performer identity from data extracted automatically from an audio recording. Such a model could be trained on all kinds of musical features, but here we focus specifically on rhythm, which (unlike harmony, melody and timbre) is relevant for any musical instrument.
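One simple rhythm-only feature family such a model could draw on (the function names below are illustrative, not taken from the paper) is the sequence of inter-onset intervals normalized by their mean, which removes overall tempo and keeps the performer's relative timing pattern:

```python
# Hedged sketch, assuming note-onset times (in seconds) have already been
# extracted from the audio recording.

def inter_onset_intervals(onsets):
    """Differences between consecutive note-onset times."""
    return [b - a for a, b in zip(onsets, onsets[1:])]

def tempo_normalized_iois(onsets):
    """IOIs divided by their mean, so a uniformly faster or slower
    performance of the same rhythm yields the same feature vector."""
    iois = inter_onset_intervals(onsets)
    mean = sum(iois) / len(iois)
    return [ioi / mean for ioi in iois]

# Two performances of the same rhythm at different tempi produce
# (numerically) the same normalized feature vector:
slow = [0.0, 0.6, 0.9, 1.8]
fast = [0.0, 0.4, 0.6, 1.2]
print(tempo_normalized_iois(slow))
print(tempo_normalized_iois(fast))
```

Because the normalization cancels global tempo, what remains is the expressive micro-timing that differs between performers, which is the kind of signal a rhythm-based identity classifier would learn from.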