Passive acoustic monitoring (PAM) is emerging as a solution for monitoring species and environmental change over large spatial and temporal scales. However, drawing rigorous conclusions from acoustic recordings is challenging, as there is no consensus on which approaches are best suited to characterizing marine acoustic environments. Here, we describe the application of multiple machine-learning techniques to the analysis of two PAM datasets. We combine pre-trained acoustic classification models (VGGish and the NOAA and Google Humpback Whale Detector), dimensionality reduction (UMAP), and balanced random forest (RF) algorithms to demonstrate how machine-learned acoustic features capture different aspects of the marine acoustic environment. The UMAP dimensions derived from VGGish acoustic features separated marine mammal vocalizations well by species and location. RF models trained on the acoustic features performed well for labeled sounds in the 8 kHz range; however, low- and high-frequency sounds could not be classified with this approach. The workflow presented here shows how acoustic feature extraction, visualization, and analysis make it possible to link PAM recordings to ecologically relevant information at multiple scales, from large-scale environmental change (e.g., changes in wind speed) to the identification of marine mammal species.
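As a rough illustration of how such a pipeline can be wired together, the Python sketch below chains VGGish feature extraction, UMAP dimensionality reduction, and a balanced random forest classifier. It assumes the TensorFlow Hub release of VGGish plus the soundfile, umap-learn, and imbalanced-learn packages; the file name and labels are placeholders, the NOAA and Google Humpback Whale Detector stage is omitted, and this is not the authors' published code.

```python
# Minimal sketch of the workflow described above (assumptions, not the
# authors' code): TensorFlow Hub VGGish, umap-learn, imbalanced-learn.
import numpy as np
import soundfile as sf
import tensorflow_hub as hub
import umap
from imblearn.ensemble import BalancedRandomForestClassifier

# 1. Extract 128-dimensional VGGish embeddings (one vector per ~0.96 s frame).
vggish = hub.load("https://tfhub.dev/google/vggish/1")
waveform, sr = sf.read("recording.wav", dtype="float32")  # VGGish expects 16 kHz mono
embeddings = vggish(waveform).numpy()                      # shape: (n_frames, 128)

# 2. Reduce the embeddings to two UMAP dimensions for visualization and clustering.
reducer = umap.UMAP(n_components=2, random_state=42)
embedding_2d = reducer.fit_transform(embeddings)

# 3. Train a balanced random forest on labeled frames; the labels below are
#    placeholders standing in for, e.g., species or site annotations.
labels = np.random.choice(["humpback", "vessel", "ambient"], size=len(embeddings))
clf = BalancedRandomForestClassifier(n_estimators=500, random_state=42)
clf.fit(embeddings, labels)
print(clf.predict(embeddings[:5]))
```

Training the classifier on the full 128-dimensional embeddings while reserving the two UMAP dimensions for visualization mirrors the division of roles described in the abstract, where UMAP is used to explore and separate vocalizations and the RF models are trained on the acoustic features themselves.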

Source

PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10880131
DOI: http://dx.doi.org/10.1002/ece3.10951

Publication Analysis

Top Keywords

acoustic features (16), marine mammal (12), acoustic (11), passive acoustic (8), acoustic monitoring (8), marine acoustic (8), marine (6), features tool (4), tool visualize (4), visualize explore (4)

Similar Publications

Music can evoke powerful emotions in listeners. However, the role that instrumental music (music without any vocal part) plays in conveying extra-musical meaning, above and beyond emotions, is still a debated question. We conducted a study wherein participants (N = 121) listened to twenty 15-second-long excerpts of polyphonic instrumental soundtrack music and reported (i) perceived emotions […]

Background: Late-life depression (LLD) is a heterogeneous disorder related to cognitive decline and neurodegenerative processes, raising a need for the development of novel biomarkers. We sought to provide preliminary evidence for acoustic speech signatures sensitive to LLD and their relationship to depressive dimensions.

Methods: Forty patients (24 female, aged 65-82 years) were assessed with the Geriatric Depression Scale (GDS).

Neurovisual Training With Acoustic Feedback: An Innovative Approach for Nystagmus Rehabilitation.

Arch Rehabil Res Clin Transl

December 2024

Section of Neurorehabilitation, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy.

Nystagmus has various clinical manifestations, including downbeat, upbeat, and torsional types, each associated with distinct neurologic features. Current rehabilitative interventions focusing on fixation training and optical correction often fail to achieve complete resolution. When nystagmus coexists with fragile X-associated tremor/ataxia syndrome (FXTAS), functional impairments worsen, particularly affecting balance.

The development of deep convolutional generative adversarial network to synthesize odontocetes' clicks.

J Acoust Soc Am

January 2025

Key Laboratory of Underwater Acoustic Communication and Marine Information Technology of the Ministry of Education, College of Ocean and Earth Sciences, Xiamen University, Xiamen 361005, China.

Odontocetes are capable of dynamically changing their echolocation clicks to efficiently detect targets, and learning their clicking strategy can facilitate the design of man-made detection signals. In this study, we developed deep convolutional generative adversarial networks guided by an acoustic feature vector (AF-DCGANs) to synthesize narrowband clicks of the finless porpoise (Neophocaena phocaenoides sunameri) and broadband clicks of the bottlenose dolphin (Tursiops truncatus). The average short-time objective intelligibility (STOI), spectral correlation coefficient (Spe-CORR), waveform correlation coefficient (Wave-CORR), and dynamic time warping distance (DTW-Distance) of the synthetic clicks were […]
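For readers unfamiliar with the approach, the PyTorch sketch below shows one simple way a 1-D DCGAN generator could be conditioned on an acoustic feature vector, by concatenating it with the latent noise. The layer sizes, the 8-dimensional feature vector, and the 256-sample output are illustrative assumptions, not the AF-DCGAN architecture reported in that study.

```python
# Hypothetical 1-D conditional DCGAN generator for click waveforms.
# Dimensions and conditioning scheme are assumptions for illustration only.
import torch
import torch.nn as nn

class ClickGenerator(nn.Module):
    def __init__(self, noise_dim=100, feat_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + feat_dim, 64 * 16),
            nn.Unflatten(1, (64, 16)),                           # -> (batch, 64, 16)
            nn.ConvTranspose1d(64, 32, 4, stride=2, padding=1),  # length 16 -> 32
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.ConvTranspose1d(32, 16, 4, stride=2, padding=1),  # length 32 -> 64
            nn.BatchNorm1d(16),
            nn.ReLU(),
            nn.ConvTranspose1d(16, 1, 4, stride=4),              # length 64 -> 256
            nn.Tanh(),                                           # waveform in [-1, 1]
        )

    def forward(self, z, acoustic_features):
        # Concatenate latent noise with the conditioning acoustic feature vector.
        return self.net(torch.cat([z, acoustic_features], dim=1))

# Example: synthesize a batch of 4 click waveforms, 256 samples each.
gen = ClickGenerator()
z = torch.randn(4, 100)
af = torch.randn(4, 8)        # placeholder acoustic feature vectors
clicks = gen(z, af)           # shape: (4, 1, 256)
```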

Background: The two most commonly used methods to identify frailty are the frailty phenotype and the frailty index. However, both methods have limitations in clinical application. In addition, methods for measuring frailty have not yet been standardized.
