The use of drones has recently gained popularity in a diverse range of applications, such as aerial photography, agriculture, search and rescue operations, the entertainment industry, and more. However, misuse of drone technology can potentially lead to military threats, terrorist acts, and privacy and safety breaches. This emphasizes the need for effective and fast remote detection of potentially threatening drones. In this study, we propose a novel approach for automatic drone detection that uses both radio frequency (RF) communication signals and acoustic signals derived from UAV rotor sounds. In particular, we propose the use of classical and deep machine-learning techniques and the fusion of RF and acoustic features for efficient and accurate drone classification. Distinct types of ML-based classifiers have been examined, including CNN- and RNN-based networks and the classical SVM method. The proposed approach has been evaluated with both frequency and audio features on common drone datasets, demonstrating better accuracy than existing state-of-the-art methods, especially in low-SNR scenarios. The results presented in this paper show a classification accuracy of approximately 91% at an SNR of -10 dB using the LSTM network and fused features.
Download full-text PDF:
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11054550
DOI: http://dx.doi.org/10.3390/s24082427
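The abstract above pairs fused RF and acoustic features with an LSTM classifier. As a rough illustration only — the paper's actual architecture, feature dimensions, and fusion scheme are not given here, so every name and number below is an assumption — concatenation-based fusion feeding a PyTorch LSTM might look like this:

```python
# Minimal sketch (not the authors' code) of late feature fusion feeding an
# LSTM classifier; dimensions and fusion-by-concatenation are assumptions.
import torch
import torch.nn as nn

class RFAcousticLSTM(nn.Module):
    def __init__(self, rf_dim=64, ac_dim=40, hidden=128, n_classes=4):
        super().__init__()
        # Fuse per-frame RF and acoustic features by concatenation.
        self.lstm = nn.LSTM(rf_dim + ac_dim, hidden, num_layers=2,
                            batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, rf_feats, ac_feats):
        # rf_feats: (batch, time, rf_dim); ac_feats: (batch, time, ac_dim)
        fused = torch.cat([rf_feats, ac_feats], dim=-1)
        out, _ = self.lstm(fused)
        return self.head(out[:, -1])  # classify from the final time step

model = RFAcousticLSTM()
logits = model(torch.randn(8, 100, 64), torch.randn(8, 100, 40))
```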
Neural Netw
January 2025
School of Automotive Studies, Tongji University, Shanghai 201804, China.
Integrating visual features has been proven effective for deep learning-based speech quality enhancement, particularly in highly noisy environments. However, these models may suffer from redundant information, resulting in performance deterioration when the signal-to-noise ratio (SNR) is relatively high. Real-world noisy scenarios typically exhibit widely varying noise levels.
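One way to act on this observation — purely a hypothetical sketch, not the model proposed in the article — is to let a small network gate the visual stream based on the audio embedding, so visual features are attenuated when the audio is already clean:

```python
# Hypothetical SNR-aware gate that down-weights visual features when the
# audio embedding suggests a clean signal; not the paper's architecture.
import torch
import torch.nn as nn

class VisualGate(nn.Module):
    def __init__(self, audio_dim=256, visual_dim=256):
        super().__init__()
        # A tiny network predicts a 0..1 gate from the audio embedding,
        # which implicitly encodes how noisy the input is.
        self.gate = nn.Sequential(nn.Linear(audio_dim, 64), nn.ReLU(),
                                  nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, audio_emb, visual_emb):
        g = self.gate(audio_emb)               # (batch, 1)
        return torch.cat([audio_emb, g * visual_emb], dim=-1)

fused = VisualGate()(torch.randn(4, 256), torch.randn(4, 256))
```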
Sensors (Basel)
January 2025
School of Oceanography and Spatial Information, China University of Petroleum East China-Qingdao Campus, Qingdao 266580, China.
Salt marsh vegetation in the Yellow River Delta is essential for the stability of wetland ecosystems. In recent years, salt marsh vegetation has experienced severe degradation, primarily due to invasive species and human activities. Therefore, accurate monitoring of the spatial distribution of salt marsh vegetation types is critical for the ecological protection and restoration of the Yellow River Delta.
Commun Biol
January 2025
Western Institute for Neuroscience, Western University, London, ON, Canada.
Our brain seamlessly integrates distinct sensory information to form a coherent percept. However, the specific brain regions and time courses involved in processing different levels of information during real-world audiovisual events remain underinvestigated. To address this, we curated naturalistic videos and recorded functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) data while participants viewed the videos with accompanying sounds.
JASA Express Lett
January 2025
Speech and Hearing Science Department, University of Illinois at Urbana-Champaign, Champaign, Illinois 61820.
Harmonicity is an organizing principle in the auditory system, facilitating auditory object formation. The goal of the current study is to determine if harmonicity also facilitates binaural fusion. Participants listened to pairs of two-tone harmonic complex tones that were harmonically or inharmonically related to each other.
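For context, a two-tone complex of this kind can be synthesized directly; the sketch below is illustrative only, and the f0, jitter range, duration, and sampling rate are assumptions rather than the study's actual stimulus parameters:

```python
# Generate a two-tone complex that is either harmonic (components at f0 and
# 2*f0) or made inharmonic by jittering the upper component's frequency.
import numpy as np

def two_tone_complex(f0=200.0, harmonic=True, dur=0.5, fs=44100, rng=None):
    rng = rng or np.random.default_rng()
    t = np.arange(int(dur * fs)) / fs
    # Harmonic: exact octave; inharmonic: shift the upper tone by 4-8%.
    f2 = 2 * f0 if harmonic else 2 * f0 * (1 + rng.uniform(0.04, 0.08))
    x = np.sin(2 * np.pi * f0 * t) + np.sin(2 * np.pi * f2 * t)
    return x / np.max(np.abs(x))               # normalize peak amplitude

harm = two_tone_complex(harmonic=True)
inharm = two_tone_complex(harmonic=False)
```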
Ann N Y Acad Sci
January 2025
Hainan Institute, Zhejiang University, Sanya, China.
In this paper, we introduce FUSION-ANN, a novel artificial neural network (ANN) designed for acoustic emission (AE) signal classification. FUSION-ANN comprises four distinct ANN branches, each housing an independent multilayer perceptron. We extract denoised speech-recognition features, such as linear predictive coding (LPC), Mel-frequency cepstral coefficients (MFCC), and gammatone cepstral coefficients (GTCC), to represent AE signals.
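As a rough sketch of this kind of multi-branch design — the branch widths, the fourth branch's input, and fusion by concatenating branch embeddings before a shared classifier are assumptions, not details from the paper:

```python
# Sketch of a four-branch MLP ensemble in the spirit of FUSION-ANN; each
# branch is an independent MLP over one feature set (e.g. LPC, MFCC, GTCC,
# plus an assumed fourth feature), fused by concatenation.
import torch
import torch.nn as nn

def mlp(in_dim, hidden=64, out_dim=32):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim), nn.ReLU())

class FusionANNSketch(nn.Module):
    def __init__(self, dims=(13, 13, 13, 13), n_classes=5):
        super().__init__()
        # One independent MLP branch per feature set.
        self.branches = nn.ModuleList(mlp(d) for d in dims)
        self.head = nn.Linear(32 * len(dims), n_classes)

    def forward(self, feats):                  # feats: list of (batch, dim)
        z = torch.cat([b(f) for b, f in zip(self.branches, feats)], dim=-1)
        return self.head(z)

model = FusionANNSketch()
logits = model([torch.randn(4, 13) for _ in range(4)])
```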