Classification and Recognition Method of Non-Cooperative Objects Based on Deep Learning.

Sensors (Basel)

National Key Laboratory of Science and Technology on Tunable Laser, Harbin Institute of Technology, Harbin 150080, China.

Published: January 2024

Article Abstract

Accurately classifying and identifying non-cooperative targets is paramount for modern space missions. This paper proposes an efficient method for classifying and recognizing non-cooperative targets using deep learning, based on the principles of the micro-Doppler effect and laser coherence detection. Theoretical simulations and experimental verification demonstrate that classification accuracy across different targets can reach 100% after just one round of training, and that recognition accuracy across different attitude angles stabilizes at 100% after 10 rounds of training.
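To make the pipeline concrete, below is a minimal, purely illustrative sketch of training a small CNN on micro-Doppler spectrograms. The signal model, rotation rates, sampling rate, network shape, and class count are assumptions for demonstration only, not the authors' published implementation.

```python
# Purely illustrative sketch (assumed signal model and network; not the paper's code):
# classify simulated micro-Doppler spectrograms with a small CNN.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import stft

FS = 10_000       # assumed sampling rate of the detected signal (Hz)
CLASSES = 3       # assumed number of target classes

def micro_doppler_signal(rot_hz, n=4096, fs=FS):
    """Toy echo whose phase is modulated by a rotating scatterer (micro-Doppler)."""
    t = np.arange(n) / fs
    phase = 50 * np.sin(2 * np.pi * rot_hz * t)     # sinusoidal Doppler modulation
    return np.cos(2 * np.pi * 1000 * t + phase) + 0.1 * np.random.randn(n)

def spectrogram(x):
    """Normalized magnitude of the short-time Fourier transform."""
    _, _, z = stft(x, fs=FS, nperseg=128)
    mag = np.abs(z).astype(np.float32)
    return (mag - mag.mean()) / (mag.std() + 1e-8)

class SmallCNN(nn.Module):
    def __init__(self, n_classes=CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(16 * 4 * 4, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Tiny synthetic training set: each class gets a distinct (assumed) rotation rate.
rates = [5.0, 15.0, 30.0]
X = np.stack([spectrogram(micro_doppler_signal(rates[c]))
              for c in range(CLASSES) for _ in range(16)])
X = torch.from_numpy(X).unsqueeze(1)                          # (N, 1, freq, time)
y = torch.from_numpy(np.repeat(np.arange(CLASSES), 16)).long()

model = SmallCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(10):                                       # "rounds of training"
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print("final training loss:", float(loss))
```

The spectrogram step mirrors the underlying idea of the abstract: rotation-induced phase modulation (the micro-Doppler effect) produces class-specific time-frequency signatures that a convolutional network can separate.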

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10818946
DOI: http://dx.doi.org/10.3390/s24020583

Publication Analysis

Top Keywords

deep learning (8)
non-cooperative targets (8)
accuracy target (8)
classification recognition (4)
recognition method (4)
method non-cooperative (4)
non-cooperative objects (4)
objects based (4)
based deep (4)
learning accurately (4)

Similar Publications

Purpose: To develop an artificial intelligence (AI) algorithm for automated measurements of spinopelvic parameters on lateral radiographs and compare its performance to multiple experienced radiologists and surgeons.

Methods: On lateral full-spine radiographs of 295 consecutive patients, a two-stage region-based convolutional neural network (R-CNN) was trained to detect anatomical landmarks and calculate thoracic kyphosis (TK), lumbar lordosis (LL), sacral slope (SS), and sagittal vertical axis (SVA). Performance was evaluated on 65 radiographs not used for training, each measured independently by six readers (three radiologists, three surgeons), with the median per measurement taken as the reference standard.
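For illustration only: once the landmarks have been detected, the sagittal parameters reduce to simple plane geometry. The sketch below computes sacral slope (SS) and sagittal vertical axis (SVA) from hypothetical landmark coordinates; the landmark names, pixel values, and pixel spacing are assumptions, not part of the study.

```python
# Hypothetical sketch: deriving two spinopelvic parameters from detected landmarks.
# Coordinates are in image space (x increases toward anterior, y increases downward).
# Landmark names and example values are illustrative, not taken from the study.
import math

def sacral_slope(s1_anterior, s1_posterior):
    """Angle of the S1 superior endplate relative to the horizontal, in degrees."""
    dx = s1_posterior[0] - s1_anterior[0]
    dy = s1_posterior[1] - s1_anterior[1]
    return abs(math.degrees(math.atan2(dy, dx)))

def sagittal_vertical_axis(c7_centroid, s1_posterior_corner, mm_per_px=0.2):
    """Horizontal offset of the C7 plumb line from the posterosuperior corner of S1, in mm."""
    return (c7_centroid[0] - s1_posterior_corner[0]) * mm_per_px

# Example with made-up pixel coordinates (x, y):
s1_ant, s1_post = (310.0, 820.0), (370.0, 790.0)
c7 = (405.0, 180.0)
print(f"SS  = {sacral_slope(s1_ant, s1_post):.1f} deg")
print(f"SVA = {sagittal_vertical_axis(c7, s1_post):.1f} mm")
```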


A multicenter study of neurofibromatosis type 1 utilizing deep learning for whole body tumor identification.

NPJ Digit Med

January 2025

Neurofibromatosis Type 1 Center and Laboratory for Neurofibromatosis Type 1 Research, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200011, China.

Deep-learning models have shown promise in differentiating between benign and malignant lesions. Previous studies have primarily focused on specific anatomical regions, overlooking tumors that occur throughout the body against highly heterogeneous whole-body backgrounds. Using neurofibromatosis type 1 (NF1) as an example, this study developed highly accurate MRI-based deep-learning models for early automated screening of malignant peripheral nerve sheath tumors (MPNSTs) against a complex whole-body background.


We aimed to build a robust classifier for the MGMT methylation status of glioblastoma in multiparametric MRI, focusing on multi-habitat deep image descriptors as the basis of the approach. A subset of the BRATS 2021 MGMT methylation dataset containing both MGMT class labels and segmentation masks was used.


Exploring the potential of advanced artificial intelligence technology in predicting microsatellite instability (MSI) and Ki-67 expression of endometrial cancer (EC) is highly significant. This study aimed to develop a novel hybrid radiomics approach integrating multiparametric magnetic resonance imaging (MRI), deep learning, and multichannel image analysis for predicting MSI and Ki-67 status. A retrospective study included 156 EC patients who were subsequently categorized into MSI and Ki-67 groups.


To address the limitations of the flipped classroom in personalized teaching and in improving interaction, this paper designs a new model of the flipped classroom for colleges and universities based on Virtual Reality (VR) combined with the Contrastive Language-Image Pre-Training (CLIP) algorithm. Through cross-modal data fusion, the model tightly couples students' operation behavior with the teaching content and improves the teaching effect through an intelligent feedback mechanism. The test data show that the similarity between video and image modes reaches 0.

