In a traditional convolutional neural network, pooling layers generally use average pooling, a non-overlapping method. For hyperspectral images with a continuous spectrum, however, this makes the extracted features highly similar, so discriminative features become harder to extract and image detail is easily lost, which seriously degrades classification accuracy. A new overlapping pooling method is therefore proposed: an improved convolutional neural network uses max pooling, avoiding the blurring introduced by averaging, with a stride smaller than the pooling kernel so that adjacent pooling windows overlap. The experiments use the Indian Pines dataset, collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor. Experimental results show that the improved convolutional neural network preserves image detail and achieves high accuracy in remote sensing image classification.
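The key idea above is that overlap comes from choosing the stride smaller than the pooling kernel. As an illustrative sketch (not the authors' implementation; the function name and parameters are hypothetical), a minimal 2D max pooling in plain Python shows how a stride below the kernel size makes adjacent windows share pixels:

```python
def max_pool_2d(image, kernel=2, stride=1):
    """Max pooling over a 2D grid (list of lists).

    When stride < kernel, neighbouring pooling windows overlap,
    which is the overlapping scheme described in the abstract.
    """
    h, w = len(image), len(image[0])
    out = []
    for i in range(0, h - kernel + 1, stride):
        row = []
        for j in range(0, w - kernel + 1, stride):
            # Maximum over the kernel x kernel window anchored at (i, j).
            row.append(max(image[i + di][j + dj]
                           for di in range(kernel)
                           for dj in range(kernel)))
        out.append(row)
    return out


img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]

# kernel=2, stride=1: each 2x2 window overlaps its neighbour by one column/row.
print(max_pool_2d(img, kernel=2, stride=1))  # [[5, 6], [8, 9]]
```

With stride equal to the kernel size the same function reproduces conventional non-overlapping pooling, so the overlap is controlled entirely by that one parameter.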


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6210679
DOI: http://dx.doi.org/10.3390/s18103587


Similar Publications

In Vivo Confocal Microscopy for Automated Detection of Meibomian Gland Dysfunction: A Study Based on Deep Convolutional Neural Networks.

J Imaging Inform Med

January 2025

Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, National Clinical Research Center for Eye Disease, Shanghai, 200080, China.

The objectives of this study are to construct a deep convolutional neural network (DCNN) model to diagnose and classify meibomian gland dysfunction (MGD) based on the in vivo confocal microscope (IVCM) images and to evaluate the performance of the DCNN model and its auxiliary significance for clinical diagnosis and treatment. We extracted 6643 IVCM images from the three hospitals' IVCM database as the training set for the DCNN model and 1661 IVCM images from the other two hospitals' IVCM database as the test set to examine the performance of the model. Construction of the DCNN model was performed using DenseNet-169.


Systematic Review of Hybrid Vision Transformer Architectures for Radiological Image Analysis.

J Imaging Inform Med

January 2025

School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, AZ, USA.

Vision transformers (ViTs) and convolutional neural networks (CNNs) each possess distinct strengths in medical imaging: ViTs excel at capturing long-range dependencies through self-attention, while CNNs are adept at extracting local features via spatial convolution filters. While ViTs may struggle to capture detailed local spatial information, critical for tasks like anomaly detection in medical imaging, shallow CNNs often fail to effectively abstract global context. This study aims to explore and evaluate hybrid architectures that integrate ViT and CNN to leverage their complementary strengths for enhanced performance in medical vision tasks such as segmentation, classification, reconstruction, and prediction.


Multi-class Classification of Retinal Eye Diseases from Ophthalmoscopy Images Using Transfer Learning-Based Vision Transformers.

J Imaging Inform Med

January 2025

College of Engineering, Department of Computer Engineering, Koç University, Rumelifeneri Yolu, 34450, Sarıyer, Istanbul, Turkey.

This study explores a transfer learning approach with vision transformers (ViTs) and convolutional neural networks (CNNs) for classifying retinal diseases, specifically diabetic retinopathy, glaucoma, and cataracts, from ophthalmoscopy images. Using a balanced subset of 4217 images and ophthalmology-specific pretrained ViT backbones, this method demonstrates significant improvements in classification accuracy, offering potential for broader applications in medical imaging. Glaucoma, diabetic retinopathy, and cataracts are common eye diseases that can cause vision loss if not treated.


The problem of ground-level ozone (O3) pollution has become a global environmental challenge with far-reaching impacts on public health and ecosystems. Effective control of ozone pollution still faces complex challenges from factors such as complex precursor interactions, variable meteorological conditions, and atmospheric chemical processes. To address this problem, a convolutional neural network (CNN) model combining the improved particle swarm optimization (IPSO) algorithm and SHAP analysis, called SHAP-IPSO-CNN, is developed in this study, aiming to reveal the key factors affecting ground-level ozone pollution and their interaction mechanisms.


Multi scale multi attention network for blood vessel segmentation in fundus images.

Sci Rep

January 2025

Department of Data Science and Artificial Intelligence, Sunway University, 47500, Petaling Jaya, Selangor Darul Ehsan, Malaysia.

Precise segmentation of retinal vasculature is crucial for the early detection, diagnosis, and treatment of vision-threatening ailments. However, this task is challenging due to limited contextual information, variations in vessel thicknesses, the complexity of vessel structures, and the potential for confusion with lesions. In this paper, we introduce a novel approach, the MSMA Net model, which overcomes these challenges by replacing traditional convolution blocks and skip connections with an improved multi-scale squeeze and excitation block (MSSE Block) and Bottleneck residual paths (B-Res paths) with spatial attention blocks (SAB).

