Convolutional neural networks (CNNs) have been widely used to identify diabetic retinopathy in color fundus images. For this application, we propose a novel CNN architecture that embeds a preprocessing layer before the first convolutional layer to improve the performance of the CNN classifier. Two image enhancement techniques, contrast enhancement (CE) and contrast-limited adaptive histogram equalization (CLAHE), were separately embedded in the proposed layer and the results were compared. For the identification of exudates, hemorrhages, and microaneurysms, the proposed framework achieved a total accuracy of 87.6% with the CE layer and 83.9% with the CLAHE layer, whereas the CNN alone, without the preprocessing layer, achieved a total accuracy of 81.4%. The new CNN architecture with the proposed preprocessing layer therefore improved the performance of the network.
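The preprocessing-layer idea can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names are hypothetical, and a simple percentile-based contrast stretch stands in for the paper's contrast-enhancement technique (a CLAHE step would slot into the same place).

```python
import numpy as np

def contrast_stretch(channel, low_pct=1.0, high_pct=99.0):
    # Percentile-based contrast stretching: map the [low, high]
    # intensity range onto [0, 1] and clip the tails.
    lo, hi = np.percentile(channel, [low_pct, high_pct])
    return np.clip((channel - lo) / max(hi - lo, 1e-8), 0.0, 1.0)

def preprocessing_layer(batch, enhance=contrast_stretch):
    # Apply the enhancement channel-wise to every image in the batch,
    # mimicking a fixed (non-trainable) layer placed before the first
    # convolutional layer of the network.
    return np.stack([
        np.stack([enhance(img[..., c]) for c in range(img.shape[-1])], axis=-1)
        for img in batch
    ])

rng = np.random.default_rng(0)
fundus_batch = rng.random((2, 32, 32, 3))   # toy stand-in for fundus images
enhanced = preprocessing_layer(fundus_batch)
```

Because the layer has no trainable parameters, it can be applied to each batch before the first convolutional layer of any off-the-shelf CNN.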


Source: http://dx.doi.org/10.1109/EMBC.2018.8513606


Similar Publications

In Vivo Confocal Microscopy for Automated Detection of Meibomian Gland Dysfunction: A Study Based on Deep Convolutional Neural Networks.

J Imaging Inform Med

January 2025

Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, National Clinical Research Center for Eye Disease, Shanghai, 200080, China.

The objectives of this study are to construct a deep convolutional neural network (DCNN) model to diagnose and classify meibomian gland dysfunction (MGD) from in vivo confocal microscopy (IVCM) images, and to evaluate the model's performance and its value as an aid to clinical diagnosis and treatment. We extracted 6643 IVCM images from three hospitals' IVCM databases as the training set for the DCNN model and 1661 IVCM images from two other hospitals' IVCM databases as the test set to examine the model's performance. The DCNN model was constructed using DenseNet-169.


Systematic Review of Hybrid Vision Transformer Architectures for Radiological Image Analysis.

J Imaging Inform Med

January 2025

School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, AZ, USA.

Vision transformers (ViTs) and convolutional neural networks (CNNs) each possess distinct strengths in medical imaging: ViTs excel at capturing long-range dependencies through self-attention, while CNNs are adept at extracting local features via spatial convolution filters. However, ViTs may struggle to capture the detailed local spatial information critical for tasks like anomaly detection in medical imaging, while shallow CNNs often fail to effectively abstract global context. This study explores and evaluates hybrid architectures that integrate ViT and CNN components to leverage their complementary strengths in medical vision tasks such as segmentation, classification, reconstruction, and prediction.


Multi-class Classification of Retinal Eye Diseases from Ophthalmoscopy Images Using Transfer Learning-Based Vision Transformers.

J Imaging Inform Med

January 2025

College of Engineering, Department of Computer Engineering, Koç University, Rumelifeneri Yolu, 34450, Sarıyer, Istanbul, Turkey.

This study explores a transfer learning approach with vision transformers (ViTs) and convolutional neural networks (CNNs) for classifying retinal diseases, specifically diabetic retinopathy, glaucoma, and cataracts, from ophthalmoscopy images. Using a balanced subset of 4217 images and ophthalmology-specific pretrained ViT backbones, this method demonstrates significant improvements in classification accuracy, offering potential for broader applications in medical imaging. Glaucoma, diabetic retinopathy, and cataracts are common eye diseases that can cause vision loss if not treated.
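The frozen-backbone transfer-learning recipe this abstract describes can be sketched in a few lines of NumPy. Everything here is a toy stand-in: synthetic Gaussian clusters play the role of embeddings from an ophthalmology-pretrained ViT backbone, and only a linear softmax head is trained, as in typical transfer learning where the backbone stays frozen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for backbone features: 3 retinal-disease classes
# (e.g., glaucoma, diabetic retinopathy, cataract), 64-d embeddings.
n_per, dim, n_cls = 50, 64, 3
centers = rng.normal(0, 3, (n_cls, dim))
X = np.vstack([centers[c] + rng.normal(0, 1, (n_per, dim)) for c in range(n_cls)])
y = np.repeat(np.arange(n_cls), n_per)

# Train only a linear softmax head; the (hypothetical) backbone is frozen.
W = np.zeros((dim, n_cls))
b = np.zeros(n_cls)
for _ in range(200):
    logits = X @ W + b
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    grad = p - np.eye(n_cls)[y]                   # softmax cross-entropy gradient
    W -= 0.01 * (X.T @ grad) / len(X)
    b -= 0.01 * grad.mean(axis=0)

acc = ((X @ W + b).argmax(axis=1) == y).mean()
```

With well-separated synthetic clusters the head converges quickly; on real fundus images the quality of the frozen features, not the head, dominates accuracy.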


The problem of ground-level ozone (O₃) pollution has become a global environmental challenge with far-reaching impacts on public health and ecosystems. Effective control of ozone pollution still faces challenges from complex precursor interactions, variable meteorological conditions, and atmospheric chemical processes. To address this problem, a convolutional neural network (CNN) model combining the improved particle swarm optimization (IPSO) algorithm and SHAP analysis, called SHAP-IPSO-CNN, is developed in this study, aiming to reveal the key factors affecting ground-level ozone pollution and their interaction mechanisms.


Multi-scale multi-attention network for blood vessel segmentation in fundus images.

Sci Rep

January 2025

Department of Data Science and Artificial Intelligence, Sunway University, 47500, Petaling Jaya, Selangor Darul Ehsan, Malaysia.

Precise segmentation of retinal vasculature is crucial for the early detection, diagnosis, and treatment of vision-threatening ailments. However, this task is challenging due to limited contextual information, variations in vessel thicknesses, the complexity of vessel structures, and the potential for confusion with lesions. In this paper, we introduce a novel approach, the MSMA Net model, which overcomes these challenges by replacing traditional convolution blocks and skip connections with an improved multi-scale squeeze and excitation block (MSSE Block) and Bottleneck residual paths (B-Res paths) with spatial attention blocks (SAB).
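For context, the standard squeeze-and-excitation (SE) block that the improved MSSE block builds on can be sketched in NumPy. This is the generic SE recalibration (squeeze via global average pooling, excite through a bottleneck MLP, rescale channels), not the paper's multi-scale variant; the weight shapes and reduction ratio are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(fmap, w1, w2):
    # fmap: (H, W, C) feature map; w1: (C, C//r); w2: (C//r, C).
    # Squeeze: global average pooling collapses each channel to one value.
    z = fmap.mean(axis=(0, 1))                 # shape (C,)
    # Excitation: bottleneck MLP with ReLU, then sigmoid gates in (0, 1).
    s = sigmoid(np.maximum(z @ w1, 0.0) @ w2)  # shape (C,)
    # Recalibration: scale every channel of the feature map by its gate.
    return fmap * s

rng = np.random.default_rng(0)
fmap = rng.normal(size=(8, 8, 16))        # toy feature map, C = 16
w1 = rng.normal(size=(16, 4)) * 0.1       # reduction ratio r = 4
w2 = rng.normal(size=(4, 16)) * 0.1
out = se_block(fmap, w1, w2)
```

Because the gates lie strictly in (0, 1), the block can only attenuate channels, letting the network emphasize informative vessel features and suppress noisy ones.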

