Pavement cracks affect the structural stability and safety of roads, making accurate crack identification essential for assessing the extent of damage and evaluating road health. However, traditional convolutional neural networks often suffer from missed and false detections when extracting cracks. This paper introduces CPCDNet, a network designed to maintain continuous extraction of pavement cracks. The model incorporates a Crack Align Module (CAM) and a Weighted Edge Cross Entropy Loss Function (WECEL) to enhance the continuity of crack extraction in complex environments. Experimental results show that the proposed model achieves mIoU scores of 77.71%, 80.36%, 91.19%, and 71.16% on the public datasets CFD, Crack500, Deepcrack537, and Gaps384, respectively. Compared with other networks, the proposed method improves both the continuity and the accuracy of crack extraction.
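
The abstract does not specify how the Weighted Edge Cross Entropy Loss Function is formulated. Purely as an illustration, the sketch below shows one common way an edge-weighted binary cross-entropy can be built for crack segmentation, where pixels in a thin band around the ground-truth crack boundary receive a larger weight; the function names, the morphological edge band, and the boost factor are assumptions and are not taken from the CPCDNet paper.

```python
# Illustrative sketch only: one plausible form of an edge-weighted
# binary cross-entropy for crack segmentation. The weighting scheme
# (morphological edge band + constant boost) is an assumption and is
# not taken from the CPCDNet paper.
import torch
import torch.nn.functional as F

def edge_weight_map(target: torch.Tensor, kernel: int = 3, boost: float = 4.0) -> torch.Tensor:
    """Build per-pixel weights that emphasize a thin band around crack edges.

    target: (N, 1, H, W) binary ground-truth mask in {0, 1}.
    """
    pad = kernel // 2
    dilated = F.max_pool2d(target, kernel, stride=1, padding=pad)    # grow the mask
    eroded = -F.max_pool2d(-target, kernel, stride=1, padding=pad)   # shrink the mask
    edge_band = (dilated - eroded).clamp(0, 1)                       # 1 on the boundary band
    return 1.0 + boost * edge_band                                   # base weight 1, boosted at edges

def weighted_edge_bce(logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Binary cross-entropy with higher weights near crack boundaries."""
    weights = edge_weight_map(target)
    return F.binary_cross_entropy_with_logits(logits, target, weight=weights, reduction="mean")

# Example usage with random tensors standing in for network output and labels.
if __name__ == "__main__":
    logits = torch.randn(2, 1, 64, 64)
    target = (torch.rand(2, 1, 64, 64) > 0.95).float()
    print(weighted_edge_bce(logits, target).item())
```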


Source

PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11621684
DOI: http://dx.doi.org/10.1038/s41598-024-81119-1

Publication Analysis

Top Keywords

convolutional neural (8); pavement cracks (8); crack extraction (8); crack (5); novel convolutional (4); neural network (4); network enhancing (4); enhancing continuity (4); continuity pavement (4); pavement crack (4)

Similar Publications

In Vivo Confocal Microscopy for Automated Detection of Meibomian Gland Dysfunction: A Study Based on Deep Convolutional Neural Networks.

J Imaging Inform Med

January 2025

Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, National Clinical Research Center for Eye Disease, Shanghai, 200080, China.

The objectives of this study are to construct a deep convolutional neural network (DCNN) model to diagnose and classify meibomian gland dysfunction (MGD) based on the in vivo confocal microscope (IVCM) images and to evaluate the performance of the DCNN model and its auxiliary significance for clinical diagnosis and treatment. We extracted 6643 IVCM images from the three hospitals' IVCM database as the training set for the DCNN model and 1661 IVCM images from the other two hospitals' IVCM database as the test set to examine the performance of the model. Construction of the DCNN model was performed using DenseNet-169.
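
The abstract notes only that the DCNN was built on DenseNet-169. A minimal sketch of such a classifier in PyTorch is shown below, assuming an ImageNet-pretrained torchvision backbone and a hypothetical four-class MGD output; the class count, input size, and training details are assumptions, not details reported by the study.

```python
# Minimal sketch only: a DenseNet-169 backbone adapted for MGD
# classification. The number of classes (num_classes=4) and the use of
# ImageNet pretraining are assumptions, not details from the study.
import torch
import torch.nn as nn
from torchvision import models

def build_mgd_classifier(num_classes: int = 4) -> nn.Module:
    model = models.densenet169(weights=models.DenseNet169_Weights.IMAGENET1K_V1)
    # Replace the final classifier so the output matches the MGD classes.
    in_features = model.classifier.in_features
    model.classifier = nn.Linear(in_features, num_classes)
    return model

if __name__ == "__main__":
    net = build_mgd_classifier()
    dummy = torch.randn(1, 3, 224, 224)   # IVCM image resized to 224x224 (assumed preprocessing)
    print(net(dummy).shape)               # torch.Size([1, 4])
```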


Systematic Review of Hybrid Vision Transformer Architectures for Radiological Image Analysis.

J Imaging Inform Med

January 2025

School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, AZ, USA.

Vision transformers (ViT) and convolutional neural networks (CNNs) each possess distinct strengths in medical imaging: ViT excels at capturing long-range dependencies through self-attention, while CNNs are adept at extracting local features via spatial convolution filters. While ViT may struggle to capture detailed local spatial information, which is critical for tasks like anomaly detection in medical imaging, shallow CNNs often fail to effectively abstract global context. This study aims to explore and evaluate hybrid architectures that integrate ViT and CNN to leverage their complementary strengths for enhanced performance in medical vision tasks such as segmentation, classification, reconstruction, and prediction.
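
As a schematic illustration of the hybrid pattern this review surveys (not of any specific architecture it covers), the sketch below lets a shallow CNN extract local feature maps that are then flattened into tokens and refined by a transformer encoder for global context; all layer sizes and the classification head are arbitrary assumptions.

```python
# Schematic sketch of one common CNN + ViT hybrid pattern: a shallow CNN
# extracts local feature maps, which are flattened into tokens and refined
# by a transformer encoder for global context. Layer sizes are arbitrary
# and do not come from any specific architecture in the review.
import torch
import torch.nn as nn

class HybridCNNViT(nn.Module):
    def __init__(self, in_ch: int = 1, dim: int = 128, num_classes: int = 2):
        super().__init__()
        self.cnn = nn.Sequential(                      # local feature extractor
            nn.Conv2d(in_ch, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=2)  # global context
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.cnn(x)                            # (N, dim, H/4, W/4)
        tokens = feats.flatten(2).transpose(1, 2)      # (N, H*W/16, dim) token sequence
        tokens = self.transformer(tokens)
        return self.head(tokens.mean(dim=1))           # pool tokens and classify

if __name__ == "__main__":
    model = HybridCNNViT()
    print(model(torch.randn(2, 1, 64, 64)).shape)      # torch.Size([2, 2])
```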


Multi-class Classification of Retinal Eye Diseases from Ophthalmoscopy Images Using Transfer Learning-Based Vision Transformers.

J Imaging Inform Med

January 2025

College of Engineering, Department of Computer Engineering, Koç University, Rumelifeneri Yolu, 34450, Sarıyer, Istanbul, Turkey.

This study explores a transfer learning approach with vision transformers (ViTs) and convolutional neural networks (CNNs) for classifying retinal diseases, specifically diabetic retinopathy, glaucoma, and cataracts, from ophthalmoscopy images. Using a balanced subset of 4217 images and ophthalmology-specific pretrained ViT backbones, this method demonstrates significant improvements in classification accuracy, offering potential for broader applications in medical imaging. Glaucoma, diabetic retinopathy, and cataracts are common eye diseases that can cause vision loss if not treated.
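
A minimal transfer-learning sketch in the spirit of this setup is shown below, using a generic ImageNet-pretrained ViT-B/16 from torchvision as a stand-in for the ophthalmology-specific pretrained backbones; freezing the encoder and using a three-class head for the listed diseases are assumptions, not choices reported by the study.

```python
# Sketch only: transfer learning with a ViT backbone for retinal disease
# classification. A generic ImageNet-pretrained ViT-B/16 from torchvision
# stands in for the ophthalmology-specific pretrained backbones used in the
# study; freezing the encoder and the 3-class head are assumptions.
import torch
import torch.nn as nn
from torchvision import models

def build_retina_vit(num_classes: int = 3, freeze_backbone: bool = True) -> nn.Module:
    model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
    if freeze_backbone:
        for p in model.parameters():
            p.requires_grad = False               # fine-tune only the new head
    model.heads.head = nn.Linear(model.heads.head.in_features, num_classes)
    return model

if __name__ == "__main__":
    net = build_retina_vit()
    print(net(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 3])
```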


Multi scale multi attention network for blood vessel segmentation in fundus images.

Sci Rep

January 2025

Department of Data Science and Artificial Intelligence, Sunway University, 47500, Petaling Jaya, Selangor Darul Ehsan, Malaysia.

Precise segmentation of retinal vasculature is crucial for the early detection, diagnosis, and treatment of vision-threatening ailments. However, this task is challenging due to limited contextual information, variations in vessel thicknesses, the complexity of vessel structures, and the potential for confusion with lesions. In this paper, we introduce a novel approach, the MSMA Net model, which overcomes these challenges by replacing traditional convolution blocks and skip connections with an improved multi-scale squeeze and excitation block (MSSE Block) and Bottleneck residual paths (B-Res paths) with spatial attention blocks (SAB).
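
To make the named building blocks concrete, the sketch below shows textbook squeeze-and-excitation (channel attention) and spatial attention modules of the kind the abstract mentions; it does not reproduce the paper's multi-scale MSSE Block or B-Res path designs, and all dimensions are assumptions.

```python
# Generic sketch of the two attention ingredients the abstract names:
# a squeeze-and-excitation (channel attention) block and a spatial
# attention block. These are textbook forms, not the paper's exact
# MSSE Block or B-Res path designs.
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    """Channel attention: global-average-pool, bottleneck MLP, sigmoid gate."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape
        gate = self.fc(x.mean(dim=(2, 3))).view(n, c, 1, 1)   # per-channel weights
        return x * gate

class SpatialAttention(nn.Module):
    """Spatial attention: gate each location from channel-wise mean and max."""
    def __init__(self, kernel: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel, padding=kernel // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.max(dim=1, keepdim=True).values], dim=1)
        return x * torch.sigmoid(self.conv(pooled))

if __name__ == "__main__":
    feat = torch.randn(2, 32, 64, 64)
    print(SpatialAttention()(SqueezeExcite(32)(feat)).shape)  # torch.Size([2, 32, 64, 64])
```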


Bone is a common site for the metastasis of malignant tumors, and Single Photon Emission Computed Tomography (SPECT) is widely used to detect these metastases. Accurate delineation of metastatic bone lesions in SPECT images is essential for developing treatment plans. However, current clinical practices rely on manual delineation by physicians, which is prone to variability and subjective interpretation.

