A bridge disease identification approach based on an enhanced YOLO v3 algorithm is proposed to improve the accuracy of apparent (surface) disease detection on concrete bridges against complex backgrounds. First, the YOLO v3 network structure is modified to better accommodate the dense distribution and large scale variation of disease features: a squeeze-and-excitation (SE) attention module and a spatial pyramid pooling (SPP) module are added to the detection layers to strengthen semantic feature extraction. Second, CIoU, which provides better localization, is adopted as the bounding-box regression loss for training. Finally, the K-means algorithm is used to cluster anchor boxes on the bridge surface defect dataset. To evaluate the algorithm, a dataset of 1363 images covering exposed reinforcement, spalling, and water erosion damage of bridges was produced, and the network was trained after manual labelling and data augmentation. Experimental results show that the improved YOLO v3 model outperforms the original model in precision, recall, Average Precision (AP), and other indicators, and its overall mean Average Precision (mAP) increases by 5.5%. On an RTX 2080 Ti graphics card the detection rate reaches 84 frames per second, enabling more accurate, real-time detection of bridge diseases.
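The abstract names two standard building blocks added to the detection layers: SE channel attention and spatial pyramid pooling. The sketch below shows generic PyTorch versions of both; the reduction ratio, pooling kernel sizes, and where exactly they sit inside the YOLO v3 neck are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Squeeze-and-excitation channel attention (generic form).

    Reduction ratio of 16 is a common default, assumed here.
    """

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global spatial average per channel
        self.fc = nn.Sequential(             # excitation: learn per-channel gates in [0, 1]
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.pool(x).view(b, c)          # (B, C)
        w = self.fc(w).view(b, c, 1, 1)      # channel weights
        return x * w                         # reweight feature maps channel-wise


class SPP(nn.Module):
    """Spatial pyramid pooling as used in YOLO-style necks:
    concatenates max-pooled features at several receptive fields with the input.
    Kernel sizes (5, 9, 13) are the usual YOLO defaults, assumed here."""

    def __init__(self, kernel_sizes=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # output has (1 + len(kernel_sizes)) * C channels
        return torch.cat([x] + [p(x) for p in self.pools], dim=1)
```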
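For the training loss, the paper selects CIoU, which penalizes not only the overlap but also the centre distance and aspect-ratio mismatch between predicted and ground-truth boxes. A minimal sketch of the standard CIoU formulation follows; the (x1, y1, x2, y2) box layout is an assumption, and this is not claimed to be the authors' exact implementation.

```python
import math
import torch


def ciou_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Complete-IoU loss for axis-aligned boxes given as (x1, y1, x2, y2)."""
    # intersection and union
    x1 = torch.max(pred[..., 0], target[..., 0])
    y1 = torch.max(pred[..., 1], target[..., 1])
    x2 = torch.min(pred[..., 2], target[..., 2])
    y2 = torch.min(pred[..., 3], target[..., 3])
    inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)

    w1, h1 = pred[..., 2] - pred[..., 0], pred[..., 3] - pred[..., 1]
    w2, h2 = target[..., 2] - target[..., 0], target[..., 3] - target[..., 1]
    union = w1 * h1 + w2 * h2 - inter + eps
    iou = inter / union

    # squared distance between box centres
    cx1, cy1 = (pred[..., 0] + pred[..., 2]) / 2, (pred[..., 1] + pred[..., 3]) / 2
    cx2, cy2 = (target[..., 0] + target[..., 2]) / 2, (target[..., 1] + target[..., 3]) / 2
    rho2 = (cx1 - cx2) ** 2 + (cy1 - cy2) ** 2

    # squared diagonal of the smallest enclosing box
    cw = torch.max(pred[..., 2], target[..., 2]) - torch.min(pred[..., 0], target[..., 0])
    ch = torch.max(pred[..., 3], target[..., 3]) - torch.min(pred[..., 1], target[..., 1])
    c2 = cw ** 2 + ch ** 2 + eps

    # aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (torch.atan(w2 / (h2 + eps)) - torch.atan(w1 / (h1 + eps))) ** 2
    with torch.no_grad():
        alpha = v / (1 - iou + v + eps)

    return 1 - iou + rho2 / c2 + alpha * v
```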
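Finally, anchor boxes are re-estimated by clustering the ground-truth box sizes of the bridge defect dataset with K-means. The snippet below shows the usual YOLO-style procedure with 1 - IoU as the distance; the number of anchors (9, three per detection scale) and other settings are assumptions rather than values reported in the paper.

```python
import numpy as np


def kmeans_anchors(wh: np.ndarray, k: int = 9, iters: int = 100, seed: int = 0) -> np.ndarray:
    """Cluster (width, height) pairs of labelled boxes into k anchors
    using 1 - IoU as the distance measure."""
    rng = np.random.default_rng(seed)
    anchors = wh[rng.choice(len(wh), k, replace=False)]

    def iou(boxes: np.ndarray, centers: np.ndarray) -> np.ndarray:
        # IoU between boxes assumed to share a common top-left corner
        inter = np.minimum(boxes[:, None, 0], centers[None, :, 0]) * \
                np.minimum(boxes[:, None, 1], centers[None, :, 1])
        union = boxes[:, None, 0] * boxes[:, None, 1] + \
                centers[None, :, 0] * centers[None, :, 1] - inter
        return inter / union

    for _ in range(iters):
        assign = np.argmax(iou(wh, anchors), axis=1)   # nearest anchor = highest IoU
        new = np.array([wh[assign == i].mean(axis=0) if np.any(assign == i) else anchors[i]
                        for i in range(k)])
        if np.allclose(new, anchors):
            break
        anchors = new
    return anchors[np.argsort(anchors.prod(axis=1))]   # sort by area, small to large
```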

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11018769
DOI: http://dx.doi.org/10.1038/s41598-024-58707-2


Similar Publications

Hepatocellular carcinoma (HCC) is a prevalent cancer that significantly contributes to mortality globally, primarily due to its late diagnosis. Early detection is crucial yet challenging. This study leverages the potential of deep learning (DL) technologies, employing the You Only Look Once (YOLO) architecture, to enhance the detection of HCC in computed tomography (CT) images, aiming to improve early diagnosis and thereby patient outcomes.


In modern agriculture, the proliferation of weeds in cotton fields poses a significant threat to the healthy growth and yield of crops. Therefore, efficient detection and control of cotton field weeds are of paramount importance. In recent years, deep learning models have shown great potential in the detection of cotton field weeds, achieving high-precision weed recognition.


Background: This study aimed to develop and evaluate the detection and classification performance of different deep learning models on carotid plaque ultrasound images to achieve efficient and precise ultrasound screening for carotid atherosclerotic plaques.

Methods: This study collected 5611 carotid ultrasound images from 3683 patients from four hospitals between September 17, 2020, and December 17, 2022. After redundant information was cropped from the images and they were annotated by professional physicians, the dataset was divided into a training set (3927 images) and a test set (1684 images).


This study aims to improve the detection of dental burs, which are often undetected due to their minuscule size, slender profile, and substantial manufacturing output. The present study introduces You Only Look Once-Dental bur (YOLO-DB), an innovative deep learning-driven methodology for the accurate detection and counting of dental burs. A Lightweight Asymmetric Dual Convolution module (LADC) was devised to diminish the detrimental effects of extraneous features on the model's precision, thereby enhancing the feature extraction network.


Autonomous vehicles, often known as self-driving cars, have emerged as a disruptive technology with the promise of safer, more efficient, and convenient transportation. The existing works provide achievable results but lack effective solutions, as accumulation on roads can obscure lane markings and traffic signs, making it difficult for the self-driving car to navigate safely. Heavy rain, snow, fog, or dust storms can severely limit the car's sensors' ability to detect obstacles, pedestrians, and other vehicles, which pose potential safety risks.

