Background: Automated classification of Helicobacter pylori infection status, distinguishing among uninfected (no history of H. pylori infection), currently infected, and post-eradication cases, is gaining attention. However, the performance of existing automated approaches remains relatively low, primarily because of the complexity of the task. This study aims to develop a new multistage deep learning method for automatically classifying H. pylori infection status.

Methods: The proposed multistage deep learning method was developed on a training set of 538 subjects and then tested on a validation set of 146 subjects. Its classification performance was compared with that of four physicians.
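The architecture itself is not described in this abstract; the sketch below is only one plausible reading of a "multistage" classifier, assuming a two-stage design in which a first network separates currently infected images from the rest and a second network separates uninfected from post-eradication images. The class ordering, ResNet-18 backbones, and decision threshold are illustrative assumptions, not the authors' implementation.

    # Hypothetical two-stage H. pylori status classifier (illustrative sketch only).
    import torch
    import torch.nn as nn
    from torchvision import models

    class TwoStageHPClassifier(nn.Module):
        """Stage 1: currently infected vs. not; Stage 2: uninfected vs. post-eradication."""
        def __init__(self):
            super().__init__()
            self.stage1 = models.resnet18(weights=None, num_classes=2)  # assumed backbone
            self.stage2 = models.resnet18(weights=None, num_classes=2)  # assumed backbone

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Class indices: 0 = uninfected, 1 = post-eradication, 2 = currently infected.
            p_current = self.stage1(x).softmax(dim=1)[:, 1]   # P(currently infected)
            stage2_pred = self.stage2(x).argmax(dim=1)        # 0 or 1 among non-infected statuses
            return torch.where(p_current >= 0.5,
                               torch.full_like(stage2_pred, 2),
                               stage2_pred)

    model = TwoStageHPClassifier()
    preds = model(torch.randn(4, 3, 224, 224))  # four dummy endoscopic images -> class indices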

Results: The accuracy of our method was 87.7%, 83.6%, and 95.9% for uninfected, post-eradication, and currently infected cases, respectively, versus 81.7%, 76.5%, and 90.3% for the physicians. When the patient's H. pylori eradication history was also provided, the accuracy of the method rose to 92.5%, 91.1%, and 98.6% for uninfected, post-eradication, and currently infected cases, respectively, versus 85.6%, 85.1%, and 97.4% for the physicians.
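The abstract does not state how these per-class figures were computed; a common convention is class-wise accuracy, i.e., accuracy restricted to the subjects whose true status is the given class. The snippet below shows that calculation on dummy data; the labels and predictions are hypothetical, and the definition actually used in the paper may differ.

    # Class-wise accuracy on dummy data (hypothetical values, not the study's validation set).
    import numpy as np

    CLASSES = ("uninfected", "post-eradication", "currently infected")

    def per_class_accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
        # Fraction of correctly predicted subjects within each true class.
        return {name: float((y_pred[y_true == i] == i).mean())
                for i, name in enumerate(CLASSES)}

    y_true = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 2])
    y_pred = np.array([0, 0, 1, 1, 1, 0, 2, 2, 2, 2])
    print(per_class_accuracy(y_true, y_pred))
    # {'uninfected': 0.667, 'post-eradication': 0.667, 'currently infected': 1.0}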

Conclusion: The new multistage deep learning method shows potential as an innovative approach to gastric cancer screening. It can evaluate individual subjects' cancer risk from endoscopic images and reduce the burden on physicians.

Source: http://dx.doi.org/10.1007/s00535-024-02209-5

Similar Publications

Early identification of concrete cracks and multi-class detection can help to avoid future deformation or collapse of concrete structures. Traditional detection methodologies require enormous effort and time. To overcome these difficulties, current vision-based deep learning models can effectively detect and classify various types of concrete cracks.

In recent years, image-guided brachytherapy has become an important treatment for patients with locally advanced cervical cancer, and multi-modality image registration is a key step in this workflow. However, because of patient movement and other factors, the deformation between the different imaging modalities is discontinuous, which makes registration of pelvic computed tomography (CT) and magnetic resonance (MR) images difficult. In this paper, we propose a multimodality image registration network based on multistage transformation enhancement features (MTEF) to maintain the continuity of the deformation field.

Fine-grained restoration of Mongolian patterns based on a multi-stage deep learning network. Sci Rep, December 2024. College of Computer and Information Engineering, Inner Mongolia Agricultural University, Huhhot, 010000, Inner Mongolia, China.

Mongolian patterns are easily damaged by various factors during their inheritance and preservation, and traditional manual restoration is time-consuming, laborious, and costly. Although deep learning has driven rapid progress in image restoration, existing methods are designed mostly for natural scene images and do not apply well to Mongolian patterns, which have complex line-texture structures and highly saturated, rich colors.

Interactively Fusing Global and Local Features for Benign and Malignant Classification of Breast Ultrasound Images. Ultrasound Med Biol, December 2024. School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China.

Objective: Breast ultrasound (BUS) is used to classify benign and malignant breast tumors, and its automatic classification can reduce subjectivity. However, current convolutional neural networks (CNNs) face challenges in capturing global features, while vision transformer (ViT) networks have limitations in effectively extracting local features. Therefore, this study aimed to develop a deep learning method that enables the interaction and updating of intermediate features between CNN and ViT to achieve high-accuracy BUS image classification.
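The abstract names the idea of exchanging intermediate features between a CNN branch and a ViT branch without giving details; the block below is a generic sketch of such an interaction, not the authors' network. The projection layers, dimensions, and simple additive fusion are assumptions for illustration.

    # Generic CNN/ViT feature-interaction block (illustrative assumption, not the paper's design).
    import torch
    import torch.nn as nn

    class InteractionBlock(nn.Module):
        def __init__(self, channels: int, embed_dim: int):
            super().__init__()
            self.cnn_to_vit = nn.Linear(channels, embed_dim)      # local features -> token space
            self.vit_to_cnn = nn.Conv2d(embed_dim, channels, 1)   # tokens -> feature-map space

        def forward(self, fmap: torch.Tensor, tokens: torch.Tensor):
            b, c, h, w = fmap.shape
            # Local -> global: flatten the CNN map into tokens and add to the ViT tokens.
            tokens = tokens + self.cnn_to_vit(fmap.flatten(2).transpose(1, 2))
            # Global -> local: reshape tokens back to a map and add to the CNN features.
            fmap = fmap + self.vit_to_cnn(tokens.transpose(1, 2).reshape(b, -1, h, w))
            return fmap, tokens

    fmap = torch.randn(2, 64, 14, 14)        # CNN feature map
    tokens = torch.randn(2, 14 * 14, 192)    # ViT patch tokens (class token omitted for simplicity)
    fmap, tokens = InteractionBlock(64, 192)(fmap, tokens)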
