Mutations can alter a gene's DNA pattern, and by recognizing these mutations, many carcinomas can be diagnosed during their progression stages. Much of the human body remains hidden and not yet fully understood. A total of 7539 neoplasm cases were reported from 1 January 2021 to 31 December 2021; of these, 3156 (41.9%) occurred in male and 4383 (58.1%) in female patients. Several machine learning and deep learning frameworks have already been implemented to detect mutations, but these techniques lack generalized datasets and need further optimization for better results. Deep learning-based neural networks provide the computational power needed to model the complex structure of gastric carcinoma-driving gene mutations. This study proposes deep learning approaches, namely long short-term memory (LSTM), gated recurrent units (GRU), and bidirectional LSTM (Bi-LSTM), to help identify the progression of gastric carcinoma in an optimized manner. The study covers 61 carcinogenic driver genes whose mutations can cause gastric cancer. Mutation information was downloaded from intOGen.org and normal gene sequences from asia.ensembl.org, as explained in the data collection section. The proposed deep learning models are validated using the self-consistency test (SCT), 10-fold cross-validation test (FCVT), and independent set test (IST). On the IST, accuracy, sensitivity, specificity, MCC, and AUC are 97.18%, 98.35%, 96.01%, 0.94, and 0.98 for LSTM; 99.46%, 98.93%, 100%, 0.989, and 1.00 for Bi-LSTM; and 99.46%, 98.93%, 100%, 0.989, and 1.00 for GRU.
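The IST figures above (accuracy, sensitivity, specificity, MCC) all follow from a binary confusion matrix. A minimal stdlib sketch of how such metrics are computed; the counts are illustrative, not the study's data:

```python
import math

def binary_metrics(tp, fp, tn, fn):
    """Compute accuracy, sensitivity, specificity, and MCC
    from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate (recall)
    specificity = tn / (tn + fp)   # true-negative rate
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return accuracy, sensitivity, specificity, mcc

# Illustrative counts only (not taken from the study):
acc, sen, spe, mcc = binary_metrics(tp=92, fp=4, tn=96, fn=8)
print(f"acc={acc:.3f} sen={sen:.3f} spe={spe:.3f} mcc={mcc:.3f}")
```

Reporting MCC alongside accuracy is useful because, unlike accuracy, MCC stays informative when the positive and negative classes are imbalanced.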
Download full-text PDF:
- PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10340236
- DOI: http://dx.doi.org/10.3390/diagnostics13132291
J Comput Biol
December 2024
Electrical, Computer and Biomedical Engineering, Toronto Metropolitan University, Toronto, Canada.
Image-to-image translation has gained popularity in the medical field to transform images from one domain to another. Medical image synthesis via domain transformation is advantageous in its ability to augment an image dataset where images for a given class are limited. From the learning perspective, this process contributes to the data-oriented robustness of the model by inherently broadening the model's exposure to more diverse visual data and enabling it to learn more generalized features.
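To make the augmentation claim concrete, the sketch below does only the bookkeeping: given class labels, it computes how many synthetic images an image-to-image translation model would need to generate per class to balance the dataset. The function name and labels are illustrative, not from the article:

```python
from collections import Counter

def synthesis_budget(labels):
    """For each class, how many synthetic images a translation model
    would need to generate to match the largest class."""
    counts = Counter(labels)
    target = max(counts.values())
    return {cls: target - n for cls, n in counts.items()}

# Toy label list where 'benign' is under-represented:
labels = ["malignant"] * 5 + ["benign"] * 2
print(synthesis_budget(labels))  # 'benign' needs 3 synthetic images
```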
Oral Radiol
December 2024
Department of Oral, Dental and Maxillofacial Radiology, Faculty of Dentistry, Ataturk University, Erzurum, 25240, Turkey.
Objective: The aim of this study was to use deep learning (DL) models, trained with the help of cone beam computed tomography (CBCT), to determine the contact relationship and position of impacted mandibular third molar teeth (IMM) with the mandibular canal (MC) in panoramic radiography (PR) images, and to compare the performances of the architectures.
Methods: In this study, a total of 546 IMMs from 290 patients with CBCT and PR images were included. The performances of SqueezeNet, GoogLeNet, and Inception-v3 architectures in solving four problems on two different regions of interest (RoI) were evaluated.
Tomography
December 2024
Department of Nuclear Medicine and Molecular Imaging, Ajou University School of Medicine, Suwon 16499, Republic of Korea.
Background/objectives: Calculating the radiation dose from CT in ¹⁸F-PET/CT examinations poses a significant challenge. The objective of this study is to develop a deep learning-based automated program that standardizes the measurement of radiation doses.
Methods: The torso CT was segmented into six distinct regions using TotalSegmentator.
Tomography
December 2024
Department of Diagnostic Radiology, Kitasato University School of Medicine, Sagamihara 252-0374, Japan.
Objectives: We evaluated the noise reduction effects of deep learning reconstruction (DLR) and hybrid iterative reconstruction (HIR) in brain computed tomography (CT).
Methods: CT images of a 16 cm dosimetry phantom, a head phantom, and the brains of 11 patients were reconstructed using filtered backprojection (FBP) and various levels of DLR and HIR. The slice thickness was 5, 2.
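Noise in such phantom comparisons is typically quantified as the standard deviation of CT numbers inside a uniform region of interest. A stdlib sketch with illustrative pixel values, not measurements from this study:

```python
from statistics import pstdev

def roi_noise(pixel_values):
    """Image noise estimated as the standard deviation of CT numbers
    (HU) sampled from a uniform region of interest."""
    return pstdev(pixel_values)

# Toy ROI samples (HU): an FBP-like recon vs. a smoother DLR-like recon.
fbp_roi = [30, 36, 24, 33, 27, 39, 21, 30]
dlr_roi = [30, 32, 28, 31, 29, 33, 27, 30]
print(roi_noise(fbp_roi) > roi_noise(dlr_roi))  # smoother recon -> lower SD
```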
Tomography
December 2024
Department of Medical Imaging and Radiological Science, I-Shou University, Kaohsiung City 824005, Taiwan.
Breast cancer is a leading cause of mortality among women in Taiwan and globally. Non-invasive imaging methods, such as mammography and ultrasound, are critical for early detection, yet standalone modalities are limited in diagnostic accuracy. This study aims to enhance breast cancer detection through a cross-modality fusion approach that combines mammography and ultrasound imaging using advanced convolutional neural network (CNN) architectures.
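The article describes CNN-based cross-modality fusion; as a simplified stand-in, the sketch below shows score-level (late) fusion, where per-modality malignancy probabilities are combined by a weighted average. All names and numbers are illustrative:

```python
def fuse_scores(mammo_prob, ultra_prob, w_mammo=0.5):
    """Late (score-level) fusion: weighted average of malignancy
    probabilities from two independent per-modality models."""
    return w_mammo * mammo_prob + (1 - w_mammo) * ultra_prob

# A borderline mammography score backed by a confident ultrasound score
# pushes the fused decision over a 0.5 threshold (numbers illustrative).
fused = fuse_scores(mammo_prob=0.45, ultra_prob=0.80, w_mammo=0.6)
print(fused >= 0.5)  # True
```

Feature-level fusion, as in the CNN architectures the study evaluates, instead concatenates intermediate representations before the classifier; score-level fusion is shown here only because it fits in a few self-contained lines.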