Prediction of the human papillomavirus status in patients with oropharyngeal squamous cell carcinoma by FDG-PET imaging dataset using deep learning analysis: A hypothesis-generating study.

Eur J Radiol

Department of Radiology, Boston Medical Center, Boston University School of Medicine, Boston, MA, United States; Department of Radiation Oncology, Boston Medical Center, Boston University School of Medicine, Boston, MA, United States; Department of Otolaryngology-Head and Neck Surgery, Boston Medical Center, Boston University School of Medicine, Boston, MA, United States.

Published: May 2020

Purpose: To assess the diagnostic accuracy of imaging-based deep learning analysis to differentiate between human papillomavirus (HPV) positive and negative oropharyngeal squamous cell carcinomas (OPSCCs) using FDG-PET images.

Methods: One hundred and twenty patients with OPSCC who underwent pretreatment FDG-PET/CT were included and divided into a training cohort (90 patients) and a validation cohort (30 patients). In the training session, 2160 FDG-PET images were analyzed after a data augmentation process by a deep learning technique to create a diagnostic model discriminating between HPV-positive and HPV-negative OPSCCs. The validation cohort data were subsequently analyzed to confirm the diagnostic accuracy of the deep learning-based model in determining HPV status. In addition, two radiologists evaluated the validation cohort image data to determine the HPV status based on each tumor's imaging findings.
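The abstract does not specify which augmentations were applied; as a minimal sketch, the following illustration (using a hypothetical mix of rotations, flips, and additive noise on toy arrays) shows how 90 single-slice training images could be expanded to the reported 2160 augmented images, i.e. 24 variants per patient:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_24(img):
    """Produce 24 variants of one 2-D image:
    3 flip states x 4 rotations x 2 noise settings = 24.
    (Illustrative only; the paper's actual augmentations are not stated.)"""
    out = []
    for flipped in (img, np.fliplr(img), np.flipud(img)):
        for k in range(4):
            rot = np.rot90(flipped, k)
            out.append(rot)                                    # clean variant
            out.append(rot + rng.normal(0, 0.01, rot.shape))   # noisy variant
    return out

# 90 training patients, one toy 64x64 "PET slice" each
training = [rng.random((64, 64)) for _ in range(90)]
augmented = [a for img in training for a in augment_24(img)]
print(len(augmented))  # 90 patients x 24 variants = 2160 images
```

Any augmentation scheme yielding 24 variants per patient reproduces the stated image count; the specific transforms above are assumptions for illustration.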

Results: In the training session, the diagnostic model was successfully created from the training dataset. In the validation session, the deep learning diagnostic model showed a sensitivity of 0.83, specificity of 0.83, positive predictive value of 0.88, negative predictive value of 0.77, and diagnostic accuracy of 0.83, while visual assessment by the two radiologists yielded 0.78, 0.5, 0.7, 0.6, and 0.67 (reader 1) and 0.56, 0.67, 0.71, 0.5, and 0.6 (reader 2), respectively. A chi-square test showed a significant difference between deep learning-based and radiologist-based diagnostic accuracy (reader 1: P = 0.016; reader 2: P = 0.008).
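The reported validation metrics are mutually consistent with a single 2x2 confusion matrix. Assuming 18 HPV-positive and 12 HPV-negative validation patients (counts reconstructed for illustration only, not stated in the abstract), the deep learning model's figures can be reproduced as:

```python
# Hypothetical confusion-matrix counts that reproduce the reported
# validation metrics for the deep learning model (30 patients total).
TP, FN, FP, TN = 15, 3, 2, 10

sensitivity = TP / (TP + FN)                 # true-positive rate
specificity = TN / (TN + FP)                 # true-negative rate
ppv = TP / (TP + FP)                         # positive predictive value
npv = TN / (TN + FN)                         # negative predictive value
accuracy = (TP + TN) / (TP + FN + FP + TN)   # overall diagnostic accuracy

print(round(sensitivity, 2), round(specificity, 2),
      round(ppv, 2), round(npv, 2), round(accuracy, 2))
# → 0.83 0.83 0.88 0.77 0.83
```

Note that PPV and NPV depend on the cohort's HPV prevalence, so these counts are one consistent reconstruction, not the study's actual tally.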

Conclusions: A deep learning diagnostic model built on FDG-PET imaging data can serve as a supportive tool for determining HPV status in patients with OPSCC.

Source: http://dx.doi.org/10.1016/j.ejrad.2020.108936


Similar Publications

Sleep stage classification is one of the essential factors in sleep disorder diagnosis, which can contribute to many functional disease treatments or prevent primary cognitive risks in daily activities. In this study, a novel method of mapping EEG signals to music is proposed to classify sleep stages. A total of 4.


AxonFinder: Automated segmentation of tumor innervating neuronal fibers.

Heliyon

January 2025

Cancer Early Detection Advanced Research Center (CEDAR), Knight Cancer Institute, Oregon Health and Science University, Portland, OR, USA.

Neurosignaling is increasingly recognized as a critical factor in cancer progression, where neuronal innervation of primary tumors contributes to the disease's advancement. This study focuses on segmenting individual axons within the prostate tumor microenvironment, which have been challenging to detect and analyze due to their irregular morphologies. We present a novel deep learning-based approach for the automated segmentation of axons, AxonFinder, leveraging a U-Net model with a ResNet-101 encoder, based on a multiplexed imaging approach.


An empirical study of LLaMA3 quantization: from LLMs to MLLMs.

Vis Intell

December 2024

Department of Information Technology and Electrical Engineering, ETH Zurich, Sternwartstrasse 7, Zürich, Switzerland.

The LLaMA family, a collection of foundation language models ranging from 7B to 65B parameters, has become one of the most powerful open-source large language models (LLMs) and the popular LLM backbone of multi-modal large language models (MLLMs), widely used in computer vision and natural language understanding tasks. In particular, LLaMA3 models have recently been released and have achieved impressive performance in various domains with super-large scale pre-training on over 15T tokens of data. Given the wide application of low-bit quantization for LLMs in resource-constrained scenarios, we explore LLaMA3's capabilities when quantized to low bit-width.


Advances in modeling cellular state dynamics: integrating omics data and predictive techniques.

Anim Cells Syst (Seoul)

January 2025

Department of Genome Medicine and Science, Gachon University College of Medicine, Incheon, Republic of Korea.

Dynamic modeling of cellular states has emerged as a pivotal approach for understanding complex biological processes such as cell differentiation, disease progression, and tissue development. This review provides a comprehensive overview of current approaches for modeling cellular state dynamics, focusing on techniques ranging from dynamic or static biomolecular network models to deep learning models. We highlight how these approaches, integrated with various omics data such as transcriptomics and single-cell RNA sequencing, could be used to capture and predict cellular behavior and transitions.


In weightlifting, quantitative kinematic analysis is essential for evaluating snatch performance. While marker-based (MB) approaches are commonly used, they are impractical for training or competitions. Markerless video-based (VB) systems utilizing deep learning-based pose estimation algorithms could address this issue.

