Objective: To evaluate the repeatability of AI-based automatic measurement of vertebral and cardiovascular markers on low-dose chest CT.
Methods: We included participants of the population-based Imaging in Lifelines (ImaLife) study with low-dose chest CT at baseline and at 3- to 4-month follow-up. An AI system (AI-Rad Companion chest CT prototype) performed automatic segmentation and quantification of vertebral height and density, aortic diameters, heart volume (cardiac chambers plus pericardial fat), and coronary artery calcium volume (CACV).
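Repeatability of a continuous AI measurement between baseline and follow-up scans is typically summarized with agreement statistics. A minimal sketch in Python, using made-up numbers rather than study data (the study's actual analysis may differ):

```python
import numpy as np

# Hypothetical paired measurements: the same marker (e.g., an aortic diameter
# in mm) measured by the AI system at baseline and at follow-up.
baseline  = np.array([34.1, 29.8, 41.2, 36.5, 31.0])
follow_up = np.array([34.6, 29.5, 40.8, 37.1, 31.4])

diff = follow_up - baseline
mean_pair = (follow_up + baseline) / 2

# Bland-Altman style summary of repeatability.
bias = diff.mean()               # systematic difference between the two scans
loa  = 1.96 * diff.std(ddof=1)   # 95% limits of agreement (half-width)

# Within-subject coefficient of variation (%), a common repeatability metric.
within_sd = np.sqrt((diff ** 2).mean() / 2)
wcv = 100 * within_sd / mean_pair.mean()

print(f"bias={bias:.2f} mm, LoA=±{loa:.2f} mm, within-subject CV={wcv:.1f}%")
```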
Various healthcare domains, including radiology, have witnessed successful preliminary implementation of artificial intelligence (AI) solutions, though limited generalizability hinders their widespread adoption. Currently, most research groups and industry have limited access to the data needed for external validation studies. The creation and accessibility of benchmark datasets to validate such solutions represent a critical step towards generalizability, for which aspects ranging from preprocessing to regulatory issues and biostatistical principles come into play.
The advent of computer vision technology and increased usage of video cameras in clinical settings have facilitated advancements in movement disorder analysis. This review investigated these advancements in terms of providing practical, low-cost solutions for the diagnosis and analysis of movement disorders, such as Parkinson's disease, ataxia, dyskinesia, and Tourette syndrome. Traditional diagnostic methods for movement disorders are typically reliant on the subjective assessment of motor symptoms, which poses inherent challenges.
Purpose: Conventional normal tissue complication probability (NTCP) models for patients with head and neck cancer are typically based on single-value variables, which, for radiation-induced xerostomia, are baseline xerostomia and mean salivary gland doses. This study aimed to improve the prediction of late xerostomia by using 3-dimensional information from radiation dose distributions, computed tomography imaging, organ-at-risk segmentations, and clinical variables with deep learning (DL).
Methods And Materials: An international cohort of 1208 patients with head and neck cancer from 2 institutes was used to train and twice validate DL models (deep convolutional neural network, EfficientNet-v2, and ResNet) with 3-dimensional dose distribution, computed tomography scan, organ-at-risk segmentations, baseline xerostomia score, sex, and age as input.
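A common way to combine 3-dimensional image-like inputs (dose distribution, CT, organ-at-risk masks) with tabular clinical variables is to stack the volumes as channels and concatenate the clinical features after global pooling. The sketch below is illustrative only; the layer sizes and the class name XerostomiaNet are assumptions and do not reproduce the EfficientNet-v2 or ResNet configurations used in the study:

```python
import torch
import torch.nn as nn

class XerostomiaNet(nn.Module):
    """Illustrative 3D CNN combining image-like channels with clinical covariates."""
    def __init__(self, n_channels=3, n_clinical=3):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv3d(n_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),           # -> (B, 32, 1, 1, 1)
        )
        self.head = nn.Sequential(
            nn.Linear(32 + n_clinical, 16), nn.ReLU(),
            nn.Linear(16, 1),                  # logit for late xerostomia
        )

    def forward(self, volume, clinical):
        x = self.cnn(volume).flatten(1)        # pooled image features
        x = torch.cat([x, clinical], dim=1)    # append baseline score, sex, age
        return self.head(x)

# Toy input: dose distribution, CT, and OAR mask stacked as channels.
volume   = torch.randn(2, 3, 32, 64, 64)
clinical = torch.randn(2, 3)
logits = XerostomiaNet()(volume, clinical)     # shape (2, 1)
```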
Federated learning enables training models on distributed, privacy-sensitive medical imaging data. However, data heterogeneity across participating institutions leads to reduced model performance and fairness issues, especially for underrepresented datasets. To address these challenges, we propose leveraging the multi-head attention mechanism in Vision Transformers to align the representations of heterogeneous data across clients.
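The paper's specific attention-based alignment mechanism is not detailed here; as a generic illustration of the underlying idea, a client-side loss can combine the task objective with a term that pulls the local transformer representation toward a server-shared reference. All names and the alignment scheme in this sketch are hypothetical:

```python
import torch
import torch.nn as nn

# Toy ViT-style encoder block; illustrates the alignment idea, not the paper's method.
encoder = nn.TransformerEncoderLayer(d_model=64, nhead=8, batch_first=True)

def client_loss(tokens, labels, classifier, global_cls, align_weight=0.1):
    """Task loss plus a term pulling the local summary representation
    toward a server-shared global representation (hypothetical scheme)."""
    encoded = encoder(tokens)                  # (B, seq, 64)
    cls_repr = encoded[:, 0]                   # first token as summary representation
    task = nn.functional.cross_entropy(classifier(cls_repr), labels)
    align = nn.functional.mse_loss(cls_repr, global_cls.expand_as(cls_repr))
    return task + align_weight * align

classifier = nn.Linear(64, 2)
tokens = torch.randn(4, 17, 64)                # e.g., 16 patch tokens + 1 class token
labels = torch.randint(0, 2, (4,))
global_cls = torch.zeros(64)                   # stand-in for a server aggregate
loss = client_loss(tokens, labels, classifier, global_cls)
loss.backward()
```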
Background And Purpose: To optimize our previously proposed TransRP, a model integrating a CNN (convolutional neural network) and a ViT (Vision Transformer) designed for recurrence-free survival prediction in oropharyngeal cancer, and to extend its application to the prediction of multiple clinical outcomes, including locoregional control (LRC), distant metastasis-free survival (DMFS), and overall survival (OS).
Materials And Methods: Data was collected from 400 patients (300 for training and 100 for testing) diagnosed with oropharyngeal squamous cell carcinoma (OPSCC) who underwent (chemo)radiotherapy at University Medical Center Groningen. Each patient's data comprised pre-treatment PET/CT scans, clinical parameters, and clinical outcome endpoints, namely LRC, DMFS and OS.
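TransRP's exact architecture is described in the original paper; a generic CNN-plus-Transformer hybrid for multi-outcome prediction, with illustrative layer sizes and names, looks roughly like this:

```python
import torch
import torch.nn as nn

class CNNTransformer(nn.Module):
    """Generic CNN + Transformer hybrid (illustrative, not TransRP itself)."""
    def __init__(self, in_channels=2, dim=64, n_outcomes=3):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv3d(in_channels, dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(dim, dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.transformer = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.head = nn.Linear(dim, n_outcomes)    # e.g., risk scores for LRC, DMFS, OS

    def forward(self, x):
        feats = self.cnn(x)                        # (B, dim, D, H, W)
        tokens = feats.flatten(2).transpose(1, 2)  # spatial positions become tokens
        encoded = self.transformer(tokens)
        return self.head(encoded.mean(dim=1))      # pool tokens, predict outcomes

x = torch.randn(1, 2, 32, 32, 32)                 # e.g., PET and CT as two channels
scores = CNNTransformer()(x)                       # shape (1, 3)
```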
Background: The different tumor appearance of head and neck cancer across imaging modalities, scanners, and acquisition parameters accounts for the highly subjective nature of the manual tumor segmentation task. The variability of the manual contours is one of the causes of the lack of generalizability and the suboptimal performance of deep learning (DL) based tumor auto-segmentation models. Therefore, a DL-based method was developed that outputs predicted tumor probabilities for each PET-CT voxel in the form of a probability map instead of one fixed contour.
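The core idea is that the network's voxel-wise output is kept as a probability map rather than collapsed into a single contour; different probability thresholds then yield different candidate contours. A minimal sketch with synthetic logits:

```python
import torch

# Hypothetical per-voxel logits from a segmentation network for one PET-CT volume.
logits = torch.randn(1, 1, 64, 128, 128)

# Instead of committing to one contour, keep the voxel-wise tumor probabilities.
prob_map = torch.sigmoid(logits)

# Different thresholds correspond to different iso-probability contours.
contour_50 = prob_map > 0.5     # a consensus-like contour
contour_90 = prob_map > 0.9     # a stricter, high-confidence core
print(contour_50.sum().item(), contour_90.sum().item())
```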
Background: Emphysema influences the appearance of lung tissue in computed tomography (CT). We evaluated whether this affects lung nodule detection by artificial intelligence (AI) and human readers (HR).
Methods: Individuals who had undergone low-dose chest CT were selected from the "Lifelines" cohort.
Objective: To systematically review radiomic feature reproducibility and model validation strategies in recent studies dealing with CT and MRI radiomics of bone and soft-tissue sarcomas, thus updating a previous version of this review which included studies published up to 2020.
Methods: A literature search was conducted on EMBASE and PubMed databases for papers published between January 2021 and March 2023. Data regarding radiomic feature reproducibility and model validation strategies were extracted and analyzed.
Deep learning has proven to be highly effective in diagnosing COVID-19; however, its efficacy is contingent upon the availability of extensive data for model training. The data sharing among hospitals, which is crucial for training robust models, is often restricted by privacy regulations. Federated learning (FL) emerges as a solution by enabling model training across multiple hospitals while preserving data privacy.
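The canonical aggregation step in federated learning is a weighted average of client model parameters (FedAvg); raw images never leave the hospitals. A toy sketch with a stand-in model and made-up client sizes:

```python
import copy
import torch
import torch.nn as nn

def federated_average(client_states, client_sizes):
    """Weighted average of client model parameters (FedAvg-style aggregation)."""
    total = sum(client_sizes)
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = sum(s[key] * (n / total) for s, n in zip(client_states, client_sizes))
    return avg

# Toy example: three "hospitals" with local copies of the same small model.
model = nn.Linear(10, 2)
clients = [copy.deepcopy(model) for _ in range(3)]
# ... each client would train locally on its private data here ...
new_state = federated_average([c.state_dict() for c in clients],
                              client_sizes=[120, 80, 200])   # made-up sample counts
model.load_state_dict(new_state)   # server updates the global model without seeing raw data
```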
Purpose: To propose a new quality scoring tool, METhodological RadiomICs Score (METRICS), to assess and improve research quality of radiomics studies.
Methods: We conducted an online modified Delphi study with a group of international experts. It was performed in three consecutive stages: Stage#1, item preparation; Stage#2, panel discussion among EuSoMII Auditing Group members to identify the items to be voted on; and Stage#3, four rounds of the modified Delphi exercise by panelists to determine the items eligible for the METRICS and their weights.
Objectives: To present a framework to develop and implement a fast-track artificial intelligence (AI) curriculum into an existing radiology residency program, with the potential to prepare a new generation of AI-conscious radiologists.
Methods: The AI-curriculum framework comprises five sequential steps: (1) forming a team of AI experts, (2) assessing the residents' knowledge level and needs, (3) defining learning objectives, (4) matching these objectives with effective teaching strategies, and finally (5) implementing and evaluating the pilot. Following these steps, a multidisciplinary team of AI engineers, radiologists, and radiology residents designed a 3-day program, including didactic lectures, hands-on laboratory sessions, and group discussions with experts to enhance AI understanding.
Background: During lung cancer screening, indeterminate pulmonary nodules (IPNs) are a frequent finding. We aim to predict whether IPNs are resolving or non-resolving to reduce follow-up examinations, using machine learning (ML) models. We incorporated dedicated techniques to enhance prediction explainability.
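The abstract does not specify the explainability techniques used; permutation importance is one common, model-agnostic option. A sketch with synthetic stand-in features (the feature names are hypothetical):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in features (not study data), e.g. nodule diameter, volume,
# attenuation, margin score. Label: 1 = resolving, 0 = non-resolving.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=300) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# A simple, model-agnostic explainability technique: permutation importance.
imp = permutation_importance(clf, X_test, y_test, n_repeats=20, random_state=0)
for name, score in zip(["diameter", "volume", "attenuation", "margin"],
                       imp.importances_mean):
    print(f"{name}: {score:.3f}")
```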
Background And Purpose: To compare the prediction performance of computed tomography (CT) image features extracted by radiomics, self-supervised learning, and end-to-end deep learning for local control (LC), regional control (RC), locoregional control (LRC), distant metastasis-free survival (DMFS), tumor-specific survival (TSS), overall survival (OS), and disease-free survival (DFS) of oropharyngeal squamous cell carcinoma (OPSCC) patients after (chemo)radiotherapy.
Methods And Materials: The OPC-Radiomics dataset was used for model development and independent internal testing, and the UMCG-OPC set for external testing. Image features were extracted from the Gross Tumor Volume contours of the primary tumor (GTVt) regions in CT scans when using radiomics or a self-supervised learning-based method (autoencoder).
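Self-supervised feature extraction with an autoencoder means training the network to reconstruct the GTV image patch and then using the bottleneck vector as the image feature, with no outcome labels required. An illustrative sketch (not the study's architecture):

```python
import torch
import torch.nn as nn

class PatchAutoencoder(nn.Module):
    """Illustrative autoencoder: the bottleneck acts as a learned image feature vector."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 8 * 8 * 8, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 8 * 8 * 8),
            nn.Unflatten(1, (32, 8, 8, 8)),
            nn.ConvTranspose3d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 2, stride=2),
        )

    def forward(self, x):
        z = self.encoder(x)           # self-supervised feature vector per patch
        return self.decoder(z), z

# Toy 32^3 CT patch cropped around the GTVt contour (synthetic values).
patch = torch.randn(4, 1, 32, 32, 32)
recon, features = PatchAutoencoder()(patch)
loss = nn.functional.mse_loss(recon, patch)   # reconstruction objective, no labels needed
```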
Background And Objective: Recently, deep learning (DL) algorithms have shown promise in predicting outcomes such as distant metastasis-free survival (DMFS) and overall survival (OS) from pre-treatment imaging in head and neck cancer. Segmentation of the Gross Tumor Volume of the primary tumor (GTVp) is used as an additional input channel to DL algorithms to improve model performance. However, the binary segmentation mask of the GTVp directs the focus of the network to the defined tumor region only and uniformly.
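The "additional channel" construction referred to here is simply channel-wise concatenation of the binary GTVp mask with the image volumes, which is what restricts the network's attention to the hard-masked region. A minimal sketch with synthetic tensors:

```python
import torch

# Hypothetical tensors: a PET patch, a CT patch, and the binary GTVp mask on the same grid.
pet  = torch.randn(1, 1, 48, 48, 48)
ct   = torch.randn(1, 1, 48, 48, 48)
gtvp = (torch.rand(1, 1, 48, 48, 48) > 0.9).float()   # 1 inside the contour, 0 outside

# The binary mask is stacked with the images along the channel dimension.
network_input = torch.cat([pet, ct, gtvp], dim=1)      # shape (1, 3, 48, 48, 48)
```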
Artificial intelligence (AI) has transitioned from the lab to the bedside, and it is increasingly being used in healthcare. Radiology and radiography are on the frontline of AI implementation because of the use of big data for medical imaging and diagnosis across different patient groups. Safe and effective AI implementation requires that responsible and ethical practices are upheld by all key stakeholders, that different professional groups collaborate harmoniously, and that customised educational provisions are in place for all involved.
Background: Accurate breast density evaluation allows for more precise risk estimation but suffers from high inter-observer variability.
Purpose: To evaluate the feasibility of reducing inter-observer variability of breast density assessment through artificial intelligence (AI)-assisted interpretation.
Study Type: Retrospective.
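Inter-observer variability for ordered density categories is commonly quantified with a weighted kappa. A sketch with hypothetical reader assignments (not study data); the study's own agreement metrics may differ:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical BI-RADS density categories (a-d) from two readers for ten exams.
reader_1 = ["a", "b", "b", "c", "c", "d", "b", "c", "a", "d"]
reader_2 = ["a", "b", "c", "c", "b", "d", "b", "c", "b", "d"]

# Weighted kappa rewards near-misses between ordered categories.
kappa = cohen_kappa_score(reader_1, reader_2, weights="linear")
print(f"linearly weighted kappa = {kappa:.2f}")
```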
Objectives: To investigate the intra- and inter-rater reliability of the total radiomics quality score (RQS) and the reproducibility of individual RQS item scores in a large multireader study.
Methods: Nine raters with different backgrounds were randomly assigned to three groups based on their proficiency with RQS utilization: Groups 1 and 2 represented the inter-rater reliability groups with or without prior training in RQS, respectively; group 3 represented the intra-rater reliability group. Thirty-three original research papers on radiomics were evaluated by raters of groups 1 and 2.
Objectives: To evaluate the performance of artificial intelligence (AI) software for automatic thoracic aortic diameter assessment in a heterogeneous cohort with low-dose, non-contrast chest computed tomography (CT).
Materials And Methods: Participants of the Imaging in Lifelines (ImaLife) study who underwent low-dose, non-contrast chest CT (August 2017-May 2022) were included, using random samples of 80 participants aged <50 years, aged ≥80 years, and with thoracic aortic diameter ≥40 mm. AI-based aortic diameters at eight guideline-compliant positions were compared with manual measurements.
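Agreement between AI-based and manual diameters can be summarized per position and at the clinically relevant dilation threshold. A sketch with made-up values (the study's own statistics may differ):

```python
import numpy as np

# Hypothetical AI and manual diameters (mm) at one landmark for a few participants.
manual = np.array([33.2, 41.5, 38.0, 44.1, 29.7, 36.3])
ai     = np.array([33.8, 40.9, 37.1, 44.6, 30.2, 37.0])

# Mean absolute difference as a simple per-position agreement summary.
mad = np.abs(ai - manual).mean()

# Agreement on the clinically relevant "dilated aorta" threshold (>=40 mm).
dilated_manual = manual >= 40
dilated_ai     = ai >= 40
threshold_agreement = (dilated_manual == dilated_ai).mean()

print(f"mean absolute difference = {mad:.1f} mm, "
      f"agreement on >=40 mm = {threshold_agreement:.0%}")
```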
Objectives: To develop a deep learning-based method for contrast-enhanced breast lesion detection in ultrafast screening MRI.
Materials And Methods: A total of 837 breast MRI exams of 488 consecutive patients were included. Lesion locations were independently annotated in the maximum intensity projection (MIP) image of the last time-resolved angiography with stochastic trajectories (TWIST) sequence for each individual breast, resulting in 265 lesions (190 benign, 75 malignant) in 163 breasts (133 women).
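A maximum intensity projection collapses the 3-D volume along one axis by taking the per-pixel maximum, producing the 2-D image in which the lesions were annotated. A minimal sketch with a synthetic volume:

```python
import numpy as np

# Hypothetical 3-D breast MRI volume (slices x height x width) from one time point.
volume = np.random.rand(60, 256, 256).astype(np.float32)

# Maximum intensity projection: collapse the slice axis with a per-pixel maximum,
# giving the 2-D MIP image on which lesion locations can be annotated.
mip = volume.max(axis=0)               # shape (256, 256)
```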
Background: Deep learning is an important means to realize the automatic detection, segmentation, and classification of pulmonary nodules in computed tomography (CT) images. An entire CT scan cannot directly be used by deep learning models due to image size, image format, image dimensionality, and other factors. Between the acquisition of the CT scan and feeding the data into the deep learning model, there are several steps including data use permission, data access and download, data annotation, and data preprocessing.
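The preprocessing steps referred to here typically include resampling to a common voxel spacing, intensity clipping to a Hounsfield-unit window, and normalization. An illustrative sketch (the function name, spacing, and window values are assumptions, not a fixed standard):

```python
import numpy as np
from scipy.ndimage import zoom

def preprocess_ct(volume_hu, spacing_mm, target_spacing_mm=(1.0, 1.0, 1.0),
                  hu_window=(-1000, 400)):
    """Typical preprocessing between a raw CT scan and a deep learning model.

    volume_hu  : 3-D array of Hounsfield units (z, y, x)
    spacing_mm : voxel spacing of the scan, e.g. (2.5, 0.7, 0.7)
    """
    # 1. Resample to an isotropic grid so all scans share one voxel size.
    factors = [s / t for s, t in zip(spacing_mm, target_spacing_mm)]
    resampled = zoom(volume_hu, factors, order=1)

    # 2. Clip to an HU window to suppress intensities irrelevant to the task.
    clipped = np.clip(resampled, *hu_window)

    # 3. Scale to [0, 1] so the network sees a consistent intensity range.
    lo, hi = hu_window
    return (clipped - lo) / (hi - lo)

toy_scan = np.random.randint(-1000, 400, size=(40, 128, 128)).astype(np.float32)
model_input = preprocess_ct(toy_scan, spacing_mm=(2.5, 0.7, 0.7))
```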
IEEE Trans Med Imaging, January 2024
Karyotyping is important for detecting chromosomal aberrations in human disease. However, chromosomes readily appear curved in microscopic images, which prevents cytogeneticists from analyzing chromosome types. To address this issue, we propose a framework for chromosome straightening that comprises a preliminary processing algorithm and a generative model called masked conditional variational autoencoders (MC-VAE).
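The MC-VAE itself is specific to the paper; the sketch below only shows the generic variational-autoencoder mechanics it builds on (Gaussian latent, reparameterization, reconstruction-plus-KL objective), using a toy fully connected model rather than the masked conditional design:

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """Minimal VAE skeleton (not the MC-VAE itself): encode to a Gaussian latent,
    reparameterize, decode, and optimize the ELBO."""
    def __init__(self, in_dim=64 * 64, latent=32):
        super().__init__()
        self.enc = nn.Linear(in_dim, 2 * latent)    # outputs mean and log-variance
        self.dec = nn.Linear(latent, in_dim)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        recon = torch.sigmoid(self.dec(z))
        return recon, mu, logvar

def vae_loss(recon, x, mu, logvar):
    rec = nn.functional.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

# Toy flattened chromosome images in [0, 1]; the real model conditions on masked
# inputs and uses convolutional encoders/decoders.
x = torch.rand(8, 64 * 64)
recon, mu, logvar = TinyVAE()(x)
loss = vae_loss(recon, x, mu, logvar)
loss.backward()
```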