Continuous Learning AI in Radiology: Implementation Principles and Early Applications.

Radiology

From the Department of Radiology, Massachusetts General Hospital, Harvard Medical School, 55 Fruit St, FND-210, Boston, MA 02114-2698 (O.S.P., J.A.B.); International Society for Strategic Studies in Radiology (IS3R), Vienna, Austria (M.D., D.R.E., C.J.H., S.O.S., J.A.B.); Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria (G.L., C.J.H.); Computer Science and Artificial Intelligence Lab, Massachusetts Institute of Technology, Boston, Mass (G.L.); Department of Radiology, Charité-Universitätsmedizin, Berlin, Germany (M.D.); Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, Calif (D.R.E.); and Institute of Clinical Radiology and Nuclear Medicine, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany (S.O.S.).

Published: October 2020

Artificial intelligence (AI) is becoming increasingly present in radiology and health care. This expansion is driven by AI's principal strengths: automation, accuracy, and objectivity. However, as radiology AI matures to become fully integrated into the daily radiology routine, it needs to move beyond replicating static models toward discovering new knowledge from the data and environments around it. Continuous learning AI presents the next substantial step in this direction and brings a new set of opportunities and challenges. Herein, the authors discuss the main concepts and requirements for implementing continuous learning AI in radiology and illustrate them with examples from emerging applications.
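The article treats continuous learning at the conceptual level; as a minimal, hypothetical sketch of one common pattern (replay-based fine-tuning with a performance gate before redeployment, not the authors' method), an update cycle for a radiology classifier could look like the following. The use of PyTorch, and all names and thresholds, are illustrative assumptions.

```python
# Hypothetical continual-learning update cycle for a radiology classifier:
# fine-tune on newly labeled studies mixed with a replay buffer of earlier
# cases (to limit forgetting), then gate redeployment on a validation check.
# update_model, REPLAY_FRACTION, MIN_VAL_AUC, etc. are illustrative names.
import random
import torch
from torch import nn
from torch.utils.data import ConcatDataset, DataLoader, Subset

REPLAY_FRACTION = 0.5   # share of earlier cases mixed into each update (assumed)
MIN_VAL_AUC = 0.85      # deployment gate on a fixed validation set (assumed)

def update_model(model: nn.Module, new_data, replay_buffer, evaluate_auc):
    """One update cycle: train on new + replayed cases, then validate."""
    n_replay = min(int(REPLAY_FRACTION * len(new_data)), len(replay_buffer))
    replay_idx = random.sample(range(len(replay_buffer)), n_replay)
    train_set = ConcatDataset([new_data, Subset(replay_buffer, replay_idx)])
    loader = DataLoader(train_set, batch_size=16, shuffle=True)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

    # Promote the updated model only if it still meets the performance gate.
    return model if evaluate_auc(model) >= MIN_VAL_AUC else None
```

The validation gate reflects the broader requirement that continuously updated models be monitored before each new version reaches clinical use.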

Source: http://dx.doi.org/10.1148/radiol.2020200038

Similar Publications

MetAssimulo 2.0: a web app for simulating realistic 1D & 2D Metabolomic 1H NMR spectra.

Bioinformatics

January 2025

Section of Bioinformatics, Division of Systems Medicine, Department of Metabolism, Digestion and Reproduction, Faculty of Medicine, Imperial College London, London, W12 0NN, United Kingdom.

Metabolomics extensively utilizes nuclear magnetic resonance (NMR) spectroscopy due to its excellent reproducibility and high throughput. Both one-dimensional (1D) and two-dimensional (2D) NMR spectra provide crucial information for metabolite annotation and quantification, yet they present complex overlapping patterns that may require sophisticated machine learning algorithms to decipher. Unfortunately, the limited availability of labeled spectra can hamper the application of machine learning, especially deep learning algorithms, which require large amounts of labeled data.
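MetAssimulo's actual simulation machinery is not reproduced here; purely as an illustration of the underlying idea, and under assumed, arbitrary peak parameters, a 1D spectrum can be approximated as a sum of Lorentzian lineshapes at chosen chemical shifts with added noise.

```python
# Generic illustration (not MetAssimulo's implementation): approximate a 1D
# NMR spectrum as a sum of Lorentzian peaks plus Gaussian noise.
import numpy as np

def lorentzian(ppm, center, height, width):
    """Lorentzian lineshape centered at `center` (ppm) with half-width `width`."""
    return height * width**2 / ((ppm - center) ** 2 + width**2)

ppm = np.linspace(0.0, 10.0, 2**14)            # chemical-shift axis in ppm
peaks = [(1.33, 1.0, 0.002),                   # (center, height, width): arbitrary values
         (3.05, 0.6, 0.002),
         (7.52, 0.3, 0.003)]
spectrum = sum(lorentzian(ppm, c, h, w) for c, h, w in peaks)
spectrum += np.random.normal(scale=0.01, size=ppm.size)   # instrument noise
```

Synthetic spectra generated this way come with known peak assignments, which is one route around the scarcity of labeled training data noted above.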

Background/objectives: Malnutrition and sarcopenia are interrelated health concerns among the elderly. Each condition is associated with increased mortality, morbidity, rehospitalization rates, longer hospital stays, higher healthcare costs, and reduced quality of life. Their combination leads to the development of "Malnutrition-Sarcopenia Syndrome" (MSS), characterized by reductions in body weight, muscle mass, strength, and physical function.

Artificial intelligence (AI), particularly through advanced large language model (LLM) technologies, is reshaping coal mine safety assessment methods with its powerful cognitive capabilities. Given the dynamic, multi-source, and heterogeneous characteristics of data in typical mining scenarios, traditional manual assessment methods are limited in their information-processing capacity and cost-effectiveness. This study addresses these challenges by proposing an embodied intelligent system for mine safety assessment based on multi-level LLMs applied to multi-source sensor data.
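The abstract does not specify the architecture; one hedged reading of a multi-level design is a two-stage pipeline in which a lower level summarizes each sensor stream and a higher level fuses those summaries into an overall risk assessment. In the sketch below, query_llm is a hypothetical stand-in for whatever model interface the system actually uses.

```python
# Purely illustrative two-level LLM pipeline over multi-source sensor data;
# query_llm() is a hypothetical stand-in, not an API from the paper.
from typing import Callable, Dict, List

def assess_mine_safety(sensor_streams: Dict[str, List[float]],
                       query_llm: Callable[[str], str]) -> str:
    # Level 1: summarize each heterogeneous sensor stream in isolation.
    summaries = []
    for name, readings in sensor_streams.items():
        prompt = (f"Summarize safety-relevant trends in these recent readings "
                  f"from sensor '{name}': {readings[-20:]}")
        summaries.append(f"{name}: {query_llm(prompt)}")
    # Level 2: fuse the per-sensor summaries into one overall assessment.
    fusion_prompt = ("Given these sensor summaries, rate the overall mine safety "
                     "risk (low/medium/high) and explain why:\n" + "\n".join(summaries))
    return query_llm(fusion_prompt)
```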

In Shift and In Variance: Assessing the Robustness of HAR Deep Learning Models Against Variability.

Sensors (Basel)

January 2025

Department of Electrical and Computer Engineering, Concordia University, Montreal, QC H3G 1M8, Canada.

Deep learning (DL)-based human activity recognition (HAR) using wearable inertial measurement unit (IMU) sensors can revolutionize continuous health monitoring and early disease prediction. However, most DL HAR models remain untested for robustness to real-world variability because they are trained on limited, lab-controlled data. In this study, we isolated and analyzed the effects of subject, device, position, and orientation variabilities on DL HAR models using the HARVAR and REALDISP datasets.
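The study's exact protocol is not detailed here; a standard way to isolate subject variability, sketched below assuming scikit-learn-style models and array inputs, is leave-one-subject-out evaluation. X, y, subjects, and build_model are placeholders rather than the paper's code.

```python
# Illustrative leave-one-subject-out evaluation to isolate subject variability;
# X (features), y (labels), subjects (group ids), and build_model() are placeholders.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

def evaluate_subject_variability(X, y, subjects, build_model):
    """Train on all subjects but one, test on the held-out subject, repeat."""
    scores = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
        model = build_model()
        model.fit(X[train_idx], y[train_idx])
        scores.append(model.score(X[test_idx], y[test_idx]))
    # A large spread across held-out subjects indicates sensitivity to subject variability.
    return float(np.mean(scores)), float(np.std(scores))
```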

Sensor-based gesture recognition on mobile devices is critical to human-computer interaction, enabling intuitive user input for various applications. However, current approaches often rely on server-based retraining whenever new gestures are introduced, incurring substantial energy consumption and latency due to frequent data transmission. To address these limitations, we present the first on-device continual learning framework for gesture recognition.
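The paper's on-device framework is not described in detail here; one ingredient such a framework plausibly needs is a bounded memory of past gesture examples so that updates fit on the device. The sketch below shows a generic reservoir-sampling buffer; the class and method names are illustrative, not the paper's API.

```python
# Illustrative bounded replay memory (reservoir sampling) for on-device
# continual learning of gestures; not taken from the paper.
import random

class ReservoirBuffer:
    """Keeps a fixed-size, uniformly sampled memory of examples seen so far."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.seen = 0
        self.items = []

    def add(self, example):
        """Insert a new example, evicting uniformly at random once full."""
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(example)
        else:
            j = random.randrange(self.seen)   # keep with probability capacity/seen
            if j < self.capacity:
                self.items[j] = example

    def sample(self, k: int):
        """Draw up to k stored examples to mix into the next on-device update."""
        return random.sample(self.items, min(k, len(self.items)))
```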
