Deep Learning Methods for Detecting Side Effects of Cancer Chemotherapies Reported in a Remote Monitoring Web Application.

Stud Health Technol Inform

Service de médecine interne, Hôpital Antoine-Béclère, Assistance Publique Hôpitaux de Paris, Clamart, France.

Published: May 2022

Article Abstract

The objective of our work was to develop deep learning methods for extracting and normalizing patient-reported free-text side effects in a remote monitoring web application for cancer chemotherapy side effects. The F-measure was 0.79 for the medical concept extraction model and 0.85 for the negation extraction model (both Bi-LSTM-CRF). The next step was normalization: of the 1040 unique concepts in the dataset, 62.3% scored 1 (corresponding to a perfect match with a UMLS CUI). These methods need to be improved before they can be integrated into home telemonitoring devices for automatic notification of hospital oncologists.


Source
http://dx.doi.org/10.3233/SHTI220616

Publication Analysis

Top Keywords

deep learning (8), learning methods (8), side effects (8), effects cancer (8), remote monitoring (8), monitoring web (8), web application (8), extraction model (8), methods detecting (4), detecting side (4)

Similar Publications

Purpose: To develop an artificial intelligence (AI) algorithm for automated measurements of spinopelvic parameters on lateral radiographs and compare its performance to multiple experienced radiologists and surgeons.

Methods: On lateral full-spine radiographs of 295 consecutive patients, a two-staged region-based convolutional neural network (R-CNN) was trained to detect anatomical landmarks and calculate thoracic kyphosis (TK), lumbar lordosis (LL), sacral slope (SS), and sagittal vertical axis (SVA). Performance was evaluated on 65 radiographs not used for training, which were measured independently by 6 readers (3 radiologists, 3 surgeons), and the median per measurement was set as the reference standard.
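The snippet above derives spinopelvic parameters from detected landmarks. As a minimal sketch of one such parameter, the sagittal vertical axis (SVA) is conventionally the horizontal offset between the C7 plumb line and the posterosuperior corner of S1; the coordinates below are hypothetical, and a real pipeline would first convert pixel coordinates to millimetres:

```python
# Minimal sketch of one spinopelvic parameter, assuming the image x-axis
# increases anteriorly and coordinates are already in millimetres.

def sagittal_vertical_axis(c7_centroid: tuple[float, float],
                           s1_posterior_corner: tuple[float, float]) -> float:
    """SVA: horizontal offset of the C7 plumb line from the
    posterosuperior corner of S1 (positive = C7 anterior to S1)."""
    return c7_centroid[0] - s1_posterior_corner[0]

# Hypothetical landmark positions (x, y) in mm:
sva = sagittal_vertical_axis((120.0, 40.0), (85.0, 410.0))  # 35.0 mm
```

The angular parameters (TK, LL, SS) would similarly be computed from pairs of landmark-defined lines; the paper's exact landmark definitions are not given in the snippet.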


A multicenter study of neurofibromatosis type 1 utilizing deep learning for whole body tumor identification.

NPJ Digit Med

January 2025

Neurofibromatosis Type 1 Center and Laboratory for Neurofibromatosis Type 1 Research, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200011, China.

Deep-learning models have shown promise in differentiating between benign and malignant lesions. Previous studies have primarily focused on specific anatomical regions, overlooking tumors occurring throughout the body with highly heterogeneous whole-body backgrounds. Using neurofibromatosis type 1 (NF1) as an example, this study developed highly accurate MRI-based deep-learning models for the early automated screening of malignant peripheral nerve sheath tumors (MPNSTs) against complex whole-body background.


We aimed to build a robust classifier for the MGMT methylation status of glioblastoma in multiparametric MRI, focusing on multi-habitat deep image descriptors. A subset of the BRATS 2021 MGMT methylation dataset containing both MGMT class labels and segmentation masks was used.


Exploring the potential of advanced artificial intelligence technology for predicting microsatellite instability (MSI) and Ki-67 expression in endometrial cancer (EC) is highly significant. This study aimed to develop a novel hybrid radiomics approach integrating multiparametric magnetic resonance imaging (MRI), deep learning, and multichannel image analysis for predicting MSI and Ki-67 status. A retrospective study included 156 EC patients, who were subsequently categorized into MSI and Ki-67 groups.


To address the limitations of the flipped classroom in personalized teaching and interactivity, this paper designs a new Virtual Reality (VR)-based flipped-classroom model for colleges and universities that incorporates the Contrastive Language-Image Pre-Training (CLIP) algorithm. Through cross-modal data fusion, the model tightly couples students' operational behavior with the teaching content and improves teaching effectiveness through an intelligent feedback mechanism. The test data show that the similarity between the video and image modalities reaches 0.

