Using Multimodal Data to Improve Precision of Inpatient Event Timelines.

Adv Knowl Discov Data Min


Published: May 2024

Textual data often describe events in time but frequently contain little information about their specific timing, whereas complementary structured data streams may have precise timestamps but may omit important contextual information. We investigate this problem in healthcare, producing clinician annotations of discharge summaries with access to either unimodal (text) or multimodal (text and tabular) data, (i) to determine event interval timings and (ii) to train multimodal language models to locate those events in time. We find that our annotation procedures, dashboard tools, and annotations yield high-quality timestamps. Specifically, the multimodal approach produces more precise timestamping, with uncertainties of the lower bounds, upper bounds, and durations reduced by 42% (95% CI 34-51%), 36% (95% CI 28-44%), and 13% (95% CI 10-17%), respectively. In the classification version of our task, we find that, trained on our annotations, our multimodal BERT model outperforms a unimodal BERT model and Llama-2 encoder-decoder models, with improvements in F1 scores for upper (10% and 61%, respectively) and lower bounds (8% and 56%, respectively). The code for the annotation tool and the BERT model is available (link).
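
As a rough illustration of the classification setup described in the abstract, the sketch below fuses a BERT [CLS] embedding with a small tabular branch to predict a temporal-bound class for an event mention. The class name MultimodalBoundClassifier, the feature dimensions, the number of output classes, and the late-fusion design are illustrative assumptions, not the paper's released architecture.

    # Hypothetical sketch: late fusion of a BERT text encoder with tabular features
    # to classify the temporal bound of an event mention. Dimensions and class
    # counts are placeholders, not the paper's exact configuration.
    import torch
    import torch.nn as nn
    from transformers import AutoModel, AutoTokenizer

    class MultimodalBoundClassifier(nn.Module):
        def __init__(self, text_model="bert-base-uncased", n_tabular=16, n_classes=5):
            super().__init__()
            self.encoder = AutoModel.from_pretrained(text_model)      # text branch
            self.tabular = nn.Sequential(                             # tabular branch
                nn.Linear(n_tabular, 64), nn.ReLU(), nn.Linear(64, 64)
            )
            hidden = self.encoder.config.hidden_size
            self.head = nn.Linear(hidden + 64, n_classes)             # fused classifier

        def forward(self, input_ids, attention_mask, tabular_feats):
            out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
            cls = out.last_hidden_state[:, 0]        # [CLS] representation of the note text
            tab = self.tabular(tabular_feats)        # encoded structured (tabular) signals
            return self.head(torch.cat([cls, tab], dim=-1))

    # Example usage with a toy discharge-summary sentence and random tabular features.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    batch = tokenizer(["Patient was extubated on hospital day 2."],
                      return_tensors="pt", padding=True, truncation=True)
    feats = torch.randn(1, 16)                       # placeholder structured features
    logits = MultimodalBoundClassifier()(batch["input_ids"], batch["attention_mask"], feats)

Concatenating the pooled text representation with an encoded tabular vector before a single classification head is one common way to combine modalities; the paper's actual fusion strategy may differ.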


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11228894
DOI: http://dx.doi.org/10.1007/978-981-97-2238-9_25

Publication Analysis

Top Keywords
  bert model           12
  events time           8
  multimodal            5
  multimodal data       4
  data improve          4
  improve precision     4
  precision inpatient   4
  inpatient event       4
  event timelines       4
  timelines textual     4

