In the future, large language models (LLMs) may enhance the delivery of healthcare, but there are risks of misuse. These models could be trained to allocate resources using unjust criteria drawn from multimodal data: financial transactions, internet activity, social behaviors, and healthcare information. This study shows that LLMs may be biased in favor of collective/systemic benefit over the protection of individual rights and could facilitate AI-driven social credit systems.
Publicly available audio data presents a unique opportunity for developing digital health technologies with large language models (LLMs). In this study, YouTube was mined to collect audio from individuals with self-declared positive COVID-19 tests, as well as from individuals with other upper respiratory infections (URIs) and healthy subjects discussing a diverse range of topics. The resulting dataset was transcribed with the Whisper model and used to assess the capacity of LLMs to detect self-reported COVID-19 cases and perform variant classification.
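The abstract does not include the authors' collection or transcription code; the following is a minimal sketch of that kind of pipeline, assuming the yt-dlp and openai-whisper Python packages. The video URL and output filenames are placeholders, not sources from the study.

```python
# Minimal sketch (not the authors' code): download the audio track of a
# YouTube video with yt-dlp, then transcribe it with OpenAI's open-source
# Whisper model.
import whisper
import yt_dlp

VIDEO_URL = "https://www.youtube.com/watch?v=EXAMPLE_ID"  # placeholder URL

# Download audio only and convert it to WAV via ffmpeg.
ydl_opts = {
    "format": "bestaudio/best",
    "outtmpl": "clip.%(ext)s",
    "postprocessors": [{
        "key": "FFmpegExtractAudio",
        "preferredcodec": "wav",
    }],
}
with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    ydl.download([VIDEO_URL])

# Transcribe with a small Whisper checkpoint; larger checkpoints
# ("medium", "large") trade speed for accuracy.
model = whisper.load_model("base")
result = model.transcribe("clip.wav")
print(result["text"])
```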
Background: Excessive electronic health record (EHR) alerts reduce the salience of actionable alerts. Little is known about the frequency of interruptive alerts across health systems and how the choice of metric affects which users appear to have the highest alert burden.
Objective: (1) Analyze alert burden by alert type, care setting, provider type, and individual provider across 6 pediatric health systems.
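To make the metric-choice point concrete, here is a toy illustration (not data from the study) of how raw alert counts and per-order alert rates can rank providers differently; the column names and numbers are hypothetical.

```python
# Toy example: the provider with the most alerts is not necessarily the
# one with the highest alert rate, so the chosen metric shifts who
# appears most burdened.
import pandas as pd

df = pd.DataFrame({
    "provider": ["A", "B", "C"],
    "alerts":   [900, 300, 450],         # interruptive alerts received
    "orders":   [30000, 4000, 9000],     # orders placed in the same period
})

df["alerts_per_100_orders"] = 100 * df["alerts"] / df["orders"]

# Provider A leads on raw counts, but B has the highest per-order rate.
print(df.sort_values("alerts", ascending=False))
print(df.sort_values("alerts_per_100_orders", ascending=False))
```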
With the growth of consumer health technologies, patients and caregivers have become increasingly involved in their health and medical care. Such health-related engagement often occurs at home. Pregnancy is a common condition and, for many women, marks their first exposure to health management practices.