Simulated Misuse of Large Language Models and Clinical Credit Systems.

medRxiv

Center for Interventional Oncology, NIH Clinical Center, National Institutes of Health (NIH), Bethesda, Maryland, USA.

Published: September 2024

Large language models (LLMs) have been proposed to support many healthcare tasks, including disease diagnostics and treatment personalization. While AI may be applied to assist or enhance the delivery of healthcare, there is also a risk of misuse. LLMs could be used to allocate resources via unfair, unjust, or inaccurate criteria. For example, a social credit system uses big data to assess "trustworthiness" in society, penalizing those who score poorly based on evaluation metrics defined only by a power structure (e.g., a corporate entity or governing body). Such a system may be amplified by powerful LLMs which can evaluate individuals based on multimodal data - financial transactions, internet activity, and other behavioral inputs. Healthcare data is perhaps the most sensitive information which can be collected and could potentially be used to violate civil liberty or other rights via a "clinical credit system", which may include limiting access to care. The results of this study show that LLMs may be biased in favor of collective or systemic benefit over protecting individual rights, potentially enabling this type of future misuse. Moreover, experiments in this report simulate how clinical datasets might be exploited with current LLMs, demonstrating the urgency of addressing these ethical dangers. Finally, strategies are proposed to mitigate the risk of developing large AI models for healthcare.

Download full-text PDF

Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11030492PMC
http://dx.doi.org/10.1101/2024.04.10.24305470DOI Listing

Publication Analysis

Top Keywords

large language
8
language models
8
llms
5
simulated misuse
4
misuse large
4
models clinical
4
clinical credit
4
credit systems
4
systems large
4
models llms
4

Similar Publications

Want AI Summaries of new PubMed Abstracts delivered to your In-box?

Enter search terms and have AI summaries delivered each week - change queries or unsubscribe any time!