Researchers are studying how artificial intelligence (AI) can be used to better detect, prognosticate and subgroup diseases. The idea that AI might advance medicine's understanding of biological categories of psychiatric disorders, as well as provide better treatments, is appealing given the historical challenges with prediction, diagnosis and treatment in psychiatry. Given the power of AI to analyse vast amounts of information, some clinicians may feel obligated to align their clinical judgements with the outputs of the AI system. However, a potential epistemic privileging of AI in clinical judgements may lead to unintended consequences that could negatively affect patient treatment, well-being and rights. The implications are also relevant to precision medicine, digital twin technologies and predictive analytics generally. We propose that a commitment to epistemic humility can help promote judicious clinical decision-making at the interface of big data and AI in psychiatry.

Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10423547 (PMC)
http://dx.doi.org/10.1136/jme-2022-108447 (DOI)

