Background: Psychiatry is a specialized field of medicine that focuses on the diagnosis, treatment, and prevention of mental health disorders. With advancements in technology and the rise of artificial intelligence (AI), there has been growing interest in exploring the potential of AI language model systems, such as Chat Generative Pre-trained Transformer (ChatGPT), to assist in the field of psychiatry.

Objective: Our study aimed to evaluate the effectiveness, reliability, and safety of ChatGPT in assisting patients with mental health problems, and to assess its potential as a collaborative tool for mental health professionals, through simulated interactions with three distinct imaginary patients.

Methods: Three imaginary patient scenarios (cases A, B, and C) were created, representing different mental health problems. All three patients present with, and seek to eliminate, the same chief complaint (i.e., difficulty falling asleep and waking up frequently during the night over the last 2 weeks). ChatGPT was engaged as a virtual psychiatric assistant to provide responses and treatment recommendations.

Results: In case A, the recommendations were relatively appropriate (albeit non-specific) and could potentially be beneficial for both users and clinicians. However, as the complexity of the clinical cases increased (cases B and C), the information and recommendations generated by ChatGPT became inappropriate, even dangerous, and the limitations of the program became more glaring. The main strengths of ChatGPT lie in its ability to provide quick responses to user queries and to simulate empathy. One notable limitation is ChatGPT's inability to interact with users to collect further information relevant to the diagnosis and management of a patient's clinical condition. Another serious limitation is ChatGPT's inability to use critical thinking and clinical judgment to guide patient management.

Conclusion: As of July 2023, ChatGPT failed to give simple medical advice in certain clinical scenarios. This suggests that the quality of ChatGPT-generated content is still far from being a reliable guide for users and professionals seeking accurate mental health information. It remains, therefore, premature to draw conclusions about the usefulness and safety of ChatGPT in mental health practice.

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10794665
DOI: http://dx.doi.org/10.3389/fpsyt.2023.1277756
