Introduction: In recent years, artificial intelligence (AI) has seen substantial progress and increasingly widespread use, with Chat Generative Pre-trained Transformer (ChatGPT) emerging as a popular language model. The purpose of this study was to test the accuracy and reliability of ChatGPT's responses to frequently asked questions (FAQs) pertaining to reverse shoulder arthroplasty (RSA).

Methods: The ten most common FAQs were identified from institutional patient education websites. These ten questions were then entered into the chatbot during a single session without additional contextual information. Two orthopedic surgeons critically analyzed the responses for clarity, accuracy, and quality of evidence-based information using the Journal of the American Medical Association (JAMA) Benchmark criteria and the DISCERN score. The readability of the responses was assessed using the Flesch-Kincaid Grade Level.
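
For context, the Flesch-Kincaid Grade Level estimates the U.S. school grade needed to understand a text from average sentence length and average syllables per word. Below is a minimal Python sketch of the standard formula with a naive vowel-group syllable counter; the abstract does not state which tool the authors used, so this illustrates the metric rather than the study's actual pipeline.

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count groups of consecutive vowels and drop one
    # trailing silent 'e'. Real readability tools use dictionaries.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    # Standard formula:
    # 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n_sent, n_words = max(len(sentences), 1), max(len(words), 1)
    return 0.39 * (n_words / n_sent) + 11.8 * (syllables / n_words) - 15.59
```

A score near 14, as reported in the Results below, indicates text readable at roughly a college level.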

Results: In response to the ten questions, the average DISCERN score was 44 (range 38-51). Seven responses were classified as fair and three as poor. The JAMA Benchmark criteria score was 0 for all responses, indicating that none of the four criteria (authorship, attribution, disclosure, currency) were met. Furthermore, the average Flesch-Kincaid Grade Level was 14.35, which corresponds to a college-graduate reading level.
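
For readers unfamiliar with DISCERN, it is a 16-item instrument scored 1-5 per item, giving totals from 16 to 80 that are conventionally grouped into quality bands. A small sketch of that conventional mapping follows; the cut-offs are an assumption based on common usage in the literature, as the abstract does not state which bands the authors applied.

```python
def discern_quality(total_score: int) -> str:
    # Assumption: the conventional DISCERN bands; the abstract does not
    # state the exact cut-offs the authors used.
    if total_score >= 63:
        return "excellent"
    if total_score >= 51:
        return "good"
    if total_score >= 39:
        return "fair"
    if total_score >= 27:
        return "poor"
    return "very poor"

print(discern_quality(44))  # the reported average of 44 maps to "fair"
```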

Conclusion: Overall, ChatGPT was able to provide fair responses to common patient questions. However, the responses were all written at a college-graduate reading level and lacked reliable citations, and this poor readability greatly limits their utility. Orthopedic surgeons should therefore continue to provide thorough patient education themselves. This study underscores the need for patient education resources that are reliable, accessible, and comprehensible.

Level of Evidence: IV.

Source: http://dx.doi.org/10.1016/j.jisako.2024.100323

Similar Publications

Background: Addressing language barriers through accurate interpretation is crucial for providing quality care and establishing trust. While the ability of artificial intelligence (AI) to translate medical documentation has been studied, its role in patient-provider communication is less explored. This review evaluates AI's effectiveness in clinical translation by assessing accuracy, usability, satisfaction, and feedback on its use.

Advances in artificial intelligence (AI), machine learning, and publicly accessible language model tools such as ChatGPT-3.5 continue to shape the landscape of modern medicine and patient education. ChatGPT's open-access (OA), instant, human-sounding interface, capable of carrying on discussions about myriad topics, makes it a potentially useful resource for patients seeking medical advice.

Background and Objective: Colorectal cancer (CRC) has the third highest incidence in the Philippines. Currently, there is a paucity of literature focused on the knowledge, attitudes, and perceptions of Filipinos regarding CRC screening. This is the first study in the Philippines to describe these.

Objective: To assess the feasibility and acceptability of adapting a psychoeducation course, Body Reprogramming (BR), for severe asthma, and to gather suggestions for improvement.

Methods: Severe asthma patients were recruited from a single centre and enrolled in an online group-based course. Each course consisted of four sessions: introduction to BR, stress, exercise, and diet.
