Accuracy of chatbot-generated references in the field of oral oncology: Exercising caution.

Oral Dis

Research and Developmental Cell, Dr. D.Y. Patil Vidyapeeth, Pune, Maharashtra, India.

Published: October 2024


Source: http://dx.doi.org/10.1111/odi.14880


Similar Publications

Purpose: To evaluate the accuracy, comprehensiveness, empathetic tone, and patient preference for AI and urologist responses to patient messages concerning common benign prostatic hyperplasia (BPH) questions across phases of care.

Methods: Cross-sectional study evaluating responses to 20 BPH-related questions generated by 2 AI chatbots and 4 urologists in a simulated clinical messaging environment without direct patient interaction. Accuracy, completeness, and empathetic tone of responses were assessed by experts using Likert scales, and patient preferences and perceptions of authorship (chatbot vs. urologist) were also evaluated.


Background: Interactive artificial intelligence tools such as ChatGPT have gained popularity, yet little is known about their reliability as a reference tool for healthcare-related information for healthcare providers and trainees. The objective of this study was to assess the consistency, quality, and accuracy of the responses generated by ChatGPT on healthcare-related inquiries.

Methods: A total of 18 open-ended questions, six in each of three defined clinical areas (two each addressing "what", "why", and "how"), were submitted to ChatGPT v3.


ChatGPT and Google Bard™ are popular artificial intelligence chatbots with utility for patients, including those undergoing aesthetic facial plastic surgery. This study compared the accuracy and readability of chatbot-generated responses to patient education questions regarding aesthetic facial plastic surgery, using a response accuracy scale and readability testing. ChatGPT and Google Bard™ were asked 28 identical questions using four prompts: none, patient friendly, eighth-grade level, and references.


Medical practitioners are increasingly using artificial intelligence (AI) chatbots for easier and faster access to information. To our knowledge, the accuracy and availability of AI-generated chemotherapy protocols have not yet been studied. Nine simulated cancer patient cases were designed, and AI chatbots, ChatGPT version 3.


Purpose: Patients are using online search modalities to learn about their eye health. While Google remains the most popular search engine, the use of large language models (LLMs) like ChatGPT has increased. Cataract surgery is the most common surgical procedure in the US, and there are limited data on the quality of the online information returned by cataract surgery searches on search engines such as Google and on LLM platforms such as ChatGPT.

