Background: This study assessed the consistency and accuracy of responses provided by two artificial intelligence (AI) applications, ChatGPT and Google Bard (Gemini), to questions related to dental trauma.
Materials and Methods: Based on the International Association of Dental Traumatology guidelines, 25 dichotomous (yes/no) questions were posed to ChatGPT and Google Bard over 10 days. The responses were recorded and compared with the correct answers. Statistical analyses, including Fleiss' kappa, were conducted to assess the agreement and consistency of the responses.
Results: Analysis of 4500 responses revealed that both applications provided correct answers to 57.5% of the questions. Google Bard demonstrated a moderate level of agreement, with varying rates of incorrect answers and referrals to physicians.
Conclusions: Although ChatGPT and Google Bard are potential knowledge resources, their consistency and accuracy in responding to dental trauma queries remain limited. Further research involving specially trained AI models in endodontics is warranted to assess their suitability for clinical use.
DOI: http://dx.doi.org/10.1111/edt.12965
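For readers unfamiliar with the agreement statistic used in the study above, Fleiss' kappa measures chance-corrected agreement across repeated ratings of the same items. The sketch below shows one plausible way to compute it for this kind of repeated yes/no data; it is a minimal illustration, not the authors' code, and uses simulated 0/1 answers (25 questions × 10 daily responses) with the statsmodels library.

```python
# Minimal sketch, not the study's code: estimating Fleiss' kappa for
# repeated yes/no chatbot answers. The data below are simulated.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(0)
# Rows = 25 questions, columns = 10 daily answers (0 = "no", 1 = "yes").
answers = rng.integers(0, 2, size=(25, 10))

# Convert the question-by-day ratings into a question-by-category count table.
counts, _ = aggregate_raters(answers)
print(f"Fleiss' kappa: {fleiss_kappa(counts, method='fleiss'):.3f}")
```

Values near 1 indicate near-perfect consistency across repeated queries, while values near 0 indicate agreement no better than chance.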
AJOG Glob Rep
February 2025
University of Texas Southwestern, Dallas, TX (Cohen, Ho, McIntire, Smith, and Kho).
Introduction: The use of generative artificial intelligence (AI) has begun to permeate most industries, including medicine, and patients will inevitably turn to these large language model (LLM) chatbots as an educational resource. As healthcare information technology evolves, it is imperative to evaluate the accuracy of the information chatbots provide to patients and to determine whether it varies between them.
Objective: This study aimed to evaluate the accuracy and comprehensiveness of three chatbots in addressing questions related to endometriosis and to determine the level of variability between them.
BMC Oral Health
January 2025
Department of Endodontics, Faculty of Dentistry, Marmara University, Başıbüyük Sağlık Yerleşkesi 9/3, Başıbüyük, Maltepe, PO Box 34854, İstanbul, Turkey.
Introduction: The integration of artificial intelligence (AI) technologies in healthcare is revolutionizing the workflows of healthcare professionals, enabling faster and more accurate patient treatment. This study aims to evaluate the accuracy of responses provided by different AI chatbots to questions that dentists might ask regarding regenerative endodontic treatment (RET), a procedure that shows promising biological healing potential.
Methods: A total of 23 questions related to RET procedures were developed based on the American Association of Endodontists (AAE) 2022 guidelines.
Background: The COVID-19 pandemic has significantly strained healthcare systems globally, leading to an overwhelming influx of patients and exacerbating resource limitations. Concurrently, an "infodemic" of misinformation, particularly prevalent in women's health, has emerged. Addressing it has been a pivotal challenge for healthcare providers, especially gynecologists and obstetricians, in managing pregnant women's health.
JMIR Dermatol
January 2025
Skin Refinery PLLC, Spokane, WA, United States.
Our team explored the utility of the unpaid versions of 3 artificial intelligence chatbots in offering patient-facing responses to questions about 5 common dermatological diagnoses. We highlight the strengths and limitations of each chatbot and demonstrate that chatbots show the most potential when used in tandem with a dermatologist's diagnosis.
Purpose: Caregivers in pediatric oncology need accurate and understandable information about their child's condition, treatment, and side effects. This study assesses the performance of publicly accessible large language model (LLM)-supported tools in providing valuable and reliable information to caregivers of children with cancer.
Methods: In this cross-sectional study, we evaluated the performance of four LLM-supported tools, ChatGPT (GPT-4), Google Bard (Gemini Pro), Microsoft Bing Chat, and Google SGE, against a set of frequently asked questions (FAQs) derived from the Children's Oncology Group Family Handbook and expert input (26 FAQs and 104 generated responses in total).
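As an illustration of how per-tool performance can be tabulated in this kind of evaluation (26 FAQs × 4 tools = 104 responses), here is a hypothetical sketch; the binary "accurate" rating column and the placeholder scores are assumptions for illustration, not the study's data or analysis pipeline.

```python
# Hypothetical sketch, not the study's pipeline: tallying binary accuracy
# ratings for 104 responses (26 FAQs x 4 tools) and comparing tools.
import pandas as pd

tools = ["ChatGPT (GPT-4)", "Google Bard", "Bing Chat", "Google SGE"]
ratings = pd.DataFrame({
    "tool": tools * 26,                                  # 4 tools per FAQ
    "faq_id": [i for i in range(26) for _ in range(4)],  # 26 FAQs
    "accurate": [1, 1, 0, 1] * 26,                       # placeholder ratings
})

# Percentage of responses rated accurate, per tool.
print(ratings.groupby("tool")["accurate"].mean().mul(100).round(1))
```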