Statement Of Problem: Artificial intelligence (AI) has gained significant recent attention, and several AI applications, such as large language models (LLMs), are promising for use in clinical medicine and dentistry. Nevertheless, assessing the performance of LLMs is essential to identify potential inaccuracies and prevent harmful outcomes.
Purpose: The purpose of this study was to evaluate and compare the evidence-based potential of answers provided by 4 LLMs to clinical questions in the field of implant dentistry.
Material And Methods: A total of 10 open-ended questions pertinent to the prevention and treatment of peri-implant disease were posed to 4 distinct LLMs: ChatGPT 4.0, Google Gemini, Google Gemini Advanced, and Microsoft Copilot. The answers were evaluated independently by 2 periodontists against scientific evidence for comprehensiveness, scientific accuracy, clarity, and relevance. The LLM responses received scores ranging from 0 (minimum) to 10 (maximum) points. To assess intra-evaluator reliability, a re-evaluation of the LLM responses was performed after 2 weeks, and the Cronbach α and intraclass correlation coefficient (ICC) were used (α=.05).
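The reliability analysis above relies on Cronbach α across the two rating occasions. As an illustration only (not the authors' actual analysis code, and with hypothetical scores), Cronbach's α for repeated ratings can be computed from the standard formula α = k/(k−1) · (1 − Σ item variances / total variance):

```python
from statistics import pvariance

def cronbach_alpha(ratings):
    """Cronbach's alpha for k rating occasions.

    ratings: list of k lists, one per occasion, each holding the
    scores given to the same set of answers (hypothetical data here).
    """
    k = len(ratings)
    # Sum of the variances of each occasion's scores
    item_vars = sum(pvariance(occasion) for occasion in ratings)
    # Variance of the per-answer total across occasions
    totals = [sum(scores) for scores in zip(*ratings)]
    total_var = pvariance(totals)
    return k / (k - 1) * (1 - item_vars / total_var)

# Two perfectly consistent rating occasions yield alpha = 1.0
print(cronbach_alpha([[8, 6, 9, 7], [8, 6, 9, 7]]))  # → 1.0
```

Values near 1 indicate that the evaluators' scores were stable between the two occasions, which is the property the study's 2-week re-evaluation was designed to check.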
Results: The scores assigned by the examiners on the 2 occasions were not statistically different, and an average score was calculated for each LLM. Google Gemini Advanced ranked higher than the rest of the LLMs, while Google Gemini scored worst. The difference between Google Gemini Advanced and Google Gemini was statistically significant (P=.005).
Conclusions: Dental professionals need to be cautious when using LLMs to access content related to peri-implant diseases. LLMs cannot currently replace dental professionals, and caution should be exercised when they are used in patient care.
DOI: http://dx.doi.org/10.1016/j.prosdent.2025.02.008
J Med Internet Res
March 2025
Department of Thoracic Surgery, West China Hospital of Sichuan University, Chengdu, China.
Background: Systematic reviews and meta-analyses rely on labor-intensive literature screening. While machine learning offers potential automation, its accuracy remains suboptimal. This raises the question of whether emerging large language models (LLMs) can provide a more accurate and efficient approach.
Indian J Otolaryngol Head Neck Surg
January 2025
Department of Otolaryngology-Head and Neck Surgery, Rutgers New Jersey Medical School, Newark, New Jersey, USA.
Recently, artificial intelligence (AI) platforms such as ChatGPT and Google Gemini have progressed at a rapid pace. To allow for optimal medical outcomes and patient safety, it is crucial that patients have clearly written post-operative instructions. Patients are increasingly turning to AI platforms for medical information.
J Prosthet Dent
March 2025
Associate Professor, Department of Preventive Dentistry, Periodontology and Implant Biology, School of Dentistry, Aristotle University of Thessaloniki, Greece; Associate Professor, School of Dentistry, European University Cyprus, Nicosia, Cyprus; and Adjunct Associate Professor, Hamdan bin Mohammed College of Dental Medicine, Mohammed bin Rashid University of Medicine and Health Sciences (MBRU), Dubai, United Arab Emirates.
Front Med (Lausanne)
February 2025
Division of Pulmonary, Critical Care, and Sleep Medicine, School of Medicine, Case Western Reserve University, Cleveland, OH, United States.
Background: Artificial intelligence (AI) is revolutionizing medical education; however, its limitations remain underexplored. This study evaluated the accuracy of three generative AI tools-ChatGPT-4, Copilot, and Google Gemini-in answering multiple-choice questions (MCQ) and short-answer questions (SAQ) related to cardiovascular pharmacology, a key subject in healthcare education.
Methods: Using free versions of each AI tool, we administered 45 MCQs and 30 SAQs across three difficulty levels: easy, intermediate, and advanced.
Vox Sang
March 2025
Department of Haematology, Sultan Qaboos University Hospital, University Medical City, Muscat, Oman.
Background And Objectives: The recent rise of artificial intelligence (AI) chatbots has attracted many users worldwide. However, expert evaluation is essential before relying on them for transfusion medicine (TM)-related information. This study aims to evaluate the performance of AI chatbots for accuracy, correctness, completeness and safety.