Assessing the quality of AI information from ChatGPT regarding oral surgery, preventive dentistry, and oral cancer: An exploration study.

AI Article Synopsis

  • This study evaluated the quality of dental information produced by ChatGPT, focusing on areas like oral surgery, preventive dentistry, and oral cancer, using a standardized scoring system based on accuracy, clarity, and comprehensibility.
  • ChatGPT performed best in preventive dentistry with a score of 4.3/5 but showed lower accuracy in oral surgery (3.9/5) and oral cancer (3.6/5), highlighting gaps in post-operative guidance and risk assessments.
  • The results emphasize the importance of professional oversight when using AI for dental information and advise caution so that it is used responsibly and effectively in patient care.

Article Abstract

Aim: To evaluate the quality of dental information produced by the ChatGPT artificial intelligence language model within the context of oral surgery, preventive dentistry, and oral cancer.

Methodology: This study adopted a quantitative methods approach. The experts prepared 50 questions (covering dimensions of risk factors, preventive measures, diagnostic methods, and treatment options) to be presented to ChatGPT, and its responses were rated for accuracy, completeness, relevance, clarity or comprehensibility, and possible risks using a standardized scoring rubric. The evaluation process included feedback on the strengths, weaknesses, and potential areas of improvement in the responses provided by ChatGPT.
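The study does not include analysis code; as a purely illustrative aid, the following is a minimal sketch, with hypothetical field names and made-up example values, of how per-question rubric ratings could be aggregated into the per-domain average scores and risk percentages reported in the Results below.

# Hypothetical sketch only: field names, example values, and the aggregation
# below are illustrative assumptions, not the study's actual data or code.
from statistics import mean

# One record per rated ChatGPT response: domain, rubric score out of 5,
# and whether a potential risk (e.g., lack of individualized advice) was flagged.
ratings = [
    {"domain": "preventive dentistry", "score": 4.5, "risk_flagged": False},
    {"domain": "oral surgery", "score": 3.8, "risk_flagged": True},
    {"domain": "oral cancer", "score": 3.5, "risk_flagged": True},
    # ... one entry per question/response pair
]

def summarize(ratings):
    # Per-domain mean rubric score and share of responses with a flagged risk.
    summary = {}
    for domain in {r["domain"] for r in ratings}:
        rows = [r for r in ratings if r["domain"] == domain]
        summary[domain] = {
            "mean_score": round(mean(r["score"] for r in rows), 1),
            "risk_rate_pct": round(100 * sum(r["risk_flagged"] for r in rows) / len(rows)),
        }
    return summary

print(summarize(ratings))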

Results: While achieving the highest score for preventive dentistry at 4.3/5 and communicating complex information coherently, the tool showed lower accuracy for oral surgery and oral cancer, scoring 3.9/5 and 3.6/5, respectively, with several gaps in post-operative instructions, personalized risk assessments, and specialized diagnostic methods. Potential risks, such as a lack of individualized advice, were identified in 53% of the oral cancer responses and 40% of the oral surgery responses. While showing promise in some domains, ChatGPT had important limitations in specialized areas that require nuanced expertise.

Conclusion: The findings point to the need for professional supervision when using AI-generated information, and for ongoing evaluation as capabilities evolve, to ensure responsible implementation in the best interest of patient care.

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11605724
DOI: http://dx.doi.org/10.1016/j.sdentj.2024.09.009

Publication Analysis

Top Keywords

oral surgery (16)
preventive dentistry (12)
oral cancer (12)
oral (8)
surgery preventive (8)
dentistry oral (8)
diagnostic methods (8)
chatgpt (5)
assessing quality (4)
quality chatgpt (4)
