Purpose: Artificial intelligence (AI) refers to systems or combinations of algorithms that mimic human intelligence. ChatGPT is an AI-based software application recently developed by OpenAI. One of its potential uses is consulting information about pathologies and their treatments. Our objective was to assess the quality of the information provided by AI tools such as ChatGPT and to determine whether it is a safe source of information for patients.
Methods: Questions about bladder cancer, prostate cancer, renal cancer, benign prostatic hypertrophy (BPH), and urinary stones were submitted to ChatGPT 4.0. Two urologists analysed the responses using the DISCERN questionnaire and a brief instrument for evaluating the quality of informed consent documents.
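For readers who want to reproduce the querying step programmatically, the following is a minimal sketch assuming the questions were sent through the OpenAI API rather than the ChatGPT web interface the study used; the model name, question template, and pathology list are illustrative assumptions, and DISCERN scoring itself remains a manual task for the reviewers.

```python
# Hypothetical sketch of the querying step, not the authors' actual protocol.
# Assumes an OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

PATHOLOGIES = [
    "bladder cancer",
    "prostate cancer",
    "renal cancer",
    "benign prostatic hypertrophy",
    "urinary stones",
]

def ask(question: str) -> str:
    """Send one patient-style question and return the chatbot's answer."""
    response = client.chat.completions.create(
        model="gpt-4",  # stand-in for "ChatGPT 4.0"
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

# Collect the answers so the two urologists can score them manually with DISCERN.
answers = {p: ask(f"What are the treatment options for {p}?") for p in PATHOLOGIES}
```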
Results: The information provided across all pathologies was well balanced. For each pathology, the response explained the anatomical location, the affected population, and the symptoms, and concluded with the established risk factors and possible treatments. All treatment answers received a moderate DISCERN quality score (3 of 5 points). The answers about surgical options included the recovery time, the type of anaesthesia, and the potential complications. After analysing all the responses for each disease, every pathology except BPH achieved a DISCERN score of 4.
Conclusions: ChatGPT's information should be used with caution, since the chatbot does not disclose its sources and its answers may contain bias, even for simple questions about the basics of urologic diseases.
DOI: http://dx.doi.org/10.1007/s00345-023-04563-0