Artificial intelligence (AI) has recently become a prominent tool and topic of discussion regarding productivity, especially with publicly available free services such as ChatGPT and Bard. In this report, we investigate whether two widely available chatbots, ChatGPT and Bard, can provide consistently accurate recommendations for the best imaging modality in urologic clinical situations, and whether their recommendations are in line with the American College of Radiology (ACR) Appropriateness Criteria (AC). All clinical scenarios provided by the ACR were entered into ChatGPT and Bard, and each response was compared with the ACR AC and recorded. Both chatbots recommended an appropriate imaging modality in 62% of scenarios, and no significant difference in the overall proportion of correct imaging modalities was found between the two services (p > 0.05). Our study found that ChatGPT and Bard are similar in their ability to suggest the most appropriate imaging modality in a variety of urologic scenarios based on the ACR AC. Nonetheless, both chatbots lack consistent accuracy, and further development is necessary before implementation in clinical settings. For proper use of these AI services in clinical decision making, further development is needed to improve physician workflow.
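The abstract reports only the aggregate appropriate-modality rate (62% for each chatbot) and a non-significant difference (p > 0.05), without naming the statistical test or the number of ACR scenarios evaluated. As a minimal sketch of how such a comparison of proportions could be run, the Python snippet below applies a chi-squared test to a 2x2 table of correct versus incorrect responses; the scenario count and the choice of test are assumptions for illustration, not the authors' reported method.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: the abstract gives a 62% appropriate-modality rate for
# both chatbots but not the number of ACR scenarios, so these totals are
# placeholders for illustration only.
n_scenarios = 100            # assumed number of ACR urologic scenarios
correct_chatgpt = 62         # 62% appropriate per the abstract
correct_bard = 62

# 2x2 contingency table: rows = chatbot, columns = correct / incorrect.
table = [
    [correct_chatgpt, n_scenarios - correct_chatgpt],
    [correct_bard, n_scenarios - correct_bard],
]

# Chi-squared test of independence on the correct/incorrect counts.
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p_value:.3f}")  # p > 0.05 -> no significant difference
```

Because both chatbots answered the same set of ACR scenarios, a paired comparison such as McNemar's test would also be a defensible choice; the sketch above treats the two proportions as independent only for simplicity.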
DOI: http://dx.doi.org/10.1067/j.cpradiol.2023.10.022