This study explores how artificial intelligence (AI) chatbots can identify common errors in clinical laboratories through a series of case scenarios and questions covering the preanalytical, analytical, and postanalytical phases of testing.
Four chatbots were assessed on their ability to answer 60 questions correctly, with the accuracy of their responses evaluated by three independent laboratory experts.
The findings showed that models such as CopyAI and ChatGPT-4.0 outperformed ChatGPT-3.5, suggesting that, with further training and validation, AI could play a significant role in improving error detection and data accuracy in clinical laboratory settings.