Introduction: Because human diagnostic expertise is scarce, especially in veterinary care, artificial intelligence (AI) is increasingly used as a remedy. AI's promise lies in improving human diagnostics or in providing good diagnostics at lower cost, thereby increasing access. This study compared the diagnostic performance of widely used AI radiology software with that of veterinary radiologists in interpreting canine and feline radiographs. We aimed to establish whether commonly used AI matches the performance of a typical radiologist and can therefore be relied upon, and secondly to identify the cases in which AI is effective.
Methods: Fifty canine and feline radiographic studies in DICOM format were anonymized, reported by 11 board-certified veterinary radiologists (ECVDI or ACVR), and processed with commercial, widely used AI software dedicated to small animal radiography (SignalRAY, SignalPET, Dallas, TX, USA). The AI software used a deep-learning algorithm and returned a coded normal or abnormal diagnosis for each finding in the study. The radiologists provided a written report in English. The findings in all reports were coded into categories matching the codes from the AI software and classified as normal or abnormal. The sensitivity, specificity, and accuracy of each radiologist and of the AI software were calculated. The variance in agreement between each radiologist and the AI software was measured to quantify the ambiguity of each radiological finding.
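To make the Methods concrete, the following is a minimal sketch, assuming a simple data layout in which each reader's calls and a reference standard are stored as normal/abnormal codes per finding. The field names, the dictionary layout, and the variance-based ambiguity measure are illustrative assumptions, not the study's actual pipeline.

```python
# Illustrative sketch only: computes sensitivity, specificity, and accuracy
# for one reader (human or AI) against a reference standard, plus a simple
# per-finding disagreement variance as an ambiguity proxy. Data layout and
# field names are assumptions, not the study's actual analysis code.
from statistics import pvariance

def performance(calls, reference):
    """calls/reference: dicts mapping finding_id -> 'abnormal' or 'normal'."""
    tp = sum(1 for f, r in reference.items() if r == "abnormal" and calls[f] == "abnormal")
    tn = sum(1 for f, r in reference.items() if r == "normal" and calls[f] == "normal")
    fp = sum(1 for f, r in reference.items() if r == "normal" and calls[f] == "abnormal")
    fn = sum(1 for f, r in reference.items() if r == "abnormal" and calls[f] == "normal")
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    accuracy = (tp + tn) / len(reference)
    return sensitivity, specificity, accuracy

def ambiguity(reader_calls, finding_id):
    """Variance of readers' votes for one finding (0 = unanimous agreement)."""
    votes = [1 if calls[finding_id] == "abnormal" else 0 for calls in reader_calls]
    return pvariance(votes)
```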
Results: AI matched the best radiologist in accuracy and was more specific, but less sensitive, than the human radiologists. AI outperformed the median radiologist overall, in both low- and high-ambiguity cases. In high-ambiguity cases, AI's accuracy remained high, although it was less effective at detecting abnormalities and better at identifying normal findings. The study confirmed AI's reliability, especially in low-ambiguity scenarios.
Conclusion: Our findings suggest that AI performs almost as well as the best veterinary radiologist across all settings of descriptive radiographic findings. However, its strength lies more in confirming normality than in detecting abnormalities, and it does not provide differential diagnoses. Broader use of AI could therefore reliably increase diagnostic availability, but it still requires human input. Given the complementary strengths of human experts and AI, and the differences in sensitivity vs. specificity and in low- vs. high-ambiguity settings, AI is likely to complement rather than replace human experts.
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11886591
DOI: http://dx.doi.org/10.3389/fvets.2025.1502790
Front Vet Sci
February 2025
Royal (Dick) School of Veterinary Studies and Roslin Institute, The University of Edinburgh, Edinburgh, United Kingdom.
Vet Comp Oncol
March 2025
Department of Veterinary Medicine and Surgery, University of Missouri College of Veterinary Medicine, Columbia, Missouri, USA.
Cross-sectional imaging may be used to characterise the location and extent of colorectal mesenchymal tumours (CRMTs). Given the anticipated variation in tumour behaviour and varying morbidity based on surgical margins, a reliable, non-invasive means of predicting malignant potential could facilitate case management. The purpose of this multi-institutional, retrospective study was to determine the diagnostic accuracy of contrast-enhanced CT for distinguishing benign and malignant CRMTs.
J Vet Intern Med
February 2025
Department of Small Animal Clinical Sciences, College of Veterinary Medicine and Biomedical Sciences, Texas A&M University, College Station, Texas, USA.
Background: The comparative effectiveness of radiotherapy and surgery for treating intracranial meningioma is unknown.
Objectives: To compare survival after treatment of suspected intracranial meningioma by either surgery or radiotherapy.
Animals: Two hundred eighty-five companion dogs with suspected intracranial meningiomas presenting to 11 specialty clinics in three countries.
Sci Rep
February 2025
Founder, Hawkcell, 69280, Marcy-L'Étoile, France.
Magnetic resonance imaging (MRI) has changed veterinary diagnostics, but its long sequence times can be problematic, especially because animals need to be sedated during the exam. Unfortunately, shorter scan times imply a drop in overall image quality and diagnostic reliability. Therefore, we developed a Generative Adversarial Network (GAN)-based denoising algorithm called HawkAI.
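As a rough illustration of the general technique (not the HawkAI implementation, whose architecture and training details are not described here), a GAN-based denoiser pairs a generator that maps noisy images to denoised ones with a discriminator that tries to distinguish generator output from genuine high-quality images. One training step might look like the following PyTorch sketch, with all layer sizes and loss weights chosen arbitrarily.

```python
# Generic sketch of one GAN denoising training step (PyTorch); an illustration
# of the general approach, not the HawkAI implementation.
import torch
import torch.nn as nn

generator = nn.Sequential(          # maps a noisy MR slice to a denoised slice
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
discriminator = nn.Sequential(      # scores images as clean (real) vs. denoised (fake)
    nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
)
bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(noisy, clean):
    """noisy/clean: paired (batch, 1, H, W) float tensors (assumed layout)."""
    # Discriminator update: separate clean images from generator output.
    opt_d.zero_grad()
    fake = generator(noisy).detach()
    d_loss = bce(discriminator(clean), torch.ones(clean.size(0), 1)) + \
             bce(discriminator(fake), torch.zeros(clean.size(0), 1))
    d_loss.backward()
    opt_d.step()
    # Generator update: fool the discriminator while staying close to the clean image.
    opt_g.zero_grad()
    denoised = generator(noisy)
    g_loss = bce(discriminator(denoised), torch.ones(clean.size(0), 1)) + 100 * l1(denoised, clean)
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Usage on synthetic data:
noisy = torch.randn(2, 1, 64, 64)
clean = torch.randn(2, 1, 64, 64)
print(train_step(noisy, clean))
```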
Vet Radiol Ultrasound
March 2025
Hawkcell, Lyon, France.
In this analytical cross-sectional method comparison study, we evaluated brain MR images in 30 dogs and cats with and without a DICOM-based deep-learning (DL) denoising algorithm developed specifically for veterinary patients. Quantitative comparison was performed by measuring signal-to-noise (SNR) and contrast-to-noise (CNR) ratios on the same T2-weighted (T2W), T2-FLAIR, and Gradient Echo (GRE) MR brain images in each patient (native images and after denoising) in identical regions of interest. Qualitative comparisons were then conducted: three experienced veterinary radiologists independently evaluated each patient's T2W, T2-FLAIR, and GRE image series.
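As an illustration of the quantitative step, SNR and CNR are commonly computed from region-of-interest (ROI) statistics. The sketch below assumes rectangular ROIs and the usual mean/standard-deviation definitions, which may differ from the exact formulas used in the study.

```python
# Minimal sketch of ROI-based SNR and CNR measurement on an MR slice.
# ROI coordinates and the exact SNR/CNR definitions are assumptions for illustration.
import numpy as np

def roi_mean_std(image, rows, cols):
    """Mean and standard deviation inside a rectangular ROI (row/col slices)."""
    roi = image[rows, cols]
    return float(roi.mean()), float(roi.std())

def snr(image, signal_roi, noise_roi):
    """SNR = mean signal intensity / standard deviation of background noise."""
    mean_sig, _ = roi_mean_std(image, *signal_roi)
    _, sd_noise = roi_mean_std(image, *noise_roi)
    return mean_sig / sd_noise

def cnr(image, tissue_roi_a, tissue_roi_b, noise_roi):
    """CNR = absolute difference of two tissue means / background noise SD."""
    mean_a, _ = roi_mean_std(image, *tissue_roi_a)
    mean_b, _ = roi_mean_std(image, *tissue_roi_b)
    _, sd_noise = roi_mean_std(image, *noise_roi)
    return abs(mean_a - mean_b) / sd_noise

# Usage on a synthetic slice:
slice_ = np.random.default_rng(0).normal(100, 5, (256, 256))
print(snr(slice_, (slice(40, 60), slice(40, 60)), (slice(0, 20), slice(0, 20))))
```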