Generative adversarial networks (GANs) and diffusion models (DMs) have revolutionized the creation of synthetically generated but realistic-looking images. Distinguishing such generated images from real camera captures is one of the key tasks in current multimedia forensics research. One particular challenge is generalization to unseen generators or post-processing, which can be viewed as a problem of handling out-of-distribution inputs. Forensic detectors can be hardened by extensive augmentation of the training data or by specifically tailored network architectures. Nevertheless, such precautions only manage, but do not remove, the risk of prediction failures on inputs that look reasonable to an analyst but are in fact outside the training distribution of the network. With this work, we aim to close this gap with a Bayesian Neural Network (BNN) that provides an additional uncertainty measure to warn an analyst of difficult decisions. More specifically, the BNN learns the task at hand and also detects potential confusion between post-processing and image generator artifacts. Our experiments show that the BNN achieves performance on par with state-of-the-art detectors while producing more reliable predictions on out-of-distribution examples.
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11122540 | PMC |
| http://dx.doi.org/10.3390/jimaging10050110 | DOI Listing |
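The abstract above does not specify which Bayesian approximation the detector uses, so the sketch below only illustrates the general idea with Monte Carlo dropout, one common way to obtain predictive uncertainty from a neural classifier. The class layout, feature dimensionality, helper names (`DropoutDetector`, `predict_with_uncertainty`), and the review threshold are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DropoutDetector(nn.Module):
    """Toy real-vs-generated classifier with dropout kept active at inference,
    so repeated stochastic forward passes approximate a Bayesian posterior
    (MC dropout). Architecture is illustrative only."""
    def __init__(self, in_features: int = 512, p: float = 0.3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 128),
            nn.ReLU(),
            nn.Dropout(p),
            nn.Linear(128, 2),  # classes: real camera capture vs. generated
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

@torch.no_grad()
def predict_with_uncertainty(model: nn.Module, feats: torch.Tensor, n_samples: int = 30):
    """Run several stochastic forward passes; return mean class probabilities
    and the predictive entropy as an uncertainty score."""
    model.train()  # keep dropout active at inference time
    probs = torch.stack(
        [torch.softmax(model(feats), dim=-1) for _ in range(n_samples)]
    )                                # (n_samples, batch, 2)
    mean_probs = probs.mean(dim=0)   # (batch, 2)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy

# Usage: flag inputs whose uncertainty exceeds a threshold for manual review.
detector = DropoutDetector()
feats = torch.randn(4, 512)          # stand-in for image features
mean_probs, entropy = predict_with_uncertainty(detector, feats)
needs_review = entropy > 0.5         # threshold chosen purely for illustration
```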
Comput Biol Med
December 2024
Diagnostic Imaging Analysis Group, Medical Imaging Department, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, the Netherlands.
Artificial Intelligence (AI) models may fail or suffer from reduced performance when applied to unseen data that differs from the training data distribution, referred to as dataset shift. Automatic detection of out-of-distribution (OOD) data contributes to safe and reliable clinical implementation of AI models. In this study, we propose an OOD detection method that utilizes the Mahalanobis distance (MD) and compare its performance to widely known classical methods.
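As a rough illustration of Mahalanobis-distance OOD scoring of the kind this study examines, the sketch below fits class-conditional means and a shared covariance on in-distribution embeddings and scores a test sample by its minimum squared distance to any class mean. The feature extractor is omitted, and the function names and threshold are assumptions rather than the authors' code.

```python
import numpy as np

def fit_mahalanobis(features: np.ndarray, labels: np.ndarray):
    """Estimate per-class means and a shared covariance matrix from
    in-distribution training embeddings (rows = samples)."""
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    centered = np.vstack([features[labels == c] - means[c] for c in classes])
    cov = np.cov(centered, rowvar=False) + 1e-6 * np.eye(features.shape[1])
    precision = np.linalg.inv(cov)
    return means, precision

def mahalanobis_ood_score(x: np.ndarray, means: dict, precision: np.ndarray) -> float:
    """OOD score = minimum squared Mahalanobis distance to any class mean;
    larger values indicate samples farther from the training distribution."""
    dists = [float((x - mu) @ precision @ (x - mu)) for mu in means.values()]
    return min(dists)

# Usage sketch with random stand-in embeddings.
rng = np.random.default_rng(0)
train_feats = rng.normal(size=(200, 16))
train_labels = rng.integers(0, 3, size=200)
means, precision = fit_mahalanobis(train_feats, train_labels)
score = mahalanobis_ood_score(rng.normal(size=16), means, precision)
is_ood = score > 30.0  # threshold would be calibrated on validation data
```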
Int J Med Inform
December 2024
Department of Medical Informatics, Amsterdam Public Health Research Institute, Amsterdam UMC, University of Amsterdam, the Netherlands; Institute of Logic, Language and Computation, University of Amsterdam, the Netherlands; Pacmed, Amsterdam, the Netherlands. Electronic address:
Background: Machine Learning (ML) models often struggle to generalize effectively to data that deviates from the training distribution. This raises significant concerns about the reliability of real-world healthcare systems when they encounter such inputs, known as out-of-distribution (OOD) data. These concerns can be addressed by real-time detection of OOD inputs.
J Imaging Inform Med
December 2024
Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-Gu, Seoul, 03722, Korea.
The accurate and early detection of vertebral metastases is crucial for improving patient outcomes. Although deep-learning models have shown potential in this area, their lack of prediction reliability and robustness limits their clinical utility. To address these challenges, we propose a novel technique called Ensemble Monte Carlo Dropout (EMCD) for uncertainty quantification (UQ), which combines Monte Carlo dropout and deep ensembles.
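A minimal sketch of the ensemble-plus-MC-dropout idea behind EMCD, assuming the members' mean softmax outputs are simply averaged and predictive entropy serves as the uncertainty estimate; the actual combination rule, member architectures, and hyperparameters in the paper may differ.

```python
import torch
import torch.nn as nn

def mc_dropout_probs(model: nn.Module, x: torch.Tensor, n_passes: int = 20) -> torch.Tensor:
    """Stochastic forward passes with dropout active; returns mean softmax probs."""
    model.train()  # enables dropout at inference
    with torch.no_grad():
        samples = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_passes)]
        )
    return samples.mean(dim=0)

def emcd_predict(ensemble: list, x: torch.Tensor, n_passes: int = 20):
    """Ensemble Monte Carlo Dropout sketch: average MC-dropout predictions over
    independently trained members and report predictive entropy as uncertainty."""
    member_probs = torch.stack([mc_dropout_probs(m, x, n_passes) for m in ensemble])
    mean_probs = member_probs.mean(dim=0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy

# Usage: three illustrative members; real members would be trained separately.
def make_member():
    return nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Dropout(0.2), nn.Linear(64, 2))

ensemble = [make_member() for _ in range(3)]
probs, uncertainty = emcd_predict(ensemble, torch.randn(8, 32))
```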
Cell Rep Med
December 2024
Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong, Shantou, Guangdong 515041, China. Electronic address:
The inability to express a confidence level and to detect unseen disease classes limits the clinical implementation of artificial intelligence in the real world. We develop a foundation model with uncertainty estimation (FMUE) to detect 16 retinal conditions on optical coherence tomography (OCT). In the internal test set, FMUE achieves a higher F1 score of 95.
Phys Rev E
November 2024
Department of Chemical and Biological Engineering, Northwestern University, Evanston, Illinois 60208, USA.
Deep learning models have achieved high performance in a wide range of applications. Recently, however, there have been increasing concerns about the fragility of many of those models to adversarial approaches and out-of-distribution inputs. One way to investigate and potentially address this fragility is to provide interpretability for model predictions.