Background and Purpose: Robustness against input data perturbations is essential for deploying deep-learning models in clinical practice. Adversarial attacks involve subtle, voxel-level manipulations of scans designed to increase deep-learning models' prediction errors. Testing deep-learning model performance on adversarial images provides a measure of robustness, and including adversarial images in the training set can improve the model's robustness.
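The attack described above is commonly implemented as a gradient-sign perturbation (e.g., the fast gradient sign method, FGSM): each voxel is nudged by a small step eps in the direction that increases the model's loss. The sketch below illustrates the idea on a toy logistic "model" with an analytic gradient; the function names, the eps value, and the logistic model are illustrative assumptions, not details from the abstract.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ce_loss(x, y, w, b):
    """Binary cross-entropy of a logistic model at input x (toy stand-in)."""
    p = sigmoid(x @ w + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm_perturb(x, y, w, b, eps):
    """Return an adversarial copy of x: step each voxel by +/- eps
    along the sign of the loss gradient (FGSM-style attack)."""
    p = sigmoid(x @ w + b)
    grad_x = (p - y) * w          # analytic d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
x = rng.normal(size=8)            # stand-in for a flattened image patch
w = rng.normal(size=8)            # hypothetical model weights
b, y, eps = 0.0, 1.0, 0.05

x_adv = fgsm_perturb(x, y, w, b, eps)
# The perturbation is voxel-wise bounded by eps ...
assert np.max(np.abs(x_adv - x)) <= eps + 1e-12
# ... yet it increases the model's loss (the attack succeeds).
assert ce_loss(x_adv, y, w, b) >= ce_loss(x, y, w, b)
```

Adversarial training, as the abstract notes, simply adds such perturbed examples (with their original labels) back into the training set, so the model learns to be stable within the eps-ball around each scan.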
Background: Hematoma expansion (HE) following an intracerebral hemorrhage (ICH) is a modifiable risk factor and a treatment target. We examined the association of HE with neurological deterioration (ND), functional outcome, and mortality, stratified by the time from symptom onset to baseline CT.
Methods: We included 567 consecutive patients with supratentorial ICH and baseline head CT within 24 h of onset.