Background: Prediction models for atrial fibrillation (AF) may enable earlier detection and guideline-directed treatment decisions. However, model bias may lead to inaccurate predictions and unintended consequences.
Objective: The purpose of this study was to validate "UNAFIED-10," a 2-year, 10-variable predictive model of undiagnosed AF (originally developed using regional data from the Indiana Network for Patient Care), in a national data set; to assess the model for bias; and to improve its generalizability.
The growing stream of health AI innovations holds promise for facilitating the delivery of patient-centered care. Yet enabling and adopting AI innovations in the healthcare and life science industries can be challenging amid rising concerns about AI risks and potential harms to health equity. This paper describes Ethicara, a system that enables health AI risk assessment for responsible AI model development.