DOI: 10.1056/NEJMcpc2300906
Developing AI tools that preserve fairness is critically important, particularly in high-stakes applications such as healthcare. However, the overall predictive performance of health AI models is often prioritized over the biases such models may encode. In this study, we show one approach to mitigating bias concerns: having healthcare institutions collaborate through federated learning (FL), a paradigm that is a popular choice in healthcare settings.
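As a rough illustration of the federated learning setup the abstract refers to, the sketch below implements generic federated averaging (FedAvg) over synthetic data. It is not the study's actual method; the model (logistic regression), the number of "institutions", the data, and all hyperparameters are hypothetical placeholders chosen only to show how sites train locally and share model weights rather than patient data.

```python
# Minimal FedAvg sketch (illustrative only): each "institution" keeps its data
# local, trains for a few steps, and a server averages the returned weights.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """A few epochs of logistic-regression gradient descent on one site's data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)      # gradient of the log loss
        w -= lr * grad
    return w

# Synthetic "institutions": hypothetical stand-ins for hospital datasets.
n_features = 10
sites = []
for _ in range(3):
    X = rng.normal(size=(200, n_features))
    true_w = rng.normal(size=n_features)
    y = (X @ true_w + rng.normal(scale=0.5, size=200) > 0).astype(float)
    sites.append((X, y))

# Federated rounds: broadcast global weights, train locally, average back
# (weighted by each site's sample count). No raw data leaves a site.
global_w = np.zeros(n_features)
for _ in range(20):
    local_ws, sizes = [], []
    for X, y in sites:
        local_ws.append(local_update(global_w, X, y))
        sizes.append(len(y))
    sizes = np.array(sizes, dtype=float)
    global_w = np.average(local_ws, axis=0, weights=sizes / sizes.sum())

print("Global model weights after federated training:", np.round(global_w, 3))
```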
N Engl J Med. July 2023.
From the Department of Medicine, University of Maryland School of Medicine, Baltimore (B.A.M.); and the Department of Medicine, Brigham and Women's Hospital (B.A.M.), the Departments of Medicine (B.A.M., A.S.W., D.M.D., W.Z.), Radiology (A.S.S.-B.), and Pathology (S.G.S.), Harvard Medical School, and the Departments of Medicine (A.S.W., D.M.D., W.Z.), Radiology (A.S.S.-B.), and Pathology (S.G.S.), Massachusetts General Hospital - all in Boston.