Evaluating the fairness and accuracy of machine learning-based predictions of clinical outcomes after anatomic and reverse total shoulder arthroplasty

AI Article Synopsis

  • This study investigates the fairness of machine learning-based clinical decision support tools (CDSTs) that predict outcomes for patients undergoing shoulder arthroplasty, stratified by demographic attributes such as ethnicity, sex, and age.
  • Although prediction accuracy varied slightly across these demographic groups, the observed biases were minimal: no significant unfairness was found in predicting minimal clinically important differences (MCID), and only a small fraction of predictions showed bias in predicting substantial clinical benefit.
  • The findings suggest that the CDSTs used for shoulder arthroplasty are generally fair, posing little risk of bias that could lead to unequal treatment across patient demographics.

Article Abstract

Background: Machine learning (ML)-based clinical decision support tools (CDSTs) make personalized predictions for different treatments; by comparing predictions across multiple treatments, these tools can be used to optimize decision making for a particular patient. However, CDST prediction accuracy varies for different patients and also for different treatment options. If these differences are sufficiently large and consistent for a particular subcohort of patients, that bias may result in those patients not receiving a particular treatment. Such a level of bias would deem the CDST "unfair." The purpose of this study is to evaluate the "fairness" of ML CDST-based clinical outcome predictions after anatomic (aTSA) and reverse total shoulder arthroplasty (rTSA) for patients of different demographic attributes.

Methods: Clinical data from 8280 shoulder arthroplasty patients with 19,249 postoperative visits were used to evaluate the prediction fairness and accuracy associated with the following patient demographic attributes: ethnicity, sex, and age at the time of surgery. Performance of clinical outcome and range-of-motion regression predictions was quantified by the mean absolute error (MAE), and performance of minimal clinically important difference (MCID) and substantial clinical benefit classification predictions was quantified by accuracy, sensitivity, and the F1 score. Fairness of classification predictions leveraged the "four-fifths" legal guideline from the US Equal Employment Opportunity Commission, and fairness of regression predictions leveraged established MCID thresholds associated with each outcome measure.
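The "four-fifths" guideline referenced above can be sketched in a few lines: a set of subgroup rates passes when every subgroup's favorable-prediction rate is at least 80% of the highest subgroup's rate. This is a minimal illustrative sketch, not the study's implementation; the function name and the example rates are hypothetical.

```python
# Minimal sketch of the US EEOC "four-fifths" rule applied to subgroup
# classification rates. Group labels and rates below are illustrative,
# not data from the study.

def four_fifths_fair(rates: dict[str, float]) -> bool:
    """Return True if every subgroup's rate is at least 80% of the
    highest subgroup rate (the four-fifths guideline)."""
    reference = max(rates.values())
    return all(rate >= 0.8 * reference for rate in rates.values())

# Hypothetical MCID-classification rates by demographic subgroup:
mcid_rates = {"group_a": 0.91, "group_b": 0.88, "group_c": 0.86}
print(four_fifths_fair(mcid_rates))  # all rates within 80% of 0.91 -> True
```

In this framing, a classification prediction falls "outside the 20% fairness boundary" exactly when some subgroup's rate drops below four-fifths of the best-performing subgroup's rate.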

Results: For both aTSA and rTSA clinical outcome predictions, only minor differences in MAE were observed between patients of different ethnicity, sex, and age. Evaluation of prediction fairness demonstrated that 0 of 486 MCID (0%) and only 3 of 486 substantial clinical benefit (0.6%) classification predictions were outside the 20% fairness boundary and only 14 of 972 (1.4%) regression predictions were outside of the MCID fairness boundary. Hispanic and Black patients were more likely to have ML predictions out of fairness tolerance for aTSA and rTSA. Additionally, patients <60 years old were more likely to have ML predictions out of fairness tolerance for rTSA. No disparate predictions were identified for sex and no disparate regression predictions were observed for forward elevation, internal rotation score, American Shoulder and Elbow Surgeons Standardized Shoulder Assessment Form score, or global shoulder function.
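The regression-fairness criterion used above can be illustrated similarly: a subgroup is flagged when its mean absolute error exceeds the best-performing subgroup's MAE by more than the outcome's MCID. This is a hedged sketch under that reading of the method; the function names, MCID value, and MAE figures are hypothetical, not the study's data.

```python
# Illustrative sketch of an MCID-based regression fairness check.
# All numbers below are made up for demonstration.

def mae(y_true, y_pred):
    """Mean absolute error between observed and predicted outcomes."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def within_mcid_boundary(subgroup_maes: dict[str, float],
                         mcid: float) -> dict[str, bool]:
    """Flag each subgroup whose MAE exceeds the lowest subgroup MAE
    by more than the outcome measure's MCID threshold."""
    best = min(subgroup_maes.values())
    return {g: (m - best) <= mcid for g, m in subgroup_maes.items()}

maes = {"group_a": 6.1, "group_b": 7.0, "group_c": 11.4}
print(within_mcid_boundary(maes, mcid=4.0))
# -> {'group_a': True, 'group_b': True, 'group_c': False}
```

Under this check, group_c's error differs from the best subgroup by more than one MCID, so its predictions would count as outside the MCID fairness boundary.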

Conclusion: The ML algorithms analyzed in this study accurately predict clinical outcomes after aTSA and rTSA for patients of different ethnicity, sex, and age, where only 1.4% of regression predictions and only 0.3% of classification predictions were out of fairness tolerance using the proposed fairness evaluation method and acceptance criteria. Future work is required to externally validate these ML algorithms to ensure they are equally accurate for all legally protected patient groups.


Source
http://dx.doi.org/10.1016/j.jse.2023.08.005

Publication Analysis

Top Keywords

  • predictions (12)
  • shoulder arthroplasty (12)
  • regression predictions (12)
  • classification predictions (12)
  • fairness accuracy (8)
  • clinical (8)
  • clinical outcomes (8)
  • reverse total (8)
  • total shoulder (8)
  • patients (8)
