Purpose: Different machine learning (ML) models were compared for predicting acute toxicity after radiotherapy (RT) for breast cancer in a large cohort (n = 1314).

Methods: The endpoint was RTOG G2/G3 acute toxicity, observed in 204/1314 patients. The dataset, comprising 25 clinical, anatomical, and dosimetric features, was split into 984 patients for training and 330 for internal testing. The dataset was standardized; features with a high p-value at univariate logistic regression (LR) and features with Spearman ρ > 0.8 were excluded; synthetic data for the minority class were generated to compensate for class imbalance. Twelve ML methods were considered. Model optimization and sequential backward selection were run to choose the best models with a parsimonious number of features. Finally, feature importance was derived for every model.
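As a rough illustration, the preprocessing pipeline described above could be sketched as follows with scikit-learn, imbalanced-learn, and statsmodels; the significance threshold (p > 0.05), the random seeds, and the use of SMOTE for minority-class synthesis are assumptions, since the abstract does not specify them.

import pandas as pd
import statsmodels.api as sm
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

def preprocess(X: pd.DataFrame, y: pd.Series):
    # Split the 1314 patients into 984 for training and 330 for internal
    # testing, stratified on the toxicity endpoint.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=330, stratify=y, random_state=0)

    # Standardize features (scaler fitted on the training data only).
    scaler = StandardScaler().fit(X_tr)
    X_tr = pd.DataFrame(scaler.transform(X_tr), index=X_tr.index, columns=X.columns)
    X_te = pd.DataFrame(scaler.transform(X_te), index=X_te.index, columns=X.columns)

    # Exclude features with a high p-value at univariate logistic regression
    # (p > 0.05 is an assumed threshold).
    keep = []
    for col in X_tr.columns:
        fit = sm.Logit(y_tr, sm.add_constant(X_tr[[col]])).fit(disp=0)
        if fit.pvalues[col] <= 0.05:
            keep.append(col)

    # Exclude one feature of every highly collinear pair (Spearman rho > 0.8).
    corr = X_tr[keep].corr(method="spearman").abs()
    dropped = set()
    for i, a in enumerate(keep):
        for b in keep[i + 1:]:
            if a not in dropped and b not in dropped and corr.loc[a, b] > 0.8:
                dropped.add(b)
    keep = [c for c in keep if c not in dropped]

    # Generate synthetic minority-class samples on the training set only.
    X_tr_bal, y_tr_bal = SMOTE(random_state=0).fit_resample(X_tr[keep], y_tr)
    return X_tr_bal, y_tr_bal, X_te[keep], y_te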

Results: Model performances were compared on the training and test datasets across different metrics; the best-performing model was LightGBM. A logistic regression with three variables (LR3), selected via bootstrapping, showed performances similar to the best-performing models. The AUC on the test data was slightly above 0.65 for the best models (highest value: 0.662, with LightGBM).
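For concreteness, a hedged sketch of evaluating the two headline models on the held-out test set is given below, reusing the outputs of the preprocessing sketch above; the LightGBM hyperparameters and the three LR3 features (shown here as placeholders) are assumptions, not the study's actual choices.

from lightgbm import LGBMClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Best-performing model in the comparison: LightGBM (hyperparameters assumed).
lgbm = LGBMClassifier(n_estimators=200, random_state=0).fit(X_tr_bal, y_tr_bal)
auc_lgbm = roc_auc_score(y_te, lgbm.predict_proba(X_te)[:, 1])

# LR3: logistic regression on three bootstrap-selected features
# (the first three columns are placeholders, not the study's features).
lr3_feats = list(X_tr_bal.columns[:3])
lr3 = LogisticRegression(max_iter=1000).fit(X_tr_bal[lr3_feats], y_tr_bal)
auc_lr3 = roc_auc_score(y_te, lr3.predict_proba(X_te[lr3_feats])[:, 1])

print(f"Test AUC - LightGBM: {auc_lgbm:.3f}, LR3: {auc_lr3:.3f}")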

Conclusions: No single model performed best on all metrics; more complex ML models had better performances. However, models with just three features showed performances comparable to those of the best models using many (n = 13-19) features.


Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10930533 (PMC)
http://dx.doi.org/10.3390/cancers16050934 (DOI)
