Purpose: To develop and test a machine-learning-based model to predict primary care and other specialties using Medicare claims data.

Methods: We used 2014-2016 prescription and procedure Medicare data to train 3 sets of random forest classifiers (prescription only, procedure only, and combined) to predict specialty. Self-reported specialties were condensed to 27 categories. Physicians were assigned to testing and training cohorts, and random forest models were trained and then applied to 2014-2016 data sets for the testing cohort to generate a series of specialty predictions. Comparing the predicted specialty to self-report, we assessed performance with F1 scores and area under the receiver operating characteristic curve (AUROC) values.
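
The abstract does not name the software used; the following is a minimal sketch of this style of pipeline, assuming scikit-learn, a precomputed feature matrix of per-physician prescription/procedure code counts, and hypothetical file names (none of these details come from the study itself).

```python
# Illustrative sketch only -- feature construction, file names, and
# hyperparameters are assumptions, not the authors' actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, roc_auc_score

# X: one row per physician; columns are counts of prescription and/or
# procedure codes billed in a year. y: self-reported specialty,
# condensed to 27 categories.
X = np.load("claims_features.npy")   # hypothetical feature matrix
y = np.load("specialty_labels.npy")  # hypothetical label vector

# Physicians are split into training and testing cohorts.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# One random forest per feature set (prescription-only, procedure-only,
# combined); a single feature set is shown here.
clf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)

# Compare predicted specialty with self-report.
pred = clf.predict(X_test)
proba = clf.predict_proba(X_test)
print("macro F1:", f1_score(y_test, pred, average="macro"))
print("mean AUROC (one-vs-rest):",
      roc_auc_score(y_test, proba, multi_class="ovr",
                    average="macro", labels=clf.classes_))
```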

Results: A total of 564,986 physicians were included. The combined model had a greater aggregate (macro) F1 score (0.876) than the prescription-only (0.745; P <.01) or procedure-only (0.821; P <.01) model. Mean F1 scores across specialties in the combined model ranged from 0.533 to 0.987. The mean F1 score was 0.920 for primary care. The mean AUROC value for the combined model was 0.992, with values ranging from 0.982 to 0.999. The AUROC value for primary care was 0.982.
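
For context (the standard definition, not something spelled out in the abstract): the macro F1 score is the unweighted mean of the per-specialty F1 scores, so small specialties count as much as large ones. A toy illustration with made-up labels:

```python
# Toy example: per-class F1 averaged without class-size weighting ("macro").
from sklearn.metrics import f1_score

y_true = ["primary care", "cardiology", "primary care", "dermatology"]
y_pred = ["primary care", "primary care", "primary care", "dermatology"]

labels = ["primary care", "cardiology", "dermatology"]
per_class = f1_score(y_true, y_pred, average=None, labels=labels)
print(per_class)         # F1 for each specialty
print(per_class.mean())  # macro F1: unweighted mean across specialties
print(f1_score(y_true, y_pred, average="macro"))  # same value
```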

Conclusions: This novel approach showed high performance and provides a near real-time assessment of current primary care practice. These findings have important implications for primary care workforce research in the absence of accurate data.

Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7358033 (PMC)
http://dx.doi.org/10.1370/afm.2550 (DOI)
