Background: The optimal screening interval for diabetic retinopathy (DR) remains controversial. This study aimed to develop a risk algorithm to predict the individual risk of referable sight-threatening diabetic retinopathy (STDR) in a mainly Chinese population and to provide evidence for risk-based screening intervals.
Methods: Retrospective cohort data from 117,418 subjects who received systematic DR screening in Hong Kong between 2010 and 2016 were used to develop and validate the risk algorithm with a parametric survival model. The risk algorithm can be used to predict an individual's risk of STDR within a specific time interval, or the time to reach a specific risk margin and thus to allocate a screening interval. Calibration was assessed by comparing cumulative STDR events with predicted risk over 2 years, and discrimination was assessed using the receiver operating characteristic (ROC) curve.
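As a rough illustration of the kind of workflow described in the Methods, the sketch below fits a parametric (Weibull accelerated-failure-time) survival model and uses it both to predict the risk of STDR within a fixed interval and to find the time at which a subject's cumulative risk reaches a given margin. This is not the authors' code; the column names, risk margin, and choice of the lifelines library are assumptions for illustration only.

```python
# Hypothetical sketch of a parametric survival workflow, using lifelines and
# invented column names; not the published algorithm.
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter

# Assumed analysis frame: one row per subject, with follow-up time in years,
# an STDR event indicator, and the candidate predictors.
df = pd.DataFrame({
    "followup_years":    [2.1, 5.0, 1.3, 4.2],
    "stdr_event":        [1, 0, 0, 1],
    "diabetes_duration": [12, 3, 5, 20],
    "hba1c":             [8.9, 6.5, 7.1, 9.4],
    "sbp":               [148, 122, 131, 156],
    "ckd":               [1, 0, 0, 1],
    "on_insulin":        [1, 0, 0, 1],
    "age":               [61, 48, 55, 67],
})

# Fit a Weibull accelerated-failure-time model (one plausible parametric choice).
aft = WeibullAFTFitter()
aft.fit(df, duration_col="followup_years", event_col="stdr_event")

# (a) Individual risk of STDR within a fixed interval, e.g. 2 years.
surv_2y = aft.predict_survival_function(df, times=[2.0])
risk_2y = 1.0 - surv_2y.loc[2.0]  # cumulative STDR risk at 2 years, per subject

# (b) Time until a subject's cumulative risk reaches a margin (e.g. 5%),
# which could then serve as that subject's screening interval.
times = np.linspace(0.25, 10, 200)
surv = aft.predict_survival_function(df, times=times)
risk_margin = 0.05
screening_interval = (1.0 - surv).apply(
    lambda col: times[np.searchsorted(col.values, risk_margin)]
    if (col >= risk_margin).any() else times[-1]
)
```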
Results: Duration of diabetes, glycosylated hemoglobin, systolic blood pressure, presence of chronic kidney disease, diabetes medication, and age were included in the risk algorithm. Validation showed no significant difference between predicted and observed STDR risks in males (5.6% vs. 5.1%, P=0.724) or females (4.8% vs. 4.6%, P=0.099). The area under the receiver operating characteristic curve was 0.80 (95% confidence interval [CI], 0.78 to 0.81) for males and 0.81 (95% CI, 0.79 to 0.83) for females.
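For context, the validation metrics reported above (calibration as predicted versus observed risk, discrimination as area under the ROC curve with a confidence interval) could be computed along the lines of the sketch below. The outcome labels, predicted risks, and bootstrap procedure are assumptions, not the study's actual analysis.

```python
# Hypothetical validation sketch with simulated inputs; not the study's analysis.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Assumed validation arrays: observed STDR within 2 years (0/1) and the
# model's predicted 2-year risk for each subject.
observed_2y = rng.integers(0, 2, size=1000)
predicted_risk_2y = rng.uniform(0, 0.3, size=1000)

# Calibration: compare mean predicted risk with the observed event rate.
print("predicted:", predicted_risk_2y.mean(), "observed:", observed_2y.mean())

# Discrimination: area under the ROC curve, with a simple bootstrap 95% CI.
auc = roc_auc_score(observed_2y, predicted_risk_2y)
boot = []
for _ in range(1000):
    idx = rng.integers(0, len(observed_2y), len(observed_2y))
    boot.append(roc_auc_score(observed_2y[idx], predicted_risk_2y[idx]))
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(f"AUC {auc:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```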
Conclusion: The risk algorithm showed good prediction performance for referable STDR. Risk-based screening intervals allow screening visits to be allocated disproportionately to those at higher risk, while reducing the screening frequency for lower-risk individuals.
DOI: http://dx.doi.org/10.4093/dmj.2024.0142