Background: There has been a rapid growth in the publication of new prediction models relevant to child and adolescent mental health. However, before their implementation into clinical services, it is necessary to appraise the quality of their methods and reporting. We conducted a systematic review of new prediction models in child and adolescent mental health, and examined their development and validation.
Method: We searched five databases for studies developing or validating multivariable prediction models for individuals aged 18 years or younger, from 1 January 2018 to 18 February 2021. Quality of reporting was assessed using the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) checklist, and quality of methodology using items based on expert guidance and the PROBAST tool.
Results: We identified 100 eligible studies: 41 developing a new prediction model, 48 validating an existing model and 11 that included both development and validation. Most publications (n = 75) reported a model discrimination measure, while 26 investigations reported calibration. Of 52 new prediction models, six (12%) were for suicidal outcomes, 18 (35%) for future diagnosis, and five (10%) for child maltreatment. Other outcomes included violence, crime, and functional outcomes. Eleven new models (21%) were developed for use in high-risk populations. Of development studies, around a third were sufficiently statistically powered (n = 16, 31%), while this proportion was lower for validation investigations (n = 12, 25%). In terms of performance, the discrimination (as measured by the C-statistic) of new models ranged from 0.57 for a tool predicting ADHD diagnosis in an external validation sample to 0.99 for a machine learning model predicting foster care permanency.
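For readers unfamiliar with the two performance measures highlighted above, the following minimal Python sketch (not taken from any of the reviewed studies; the simulated data and variable names are purely illustrative) shows how a binary-outcome model's discrimination (C-statistic) and calibration slope and intercept are commonly computed from predicted probabilities.

```python
# Illustrative sketch only: checking discrimination (C-statistic) and
# calibration for a binary-outcome prediction model. `p_hat` stands in for
# predicted risks from some previously fitted model; `y_true` are observed
# 0/1 outcomes. Both are simulated here for demonstration.
import numpy as np
from sklearn.metrics import roc_auc_score
import statsmodels.api as sm

rng = np.random.default_rng(0)
p_hat = rng.uniform(0.05, 0.95, size=500)   # stand-in predicted probabilities
y_true = rng.binomial(1, p_hat)             # simulated observed outcomes

# Discrimination: the C-statistic equals the area under the ROC curve.
c_stat = roc_auc_score(y_true, p_hat)

# Calibration: regress outcomes on the log-odds of the predictions.
# A slope near 1 and an intercept near 0 indicate good calibration.
logit_p = np.log(p_hat / (1 - p_hat))
cal_fit = sm.Logit(y_true, sm.add_constant(logit_p)).fit(disp=0)
cal_intercept, cal_slope = cal_fit.params

print(f"C-statistic: {c_stat:.2f}")
print(f"Calibration intercept: {cal_intercept:.2f}, slope: {cal_slope:.2f}")
```

Reporting both measures matters because a model can rank individuals well (high C-statistic) while still producing systematically over- or under-estimated risks, which is why the review treats the low rate of calibration reporting as a limitation.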
Conclusions: Although some tools for prognosis and child maltreatment have recently been developed in child and adolescent mental health, none can currently be recommended for clinical practice due to a combination of methodological limitations and poor model performance. New work needs to ensure sufficient sample sizes, representative samples, and testing of model calibration.
Full text (PMC): http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10242964
DOI: http://dx.doi.org/10.1002/jcv2.12034