Purpose: To characterize reporting of P values, confidence intervals (CIs), and statistical power in health professions education research (HPER) through manual and computerized analysis of published research reports.
Method: The authors searched PubMed, Embase, and CINAHL in May 2016 for comparative research studies. For manual analysis of abstracts and main texts, they randomly sampled 250 HPER reports published in 1985, 1995, 2005, and 2015, and 100 biomedical research reports published in 1985 and 2015. Automated analysis of abstracts included all HPER reports published from 1970 to 2015.
Results: In the 2015 HPER sample, P values were reported in 69 of 100 abstracts and 94 of 100 main texts. CIs were reported in 6 abstracts and 22 main texts. Most P values (≥77%) were ≤.05. Across all years, 60 of 164 two-group HPER studies had ≥80% power to detect a between-group difference of 0.5 standard deviations. From 1985 to 2015, the proportion of HPER abstracts reporting a CI increased (odds ratio [OR] 2.87; 95% CI 1.04, 7.88), as did the proportion of main texts reporting a CI (OR 1.96; 95% CI 1.39, 2.78). Comparison with biomedical studies revealed similar reporting of P values but more frequent use of CIs in biomedicine. Automated analysis of 56,440 HPER abstracts found 14,867 (26.3%) reporting a P value and 3,024 (5.4%) reporting a CI, with reporting of both P values and CIs increasing from 1970 to 2015.
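As a minimal sketch of the kind of power calculation referenced above, the snippet below computes the sample size needed for 80% power to detect a 0.5 standard deviation difference between two groups, assuming a two-sided independent-samples t test at alpha = .05 (the abstract does not specify the exact power method used, and the group size of 40 is an illustrative placeholder, not data from the reviewed studies).

```python
# Sketch: two-group power calculation for a 0.5-SD difference (Cohen's d = 0.5),
# assuming a two-sided independent-samples t test at alpha = .05.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group required for 80% power at d = 0.5 (about 64 per group).
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                   ratio=1.0, alternative='two-sided')
print(f"n per group for 80% power: {n_per_group:.1f}")

# Achieved power for a hypothetical study with 40 participants per group.
achieved = analysis.power(effect_size=0.5, nobs1=40, alpha=0.05, ratio=1.0,
                          alternative='two-sided')
print(f"power with 40 per group: {achieved:.2f}")  # roughly 0.60, i.e., underpowered
```

Under these assumptions, roughly 64 participants per group are needed for 80% power at d = 0.5, which puts the finding that only 60 of 164 two-group studies reached this threshold in concrete terms.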
Conclusions: P values are ubiquitous in HPER, CIs are rarely reported, and most studies are underpowered. Most reported P values would be considered statistically significant.
DOI: http://dx.doi.org/10.1097/ACM.0000000000001773