Published reports of randomized clinical trials tend to emphasize the statistical significance of the treatment effect (p values) rather than its magnitude (effect size), although the clinical importance of the evidence depends more on the latter than on the former. We therefore compared the standard measures of effect size (relative and absolute risk reduction) and nonstandard composites of these measures (the product of the relative and absolute risk reductions, and information content) with conventional assessments of statistical significance for 100 trials published in The New England Journal of Medicine. p values were reported for 100% of the trials, relative risk reductions for 89%, and absolute risk reductions for 39%. Only 35% of trials reported both standard measures, and none reported either of the nonstandard measures. The two standard measures correlated weakly with each other (unexplained variance 77%). In contrast, the two nonstandard measures correlated strongly with each other (unexplained variance 1.3%) but weakly with statistical significance (unexplained variance 83%). Consequently, 25% of the trial results were adjudged "clinically unimportant" despite being "statistically significant." In conclusion, composite measures of effect size communicate the clinical importance of trial results better than conventional assessments of risk reduction and statistical significance do.
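The composite measure discussed above is the product of the relative and absolute risk reductions. A minimal sketch of how these quantities follow from arm-level event rates is below; the function and variable names are illustrative, and the "information content" measure is not defined in the abstract, so it is omitted here.

```python
# Illustrative sketch (not from the paper): standard and composite effect-size
# measures computed from the event rates in a trial's control and treatment arms.

def effect_sizes(control_event_rate: float, treatment_event_rate: float) -> dict:
    """Return absolute risk reduction, relative risk reduction, and their product."""
    arr = control_event_rate - treatment_event_rate   # absolute risk reduction
    rrr = arr / control_event_rate                     # relative risk reduction
    return {
        "absolute_risk_reduction": arr,
        "relative_risk_reduction": rrr,
        "composite_product": rrr * arr,                # product composite described in the abstract
    }

# Example: 10% event rate in the control arm vs. 8% in the treatment arm
# -> ARR ~ 0.02, RRR ~ 0.20, composite product ~ 0.004
print(effect_sizes(0.10, 0.08))
```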
DOI: http://dx.doi.org/10.1016/j.amjcard.2012.10.047