Background: Quantitative researchers can use permutation tests to conduct null hypothesis significance testing without resorting to complicated distribution theory. A permutation test can reach the same conclusions as better-known tests such as the t-test while being much easier to understand and implement.
Aim: To introduce and explain permutation tests using two real examples of independent and dependent t-tests and their corresponding permutation tests.
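As an illustrative sketch (with made-up numbers, not the two real examples from the paper), an independent-samples permutation test needs only pooling, shuffling, and counting:

```python
import random

# Hypothetical scores for two independent groups (not the paper's data).
group_a = [12.1, 9.8, 11.4, 10.9, 12.7, 11.0, 10.2, 11.8]
group_b = [10.3, 9.1, 10.6, 9.7, 10.1, 9.4, 10.8, 9.9]

def mean(v):
    return sum(v) / len(v)

# Observed absolute mean difference.
observed = abs(mean(group_a) - mean(group_b))

# Under the null hypothesis the group labels are arbitrary, so every
# relabelling of the pooled scores is equally likely. Shuffle the pooled
# scores, re-split into groups of the original sizes, and record how often
# the relabelled difference is at least as extreme as the observed one.
pooled = group_a + group_b
n_a = len(group_a)
rng = random.Random(0)
n_perm = 10_000
extreme = 0
for _ in range(n_perm):
    rng.shuffle(pooled)
    diff = abs(mean(pooled[:n_a]) - mean(pooled[n_a:]))
    if diff >= observed:
        extreme += 1

p_value = extreme / n_perm  # two-sided permutation p-value
```

With random relabelling rather than full enumeration this is a Monte Carlo approximation to the exact permutation test; for samples this small, the p-value typically lands close to that of the independent-samples t-test.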
Cohen's d - a common effect size - is positively biased. The traditional bias correction, based on a strict distributional assumption, does not always work for a small study with limited data. Non-parametric bootstrapping is not limited by distributional assumptions and can be used to remove the bias in Cohen's d.
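A minimal sketch of bootstrap bias correction for Cohen's d, using made-up data; the paper's exact resampling scheme is not reproduced here, and the within-group resampling below is one standard choice:

```python
import math
import random

def cohens_d(x, y):
    """Standardized mean difference using the pooled standard deviation."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    ssx = sum((v - mx) ** 2 for v in x)
    ssy = sum((v - my) ** 2 for v in y)
    s_pooled = math.sqrt((ssx + ssy) / (nx + ny - 2))
    return (mx - my) / s_pooled

# Hypothetical small samples (not the paper's data).
x = [14.2, 11.9, 13.1, 12.5, 15.0, 12.8, 13.6, 11.5]
y = [11.8, 10.9, 12.2, 10.1, 11.4, 12.0, 10.6, 11.1]

d_hat = cohens_d(x, y)

# Bootstrap: resample each group with replacement and recompute d.
rng = random.Random(1)
boot = []
for _ in range(5000):
    bx = [rng.choice(x) for _ in x]
    by = [rng.choice(y) for _ in y]
    boot.append(cohens_d(bx, by))

# Bias estimate = mean of the bootstrap replicates minus the original
# estimate; subtracting it yields the bias-corrected effect size.
bias_hat = sum(boot) / len(boot) - d_hat
d_corrected = d_hat - bias_hat
```

No normality assumption enters anywhere: the correction comes entirely from the resampled replicates.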
The estimates of intraclass correlations are known to be biased, but there are few analytical ways to assess the amount of bias. The analytical approach requires the normality assumption to estimate the bias. The bootstrap requires no such assumption and can therefore be used to estimate the bias, regardless of the model assumption.
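One way to see the idea, using a one-way ICC(1) on a hypothetical target-by-rater table with targets resampled; both the data and the choice of ICC formula are illustrative assumptions, not taken from the paper:

```python
import random

# Hypothetical ratings: 6 targets (rows) rated by 3 raters (columns).
ratings = [[8, 7, 8], [5, 5, 6], [9, 8, 9], [4, 5, 4], [7, 6, 7], [6, 6, 5]]

def icc1(rows):
    """One-way random-effects ICC(1) from a balanced table."""
    n, k = len(rows), len(rows[0])
    grand = sum(sum(r) for r in rows) / (n * k)
    row_means = [sum(r) / k for r in rows]
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((v - m) ** 2
              for r, m in zip(rows, row_means) for v in r) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

icc_hat = icc1(ratings)

# Bootstrap over targets: resample rows with replacement, recompute ICC.
rng = random.Random(2)
boot = [icc1([rng.choice(ratings) for _ in ratings]) for _ in range(5000)]

# The bootstrap bias estimate requires no normality assumption.
bias_hat = sum(boot) / len(boot) - icc_hat
icc_corrected = icc_hat - bias_hat
```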
Item difficulty and the discrimination index are often used to evaluate test items and diagnose possible issues under true score theory. The two statistics are more closely related than the literature suggests. In particular, the discrimination index can be mathematically determined by the item difficulty and the correlation between the item performance and the total test score.
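To make the three quantities concrete (toy data; the upper-half-minus-lower-half definition of the discrimination index used below is one common convention, not necessarily the paper's):

```python
import math

# Toy test data: 1/0 item scores and total test scores for 10 examinees.
item = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
total = [9, 8, 8, 5, 7, 4, 3, 6, 2, 5]
n = len(item)

# Item difficulty: proportion of examinees answering correctly.
difficulty = sum(item) / n

# Discrimination index: proportion correct in the upper half minus the
# lower half when examinees are ranked by total score.
ranked = sorted(zip(total, item), reverse=True)
half = n // 2
upper = [i for _, i in ranked[:half]]
lower = [i for _, i in ranked[half:]]
discrimination = sum(upper) / half - sum(lower) / half

# Item-total (point-biserial) correlation.
mi, mt = sum(item) / n, sum(total) / n
cov = sum((a - mi) * (b - mt) for a, b in zip(item, total))
r_it = cov / math.sqrt(sum((a - mi) ** 2 for a in item)
                       * sum((b - mt) ** 2 for b in total))
```

On this toy data the item splits the upper and lower halves perfectly, and the high item-total correlation moves in lockstep with the discrimination index, illustrating the dependence the abstract describes.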
The Pearson correlation coefficient can be translated to a common language effect size, which shows the probability of obtaining a certain value on one variable, given the value on the other variable. This common language effect size makes the size of a correlation coefficient understandable to laypeople. Three examples are provided to demonstrate the application of the common language effect size in interpreting Pearson correlation coefficients and multiple correlation coefficients.
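One published conversion of this kind is Dunlap's (1994) common language effect size for a correlation, which gives the probability that a case above the mean on one variable is also above the mean on the other under bivariate normality; whether this is exactly the paper's version is an assumption here:

```python
import math

def common_language_r(r):
    """Dunlap's (1994) common language effect size for a correlation:
    the probability that a case above the mean on one variable is also
    above the mean on the other, assuming bivariate normality."""
    return math.asin(r) / math.pi + 0.5

# r = 0.50 translates to about a 2-in-3 chance of matching sides of the
# mean: asin(0.5)/pi + 0.5 = 1/6 + 1/2 = 0.667 (to three decimals).
cl = common_language_r(0.5)
```

So instead of "r = .50", one can say "two out of three people above average on X are also above average on Y", which is the lay-friendly reading the abstract advocates.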
Ther Innov Regul Sci
July 2015
Sample sizes affect the precision of the confidence interval for the common effect size in a meta-analysis, which combines a number of independent studies of varying sizes. This paper provides a simplified method to estimate the precision of the confidence interval for the common effect size from the number of independent studies and their average sample size. The simplified method proves to be very accurate for retrospective meta-analyses.
To find a common language effect size of multivariate outcomes, we convert the standardized multivariate effect size (Mahalanobis distance) to a probability of a randomly selected subject from one population having a larger discriminant function than a randomly selected subject from another population. This probability is simple to calculate and comprehensible to laypeople. It can serve as the multivariate common language effect size to compare not only two groups but also more than two groups.
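A sketch of the conversion under multivariate normality with a common covariance matrix: the difference between the discriminant scores of two random subjects is normal, which gives the probability Φ(Δ/√2), where Δ is the Mahalanobis distance. The bivariate numbers below are made up for illustration:

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Made-up example: difference of two bivariate means, common covariance.
mean_diff = [1.0, 1.0]
sigma = [[1.0, 0.5], [0.5, 1.0]]

# Invert the 2x2 covariance matrix by hand.
det = sigma[0][0] * sigma[1][1] - sigma[0][1] * sigma[1][0]
inv = [[sigma[1][1] / det, -sigma[0][1] / det],
       [-sigma[1][0] / det, sigma[0][0] / det]]

# Squared Mahalanobis distance d' Sigma^{-1} d.
d = mean_diff
delta_sq = sum(d[i] * inv[i][j] * d[j] for i in range(2) for j in range(2))
delta = math.sqrt(delta_sq)

# Multivariate common language effect size: probability that a random
# subject from population 1 out-scores a random subject from
# population 2 on the discriminant function.
cl_mv = phi(delta / math.sqrt(2.0))
```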
This article shows how to compute statistical power for testing the main effect of treatment in three-arm cluster randomized trials. Using orthogonal coding, we derive the exact test statistic of the treatment effect and its non-central distribution. The non-centrality parameter in the omnibus test is found to be related to the non-centrality parameters in the contrast tests.
Br J Math Stat Psychol
May 2014
We derive the statistical power functions in multi-site randomized trials with multiple treatments at each site, using multi-level modelling. An F statistic is used to test multiple parameters in the multi-level model instead of the Wald chi-square test suggested in the current literature. The F statistic is shown to be more conservative than the Wald statistic in testing any overall treatment effect among the multiple study conditions.
This article provides a way to determine the sample size for the confidence interval of a linear contrast of treatment means in analysis of covariance (ANCOVA) without prior knowledge of the actual covariate means and covariate sum of squares, which are instead modeled through a t statistic. Using the t statistic, one can calculate the sample size needed to achieve the desired probability of obtaining a confidence interval of a specified width for the covariate-adjusted linear contrast.
Covariate adjustment can increase the precision of estimates by removing unexplained variance from the error in randomized experiments, although chance covariate imbalance tends to counteract the improvement in precision. The author develops an easy measure for examining chance covariate imbalance in randomization by standardizing the average covariate difference between the treatment and control conditions. The standardized covariate difference must not exceed an upper bound for the covariate-adjusted analysis to gain precision.
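The measure is quick to compute; standardizing by the pooled within-group standard deviation, as below, is an assumption about the exact formula, and the baseline numbers are hypothetical:

```python
import math

# Hypothetical baseline covariate values for treatment and control arms.
treat = [52.0, 47.5, 50.1, 49.2, 53.4, 48.8]
control = [46.9, 49.5, 45.2, 48.1, 47.7, 46.4]

def pooled_sd(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    ssx = sum((v - mx) ** 2 for v in x)
    ssy = sum((v - my) ** 2 for v in y)
    return math.sqrt((ssx + ssy) / (len(x) + len(y) - 2))

# Standardized covariate difference: a quick check of chance imbalance
# between the arms before deciding on covariate-adjusted analysis.
std_diff = (sum(treat) / len(treat)
            - sum(control) / len(control)) / pooled_sd(treat, control)
```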
The statistical power of a hypothesis test is closely related to the precision of the accompanying confidence interval. In the case of a z-test, the width of the confidence interval is a function of the statistical power for the planned study. If the minimum effect size is used in the power analysis, the width of the confidence interval is the minimum effect size times a multiplicative factor φ.
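For a two-sided z-test, choosing the sample size to hit power 1-β at the minimum effect size δ fixes the standard error at δ/(z(1-α/2) + z(1-β)), so the confidence interval width becomes δ times a factor. That the paper's φ takes exactly this form is an assumption of this sketch:

```python
from statistics import NormalDist

def width_factor(alpha=0.05, power=0.80):
    """Factor phi relating CI width to the minimum effect size for a
    z-test powered at exactly `power` (assumed form:
    phi = 2 * z_{1-alpha/2} / (z_{1-alpha/2} + z_{power}))."""
    z = NormalDist()
    za = z.inv_cdf(1 - alpha / 2)
    zb = z.inv_cdf(power)
    return 2 * za / (za + zb)

# With alpha = .05 and 80% power, the CI is about 1.4 minimum effect
# sizes wide.
phi_factor = width_factor()
```

At 50% power, z_{power} is zero and φ reduces to 2, i.e. the interval half-width equals the minimum effect size.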
The use of covariates is commonly believed to reduce the unexplained error variance and the standard error for the comparison of treatment means, but the reduction in the standard error is neither guaranteed nor uniform over different sample sizes. Covariate mean differences between the treatment conditions can inflate the standard error of the covariate-adjusted mean difference and can actually produce a larger standard error for the adjusted mean difference than for the unadjusted mean difference. When the covariate observations are conceived of as randomly varying from one study to another, the covariate mean differences can be related to Hotelling's T².
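The inflation can be seen in the standard ANCOVA result that the variance of the adjusted mean difference is MSE * (1/n1 + 1/n2 + (xbar1 - xbar2)^2 / Exx), where Exx is the within-group sum of squares of the covariate. With a weak covariate and a large covariate mean imbalance (made-up data below), the adjusted standard error exceeds the unadjusted one:

```python
import math

# Made-up data: weak covariate-outcome relation, big covariate imbalance.
x1, y1 = [5, 6, 7, 8, 9], [10.2, 9.8, 10.5, 9.9, 10.1]
x2, y2 = [1, 2, 3, 4, 5], [9.7, 10.3, 9.6, 10.4, 10.0]
n1, n2 = len(y1), len(y2)

def within_ss(a1, a2, b1, b2):
    """Pooled within-group sum of cross-products of (a, b)."""
    ma1, ma2 = sum(a1) / len(a1), sum(a2) / len(a2)
    mb1, mb2 = sum(b1) / len(b1), sum(b2) / len(b2)
    return (sum((a - ma1) * (b - mb1) for a, b in zip(a1, b1))
            + sum((a - ma2) * (b - mb2) for a, b in zip(a2, b2)))

exx = within_ss(x1, x2, x1, x2)   # within-group SS of the covariate
eyy = within_ss(y1, y2, y1, y2)   # within-group SS of the outcome
exy = within_ss(x1, x2, y1, y2)   # within-group cross-products

# Unadjusted comparison: ordinary two-sample standard error on y alone.
se_unadj = math.sqrt(eyy / (n1 + n2 - 2) * (1 / n1 + 1 / n2))

# Covariate-adjusted comparison (ANCOVA): the covariate mean difference
# adds (xbar1 - xbar2)^2 / exx to the variance factor.
mse = (eyy - exy ** 2 / exx) / (n1 + n2 - 3)
xbar_diff = sum(x1) / n1 - sum(x2) / n2
se_adj = math.sqrt(mse * (1 / n1 + 1 / n2 + xbar_diff ** 2 / exx))
```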
Br J Math Stat Psychol
May 2009
The width of the confidence interval for mean difference can be viewed as a random variable. Overlooking its stochastic nature may lead to a serious underestimate of the sample size required to obtain an adequate probability of achieving the desired width for the confidence interval. The probability of achieving a certain width can either be an unconditional probability or a conditional probability given that the confidence interval includes the true parameter.
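A quick Monte Carlo sketch of the point, using a normal-approximation interval for a single mean with an estimated standard deviation (all numbers hypothetical): planning as if the sample standard deviation will equal σ gives well under a 100% chance of actually achieving the planned width:

```python
import math
import random

n, sigma, z = 10, 1.0, 1.959964
# Width the interval would have if s happened to equal sigma exactly.
planned_width = 2 * z * sigma / math.sqrt(n)

rng = random.Random(3)
n_sim = 20_000
achieved = covered = covered_and_achieved = 0
for _ in range(n_sim):
    sample = [rng.gauss(0.0, sigma) for _ in range(n)]
    m = sum(sample) / n
    s = math.sqrt(sum((v - m) ** 2 for v in sample) / (n - 1))
    width = 2 * z * s / math.sqrt(n)          # realized (random) width
    covers = abs(m - 0.0) <= z * s / math.sqrt(n)
    achieved += width <= planned_width
    covered += covers
    covered_and_achieved += covers and (width <= planned_width)

# Unconditional probability of achieving the planned width, versus the
# probability conditional on the interval covering the true mean.
p_unconditional = achieved / n_sim
p_conditional = covered_and_achieved / covered
```

Because the width depends on the random s, the unconditional probability here is only about 0.56, which is exactly the underestimation risk the abstract warns about.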