It is commonly believed that when a two-way analysis of variance (ANOVA) is carried out in R, the reported p-values are correct. This article shows that this is not always the case: results can range from non-significant to highly significant depending on the options chosen, so the user must know exactly which options yield correct p-values and which do not. It is likewise commonly supposed that analyses of simple balanced experiments with mixed-effects models in SAS and R produce correct p-values. However, the simulation study in the current article indicates that the Type I error rate deviates from the nominal level. The objective of this article is to compare SAS and R with respect to the correctness of results when analyzing small experiments. It is concluded that modern functions and procedures for the analysis of mixed-effects models are sometimes not as reliable as traditional ANOVA based on simple computations of sums of squares.
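
To illustrate the kind of option dependence described above, here is a minimal R sketch (not taken from the article; the data are simulated and all object names are hypothetical). In an unbalanced two-way layout, aov() reports sequential (Type I) sums of squares, so the p-value shown for a factor depends on the order of terms in the model formula, whereas car::Anova() with type = 2 tests order-invariant hypotheses:

```r
## Hypothetical unbalanced two-way layout (illustrative sketch only)
set.seed(42)
dat <- expand.grid(A = factor(c("a1", "a2")),
                   B = factor(c("b1", "b2", "b3")),
                   rep = 1:4)
dat$y <- rnorm(nrow(dat)) + ifelse(dat$A == "a2", 0.5, 0)
dat <- dat[-c(1, 2, 3, 7, 13), ]          # drop rows to make the design unbalanced

summary(aov(y ~ A * B, data = dat))       # A entered first (sequential SS)
summary(aov(y ~ B * A, data = dat))       # A adjusted for B: p-value for A changes
car::Anova(lm(y ~ A * B, data = dat), type = 2)   # requires car; order-invariant Type II tests
```

As a companion sketch (again hypothetical, not the article's simulation), the calibration of a mixed-model analysis for a small balanced experiment can be checked by simulating data under the null hypothesis and recording how often the treatment effect is declared significant:

```r
## Estimate the Type I error rate for a small randomized complete block design
## analysed with lmerTest (requires the lmerTest package)
library(lmerTest)   # lmer() with Satterthwaite approximation for denominator df
set.seed(1)
pvals <- replicate(1000, {
  d <- expand.grid(block = factor(1:4), trt = factor(1:3))
  d$y <- rnorm(nrow(d))                              # null model: no treatment effect
  fit <- suppressMessages(lmer(y ~ trt + (1 | block), data = d))
  anova(fit)["trt", "Pr(>F)"]
})
mean(pvals < 0.05)   # compare the observed rejection rate with the nominal 5% level
```

For a well-calibrated procedure the observed rejection rate should be close to 0.05; the article's point is that, for small experiments, some modern mixed-model procedures deviate from the nominal level more than traditional ANOVA does.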

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10688674
PLOS: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0295066
