Motivation: We address a common problem in large-scale data analysis, particularly in genetics: the huge-scale testing problem, in which millions to billions of hypotheses are tested simultaneously, creating a computational challenge for controlling the inflation of the false discovery rate. As a solution, we propose an alternative algorithm for the well-known linear step-up procedure of Benjamini and Hochberg.
Results: Our algorithm runs in linear time and requires no P-value ordering. It permits splitting huge-scale testing problems arbitrarily into computationally feasible sets, or chunks. Results from the chunks are combined by our algorithm to produce the same outcome as running the controlling procedure on the entire set of tests, thus controlling the global false discovery rate even when P-values are arbitrarily divided. Practical memory usage may likewise be set arbitrarily, according to the size of available memory.
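The chunk-combination idea can be illustrated with a fixed-point formulation of the step-up rule: each pass only counts how many P-values fall at or below the current threshold, so no sorting is needed and the counts can be accumulated chunk by chunk. This is a hypothetical sketch motivated by the abstract, not the authors' supplementary R code; the function names `bh_reject_count` and `bh_reject_count_chunked` are illustrative.

```python
def bh_reject_count(pvals, alpha):
    # Start from the most permissive candidate rejection count r = m and
    # shrink it until r equals the number of P-values at or below r*alpha/m.
    # Each pass is a single linear scan; the P-values are never sorted.
    m = len(pvals)
    r = m
    while True:
        thresh = r * alpha / m
        r_new = sum(1 for p in pvals if p <= thresh)
        if r_new == r:
            return r
        r = r_new

def bh_reject_count_chunked(chunks, alpha):
    # The same fixed-point iteration, but each scan only needs a per-chunk
    # count of P-values at or below the current threshold, so chunks can be
    # processed independently (e.g. loaded one at a time from disk) and
    # their counts summed.
    m = sum(len(c) for c in chunks)
    r = m
    while True:
        thresh = r * alpha / m
        r_new = sum(sum(1 for p in c if p <= thresh) for c in chunks)
        if r_new == r:
            return r
        r = r_new
```

Given the final count `r`, the rejected hypotheses are exactly those with P-value at or below `r * alpha / m`; the chunked variant returns the same `r` no matter how the P-values are partitioned.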
Availability And Implementation: R code is provided in the supplementary material.
Contact: sbatista@cs.princeton.edu
Supplementary Information: Supplementary data are available at Bioinformatics online.
DOI: 10.1093/bioinformatics/btw029