When a statistical test is used to automatically detect evoked potentials, the number of stimuli presented to the subject (the sample size for the statistical test) must normally be specified at the outset. For evoked response detection this can be inefficient: because the signal-to-noise ratio (SNR) of the response is not known in advance, the user will usually err on the side of caution and present a relatively large number of stimuli to ensure adequate statistical power. A more efficient approach is to apply the statistical test repeatedly to the accumulating data over time, allowing the test to stop early for high-SNR responses (thus reducing test time) or continue longer for low-SNR responses. The caveat is that the critical decision boundaries for rejecting the null hypothesis must be adjusted if the intended type-I error rate is to be maintained, since repeated looks at the data would otherwise inflate the false-positive rate. This study presents an intuitive and flexible method for controlling the type-I error rate of sequentially applied statistical tests. The method is built around the discrete convolution of truncated probability density functions, which allows the null distribution of the test statistic to be constructed at each stage of the sequential analysis. Because the null distribution remains tractable, the procedure for finding the stage-wise critical decision boundaries is greatly simplified. The method also permits data-driven adaptations (using data from previous stages) to both the sample size and the statistical test, which offers new opportunities to speed up testing for evoked response detection.
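To make the convolution-of-truncated-densities idea concrete, the following is a minimal Python sketch, not the authors' implementation. It assumes the cumulative test statistic is a sum of independent standard-normal stage increments and uses a simple equal alpha-spending rule; the function name, grid parameters, and spending rule are illustrative assumptions. At each stage the current null density is used to find a critical value, the mass at or above that value is truncated away (it corresponds to trials that already stopped), and the remaining subdensity is convolved with the next increment's density to obtain the next stage's null distribution.

```python
import numpy as np
from scipy import stats
from scipy.signal import fftconvolve

def stagewise_boundaries(n_stages, alpha=0.05, grid_half_width=15.0, n_points=12001):
    """Stage-wise critical values for a sequential test on a cumulative sum
    of independent standard-normal stage statistics (illustrative sketch)."""
    grid = np.linspace(-grid_half_width, grid_half_width, n_points)
    dx = grid[1] - grid[0]
    # Equal alpha spending per stage (an assumption; any spending rule works).
    alpha_spend = np.full(n_stages, alpha / n_stages)
    inc = stats.norm.pdf(grid)   # density of one stage's increment
    dens = inc.copy()            # stage-1 null density of the cumulative statistic
    bounds = []
    for k in range(n_stages):
        # Upper-tail (survival) mass of the current null density on the grid.
        tail = np.cumsum(dens[::-1])[::-1] * dx
        # Smallest grid point whose tail mass falls to this stage's alpha spend.
        idx = np.searchsorted(-tail, -alpha_spend[k])
        bounds.append(grid[idx])
        # Truncate to the continuation region: mass at or above the boundary
        # belongs to trials that already rejected, so it is removed.
        dens = np.where(grid < grid[idx], dens, 0.0)
        if k < n_stages - 1:
            # Convolve the truncated subdensity with the next increment's
            # density to get the null density of the stage-(k+1) statistic.
            dens = fftconvolve(dens, inc, mode="same") * dx
    return bounds

print(stagewise_boundaries(n_stages=5))
```

Because the rejected mass is removed by truncation before each convolution, the rejection probabilities across stages add up to the nominal alpha, which is what keeps the overall type-I error rate controlled without resorting to simulation or numerical integration over all stopping paths.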
DOI: http://dx.doi.org/10.1109/TBME.2019.2919696