Publications by authors named "Raghu Kacker"

Combinatorial testing typically considers a single input model and creates a single test set that achieves t-way coverage. This paper addresses the problem of combinatorial test generation for multiple input models with shared parameters. We formally define the problem and propose an efficient approach to generating multiple test sets, one for each input model, that together satisfy t-way coverage for all of these input models while minimizing the redundancy between these test sets.
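The notion of t-way coverage can be made concrete with a small sketch (an illustration only, not the paper's generation algorithm; the function name is my own):

```python
from itertools import combinations, product

def t_way_coverage(tests, domains, t=2):
    """Fraction of t-way parameter-value combinations covered by a test set.

    tests   : list of tuples, one value per parameter
    domains : list of lists, the possible values of each parameter
    t       : interaction strength
    """
    total = covered = 0
    for params in combinations(range(len(domains)), t):
        needed = set(product(*(domains[p] for p in params)))
        seen = {tuple(row[p] for p in params) for row in tests}
        total += len(needed)
        covered += len(needed & seen)
    return covered / total
```

A test set achieves t-way coverage when this fraction reaches 1.0; the exhaustive test set does so trivially, and the point of combinatorial test generation is to do the same with far fewer rows.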

The signal contribution of the Guide to the Expression of Uncertainty in Measurement (GUM) is the operational view of the uncertainty in measurement as a parameter, associated with a result of a measurement (measured value), that characterizes the dispersion of the values that could reasonably be attributed (assigned) to the measurand. Subsequent documents from the Joint Committee for Guides in Metrology (JCGM) have restored an essentially pre-GUM view of uncertainty and called it a coverage interval. The idea of a coverage interval requires the measurand to have a unique true value; therefore, it does not apply to an ordinary measurand that has a range of true values.

ROC analysis involving two large datasets is an important method for analyzing statistics of interest for classifier decision making in many disciplines. Data dependency caused by multiple uses of the same subjects is ubiquitous, since limited resources often force subjects to be reused to generate more samples. Hence, a two-layer data structure is constructed, and the nonparametric two-sample two-layer bootstrap is employed to estimate standard errors of statistics of interest derived from two sets of data, such as a weighted sum of two probabilities.
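A minimal single-sample sketch of the two-layer idea (my own simplification; the paper's version is two-sample and applied to ROC statistics): layer one resamples whole subject sets, layer two resamples scores within each selected set, which preserves the within-subject dependency.

```python
import random

def two_layer_bootstrap(sets_of_scores, stat, n_boot=1000, seed=0):
    """Two-layer nonparametric bootstrap SE of a statistic.

    Layer 1 resamples whole sets (subjects) with replacement;
    layer 2 resamples scores within each selected set.
    """
    rng = random.Random(seed)
    vals = []
    for _ in range(n_boot):
        picked = [rng.choice(sets_of_scores) for _ in sets_of_scores]
        scores = [rng.choice(s) for s in picked for _ in s]
        vals.append(stat(scores))
    m = sum(vals) / n_boot
    return (sum((v - m) ** 2 for v in vals) / (n_boot - 1)) ** 0.5
```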

Cryptographic hash functions are security-critical algorithms with many practical applications, notably in digital signatures. Developing an approach to test them can be particularly difficult, and bugs can remain unnoticed for many years. We revisit the NIST hash function competition, which was used to develop the SHA-3 standard, and apply a new testing strategy to all available reference implementations.

The data dependency due to multiple use of the same subjects affects the standard error (SE) of the detection cost function (DCF) in speaker recognition evaluation. The DCF is defined as a weighted sum of the probabilities of type I and type II errors at a given threshold. A two-layer data structure is constructed: target scores are grouped into target sets based on the dependency, and likewise for non-target scores.
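The DCF itself is straightforward to compute at a given threshold. A sketch using the standard NIST-style form (the cost weights and target prior shown are placeholder values, not those of the evaluation described here):

```python
def dcf(target_scores, nontarget_scores, threshold,
        c_miss=1.0, c_fa=1.0, p_target=0.5):
    """Detection cost function: weighted sum of miss (type II) and
    false-alarm (type I) probabilities at the given threshold."""
    # Misses: target (genuine) scores falling below the threshold.
    p_miss = sum(s < threshold for s in target_scores) / len(target_scores)
    # False alarms: non-target (impostor) scores at or above the threshold.
    p_fa = sum(s >= threshold for s in nontarget_scores) / len(nontarget_scores)
    return c_miss * p_target * p_miss + c_fa * (1 - p_target) * p_fa
```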

Background: Cell image segmentation (CIS) is an essential part of quantitative imaging of biological cells. Designing a performance measure and conducting significance testing are critical for evaluating and comparing the CIS algorithms for image-based cell assays in cytometry. Many measures and methods have been proposed and implemented to evaluate segmentation methods.
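One of the simplest such measures is the Jaccard index between a computed mask and a reference mask, shown here only as an example of a region-overlap performance measure, not as the measure proposed in the paper:

```python
def jaccard(mask_a, mask_b):
    """Jaccard index (intersection over union) of two binary
    segmentation masks, given as flat 0/1 sequences of equal length."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    union = sum(a or b for a, b in zip(mask_a, mask_b))
    return inter / union if union else 1.0
```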

A key issue in testing is how many tests are needed for a required level of coverage or fault detection. Estimates are often based on error rates in initial testing, or on code coverage. For example, tests may be run until a desired level of statement or branch coverage is achieved.

Empirical studies have shown that most software interaction faults involve one or two variables interacting, with progressively fewer triggered by three or more, and no failure has been reported involving more than six variables interacting. This paper introduces a hypothesis for the origin of this distribution, with implications for removal of interaction faults and reliability growth.

The nonparametric two-sample bootstrap is applied to computing uncertainties of measures in ROC analysis on large datasets, in areas such as biometrics and speaker recognition, when the analytical method cannot be used. Its validation was studied by computing the SE of the area under the ROC curve using the well-established analytical Mann-Whitney statistic method and also using the bootstrap.
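The equivalence that validation exploits — the AUC equals the normalized Mann-Whitney U statistic — and the plain two-sample bootstrap can be sketched as follows (function names are my own):

```python
import random

def auc_mann_whitney(pos, neg):
    """AUC as the normalized Mann-Whitney statistic:
    P(pos > neg) + 0.5 * P(pos == neg)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_se(pos, neg, stat, n_boot=2000, seed=0):
    """Nonparametric two-sample bootstrap SE: resample each of the two
    samples independently, with replacement, and take the SD of the
    resulting statistic values."""
    rng = random.Random(seed)
    vals = []
    for _ in range(n_boot):
        bp = [rng.choice(pos) for _ in pos]
        bn = [rng.choice(neg) for _ in neg]
        vals.append(stat(bp, bn))
    m = sum(vals) / n_boot
    return (sum((v - m) ** 2 for v in vals) / (n_boot - 1)) ** 0.5
```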

The mission of the Joint Committee for Guides in Metrology (JCGM) is to maintain and promote the use of the Guide to the Expression of Uncertainty in Measurement (GUM) and the International Vocabulary of Metrology (VIM, second edition). The JCGM has produced the third edition of the VIM (referred to as VIM3) and a number of documents, some of which are referred to as supplements to the GUM. We are concerned with Supplement 1 (GUM-S1) and the document JCGM 104.

International scientific standards organizations created standards on evaluating uncertainty in the early 1990s. Although scientists from many fields use these standards, they are not consistently implemented in the remote sensing community, where a traditional error-analysis framework persists. For a satellite instrument under development, this can create confusion about whether requirements are met.

Anharmonic calculations using vibrational perturbation theory are known to provide near-spectroscopic accuracy when combined with high-level ab initio potential energy functions. However, performance with economical, popular electronic structure methods is less well characterized. We compare the accuracy of harmonic and anharmonic predictions from Hartree-Fock, second-order perturbation, and density functional theories combined with 6-31G(d) and 6-31+G(d,p) basis sets.

According to the Guide to the Expression of Uncertainty in Measurement (GUM), a result of measurement consists of a measured value together with its associated standard uncertainty. The measured value and the standard uncertainty are interpreted as the expected value and the standard deviation of a state-of-knowledge probability distribution attributed to the measurand. We discuss the term metrological compatibility, introduced by the International Vocabulary of Metrology, third edition (VIM3), for the absence of significant differences between two or more results of measurement for the same measurand.
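A minimal numerical reading of that criterion (a sketch: VIM3 phrases compatibility in terms of the difference between measured values being smaller than a chosen multiple of the standard uncertainty of that difference; the default k and the covariance term here are my assumptions):

```python
def metrologically_compatible(y1, u1, y2, u2, k=2.0, cov=0.0):
    """Two results (y1, u1) and (y2, u2) are taken as compatible when
    |y1 - y2| does not exceed k times the standard uncertainty of the
    difference, sqrt(u1^2 + u2^2 - 2*cov)."""
    u_diff = (u1 ** 2 + u2 ** 2 - 2.0 * cov) ** 0.5
    return abs(y1 - y2) <= k * u_diff
```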

In receiver operating characteristic (ROC) analysis, the sampling variability can result in uncertainties of performance measures. Thus, while evaluating and comparing the performances of algorithms, the measurement uncertainties must be taken into account. The key issue is how to calculate the uncertainties of performance measures in ROC analysis.

In some metrology applications multiple results of measurement for a common measurand are obtained and it is necessary to determine whether the results agree with each other. A result of measurement based on the Guide to the Expression of Uncertainty in Measurement (GUM) consists of a measured value together with its associated standard uncertainty. In the GUM, the measured value is regarded as the expected value and the standard uncertainty is regarded as the standard deviation, both known values, of a state-of-knowledge probability distribution.

To predict the vibrational spectra of molecules, ab initio calculations are often used to compute harmonic frequencies, which are usually scaled by empirical factors as an approximate correction for errors in the force constants and for anharmonic effects. Anharmonic computations of fundamental frequencies are becoming increasingly popular. We report scaling factors, along with their associated uncertainties, for anharmonic (second-order perturbation theory) predictions from HF, MP2, and B3LYP calculations using the 6-31G(d) and 6-31+G(d,p) basis sets.

Vibrational zero-point energies (ZPEs) determined from ab initio calculations are often scaled by empirical factors. An empirical scaling factor partially compensates for the effects arising from vibrational anharmonicity and incomplete treatment of electron correlation. These effects are not random but are systematic.

Covering arrays are structures for compactly representing extremely large input spaces and are used to implement efficient black-box testing of software and hardware. This paper proposes refinements of the In-Parameter-Order strategy (for arbitrary t). When constructing homogeneous-alphabet covering arrays, these refinements reduce runtime in nearly all cases by a factor of more than 5, and in some cases by factors as large as 280.
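For contrast with IPO's parameter-at-a-time construction, here is a naive greedy row-at-a-time pairwise generator (illustration only — this is not the In-Parameter-Order strategy, and it enumerates every candidate row, so it suits only tiny models):

```python
from itertools import combinations, product

def greedy_pairwise(domains):
    """Naive greedy pairwise (2-way) test generation: repeatedly add the
    candidate row that covers the most still-uncovered value pairs."""
    uncovered = {(i, vi, j, vj)
                 for (i, j) in combinations(range(len(domains)), 2)
                 for vi, vj in product(domains[i], domains[j])}
    tests = []
    while uncovered:
        best, best_gain = None, -1
        for row in product(*domains):  # exhaustive candidate enumeration
            gain = sum((i, row[i], j, row[j]) in uncovered
                       for (i, j) in combinations(range(len(row)), 2))
            if gain > best_gain:
                best, best_gain = row, gain
        tests.append(best)
        uncovered -= {(i, best[i], j, best[j])
                      for (i, j) in combinations(range(len(best)), 2)}
    return tests
```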

Vibrational frequencies determined from ab initio calculations are often scaled by empirical factors. An empirical scaling factor partially compensates for the errors arising from vibrational anharmonicity and incomplete treatment of electron correlation. These errors are not random but are systematic biases.
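The standard least-squares convention for such a multiplicative factor, with an ordinary regression-through-origin standard error attached (the paper's actual uncertainty treatment may differ), looks like:

```python
def scaling_factor(calc, expt):
    """Least-squares scaling factor c minimizing sum((expt - c*calc)^2),
    i.e. regression through the origin of experimental frequencies on
    computed ones, returned with the usual standard error of the slope."""
    sxx = sum(v * v for v in calc)
    c = sum(e * v for v, e in zip(calc, expt)) / sxx
    resid = [e - c * v for v, e in zip(calc, expt)]
    se = (sum(r * r for r in resid) / ((len(calc) - 1) * sxx)) ** 0.5
    return c, se
```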

The random-effects model is often used for meta-analysis of clinical studies. The method explicitly accounts for the heterogeneity of studies through a statistical parameter representing the inter-study variation. We discuss several iterative and non-iterative alternative methods for estimating the inter-study variance and hence the overall population treatment effect.
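The most common non-iterative choice is the DerSimonian-Laird moment estimator; a sketch of that estimator follows (one alternative of the kind the paper compares, not necessarily its recommended method):

```python
def dersimonian_laird(effects, variances):
    """DerSimonian-Laird random-effects meta-analysis.

    Returns (tau2, pooled): the moment estimate of the inter-study
    variance and the resulting inverse-variance weighted overall effect."""
    k = len(effects)
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sw
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * y for wi, y in zip(w_re, effects)) / sum(w_re)
    return tau2, pooled
```

With perfectly homogeneous studies the estimator returns zero inter-study variance and reduces to the fixed-effect pooled estimate.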

This article is a survey of the tables of probability distributions published about or after the publication in 1964 of the Handbook of Mathematical Functions, edited by Abramowitz and Stegun.

Permeation-tube moisture generators are used in industry as calibrated sources of water vapor and carrier gas mixtures. Measurements were made using three permeation-tube moisture generators of the type used in the semiconductor industry. This paper describes repeatability and reproducibility standard deviations in measurement of moisture concentration from such generators.
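Repeatability and reproducibility standard deviations of this kind are conventionally obtained from a balanced one-way ANOVA decomposition; an ISO 5725-style sketch (not the paper's specific analysis):

```python
def repeatability_reproducibility(groups):
    """Balanced one-way ANOVA: the within-group mean square gives the
    repeatability SD; adding the between-group variance component gives
    the reproducibility SD. `groups` is a list of equal-length lists of
    replicate measurements, one list per generator (or laboratory)."""
    k, n = len(groups), len(groups[0])
    means = [sum(g) / n for g in groups]
    grand = sum(means) / k
    ms_within = sum((x - m) ** 2
                    for g, m in zip(groups, means) for x in g) / (k * (n - 1))
    ms_between = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    s_r2 = ms_within                                   # repeatability variance
    s_l2 = max(0.0, (ms_between - ms_within) / n)      # between-group component
    return s_r2 ** 0.5, (s_r2 + s_l2) ** 0.5
```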

The disposal of ready mixed concrete truck wash water and returned plastic concrete is a growing concern for the ready mixed concrete industry. Recently, extended set-retarding admixtures, or stabilizers, which slow or stop the hydration of portland cement have been introduced to the market. Treating truck wash-water or returned plastic concrete with stabilizing admixtures delays its setting and hardening, thereby facilitating the incorporation of these typically wasted materials in subsequent concrete batches.
