Statistical modeling of the evaluation of evidence with the use of the likelihood ratio has a long history, dating from the Dreyfus case at the end of the nineteenth century, through the work at Bletchley Park in the Second World War, to the present day. The development received a significant boost in 1977 with a seminal paper by Dennis Lindley, which introduced a Bayesian hierarchical random-effects model for the evaluation of evidence, illustrated with refractive index measurements on fragments of glass.
This letter comments on the report "Forensic science in criminal courts: Ensuring scientific validity of feature-comparison methods" recently released by the President's Council of Advisors on Science and Technology (PCAST). The report advocates a two-stage procedure for the evaluation of forensic evidence: the first stage is a categorical "match"/"non-match" decision, and the second is empirical assessment of sensitivity (correct acceptance) and false-alarm (false acceptance) rates. Almost always, quantitative data from feature-comparison methods are continuously valued and exhibit within-source variability.
Procedures are reviewed, and recommendations made, for the choice of the size of a sample to estimate the characteristics (sometimes known as parameters) of a population consisting of discrete items, each of which belongs to one and only one of a number of categories, with examples drawn from forensic science. Four sampling procedures are described for binary responses, where the number of possible categories is only two, e.g.
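The abstract does not specify which four procedures are reviewed, but a minimal sketch of one common Bayesian approach to the binary case may clarify the idea. Assuming a uniform Beta(1, 1) prior on the population proportion, if all n sampled items turn out positive the posterior is Beta(n + 1, 1), so the posterior probability that the proportion exceeds p0 is 1 − p0^(n+1); the function below (a hypothetical helper, not from the paper) finds the smallest such n.

```python
def min_sample_size(p0: float, prob: float) -> int:
    """Smallest n such that, if all n sampled items are positive
    (e.g. all contain drugs), the posterior probability that the
    population proportion exceeds p0 is at least `prob`.

    Assumes a uniform Beta(1, 1) prior; the posterior after n
    successes in n trials is Beta(n + 1, 1), for which
    P(theta > p0) = 1 - p0 ** (n + 1).
    """
    n = 0
    while 1.0 - p0 ** (n + 1) < prob:
        n += 1
    return n

# To be at least 95% sure that more than half the population of
# items is positive, given every sampled item tested positive:
print(min_sample_size(0.5, 0.95))  # → 4
```

With this prior, inspecting only four items (all positive) already gives better than 95% posterior probability that most of the population is positive, which is why surprisingly small forensic samples can be defensible.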
A random effects model using two levels of hierarchical nesting has been applied to the calculation of a likelihood ratio as a solution to the problem of comparing two sets of replicated multivariate continuous observations where it is unknown whether the two sets of measurements share a common origin. Replicate measurements from a population of such measurements allow the calculation of both within-group and between-group variances/covariances. The within-group distribution has been modelled assuming a Normal distribution, and the between-group distribution has been modelled using a kernel density estimation procedure.
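A simplified, univariate sketch of this kind of two-level likelihood ratio (single measurements rather than replicates, and grid integration rather than the paper's closed forms) might look as follows; the function name, bandwidth, and grid resolution are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def likelihood_ratio(y1, y2, group_means, sigma_within, h):
    """Univariate two-level likelihood ratio sketch.

    Within-group model: Normal(theta, sigma_within^2).
    Between-group density g(theta): Gaussian kernel density estimate
    over observed group means from a background population, with
    bandwidth h.  Integrals over theta are evaluated on a grid.
    """
    grid = np.linspace(group_means.min() - 4 * h,
                       group_means.max() + 4 * h, 2000)
    dx = grid[1] - grid[0]

    def norm_pdf(x, mu, s):
        return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

    # Between-group density: average of kernels centred at group means.
    g = norm_pdf(grid[:, None], group_means[None, :], h).mean(axis=1)

    f1 = norm_pdf(y1, grid, sigma_within)  # f(y1 | theta) on the grid
    f2 = norm_pdf(y2, grid, sigma_within)  # f(y2 | theta) on the grid

    # Same source: one common theta integrated out.
    numerator = np.sum(f1 * f2 * g) * dx
    # Different sources: independent thetas for each set.
    denominator = (np.sum(f1 * g) * dx) * (np.sum(f2 * g) * dx)
    return numerator / denominator
```

Two nearly equal measurements yield a ratio above one (support for common origin), while well-separated measurements yield a ratio below one; the multivariate, replicated version in the paper follows the same numerator/denominator structure.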
Errors in sample handling or test interpretation may cause false positives in forensic DNA testing. This article uses a Bayesian model to show how the potential for a false positive affects the evidentiary value of DNA evidence and the sufficiency of DNA evidence to meet traditional legal standards for conviction. The Bayesian analysis is contrasted with the "false positive fallacy," an intuitively appealing but erroneous alternative interpretation.
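The core effect can be sketched in a few lines. Under the standard assumptions that a true source always yields a reported match, and that a non-source yields a reported match either by coincidence (the random match probability) or by error (the false positive probability), the likelihood ratio for a reported match is capped near the reciprocal of the error rate. The function below is an illustrative sketch of this reasoning, not the article's exact model.

```python
def reported_match_lr(rmp: float, fpp: float) -> float:
    """Likelihood ratio for a *reported* DNA match when a false
    positive is possible.

    rmp: random match probability (coincidental true match).
    fpp: false positive probability (handling/interpretation error).
    Assumes P(reported match | true source) = 1, and
    P(reported match | not source) = rmp + fpp * (1 - rmp).
    """
    return 1.0 / (rmp + fpp * (1.0 - rmp))

print(reported_match_lr(1e-9, 0.0))   # error-free testing: LR = 1e9
print(reported_match_lr(1e-9, 1e-3))  # LR capped near 1/fpp ≈ 1000
```

Even a one-in-a-billion random match probability contributes little once the error rate dominates, which is why the Bayesian treatment, rather than the "false positive fallacy," is needed to weigh such evidence.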
J Forensic Sci, September 2002
A consignment of individual packages is thought to contain illegal material, such as drugs, in some or all of the packages. A sample from the consignment is inspected and the quantity of drugs in each package of the sample is measured. It is desired to estimate the total quantity of drugs in the consignment.