We present a method for measuring the efficacy of eyewitness identification procedures by applying fundamental principles of information theory. The resulting measure evaluates the expected information gain (EIG) for an identification attempt, a single value that summarizes an identification procedure's overall potential for reducing uncertainty about guilt or innocence across all possible witness responses. In a series of demonstrations, we show that EIG often disagrees with existing measures.
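As a rough illustration of how an expected-information-gain measure of this kind can be computed (the prior, response categories, and response probabilities below are hypothetical placeholders, not values from the article), a minimal Python sketch:

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a Bernoulli probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def expected_information_gain(prior_guilt, p_resp_given_guilty, p_resp_given_innocent):
    """EIG = prior uncertainty about guilt minus the expected posterior
    uncertainty, averaged over all possible witness responses."""
    prior_innocent = 1 - prior_guilt
    eig = entropy(prior_guilt)
    for resp in p_resp_given_guilty:
        # Marginal probability of this witness response.
        p_resp = (p_resp_given_guilty[resp] * prior_guilt
                  + p_resp_given_innocent[resp] * prior_innocent)
        if p_resp == 0:
            continue
        # Posterior probability of guilt after the response (Bayes' rule).
        post_guilt = p_resp_given_guilty[resp] * prior_guilt / p_resp
        eig -= p_resp * entropy(post_guilt)
    return eig

# Hypothetical response distributions for culprit-present vs. culprit-absent lineups.
print(expected_information_gain(
    prior_guilt=0.5,
    p_resp_given_guilty={"suspect_id": 0.60, "filler_id": 0.15, "reject": 0.25},
    p_resp_given_innocent={"suspect_id": 0.05, "filler_id": 0.35, "reject": 0.60},
))
```

On this reading, larger EIG means the procedure is expected to remove more uncertainty about guilt, whatever response the witness gives.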
Behav Res Methods
February 2021
In a standard eyewitness lineup scenario, a witness observes a culprit commit a crime and is later asked to identify the culprit from a set of faces, the lineup. Signal detection theory (SDT), a powerful modeling framework for analyzing data, has recently become a common way to analyze lineup data. The goal of this paper is to introduce a new R package, sdtlu (Signal Detection Theory - LineUp), that streamlines and automates the SDT analysis of lineup data.
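sdtlu itself is an R package; as a language-neutral sketch of the simplest quantity such an analysis recovers, here is an equal-variance Gaussian SDT calculation in Python with hypothetical suspect-identification rates (a full lineup model would also account for filler identifications and rejections, and this is not the sdtlu interface):

```python
from statistics import NormalDist

def sdt_summary(hit_rate, false_id_rate):
    """Equal-variance Gaussian SDT: discriminability (d') and criterion (c)
    from a hit rate and a false-identification rate."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(false_id_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_id_rate))
    return d_prime, criterion

# Hypothetical rates: suspect IDs from culprit-present vs. culprit-absent lineups.
d_prime, criterion = sdt_summary(hit_rate=0.62, false_id_rate=0.11)
print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")
```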
Background: The majority of eyewitness lineup studies are laboratory-based. How well the conclusions of these studies, including the relationship between confidence and accuracy, generalize to real-world police lineups is an open question. Signal detection theory (SDT) has emerged as a powerful framework for analyzing lineups that allows comparison of witnesses' memory accuracy under different types of identification procedures.
The nature of the relationship between deductive and inductive reasoning is a hotly debated topic. A key question is whether there is a single dimension of evidence underlying both deductive and inductive judgments. Following Rips (2001), Rotello and Heit (2009) and Heit and Rotello (2010) implemented one- and two-dimensional models grounded in signal detection theory to assess predictions for receiver operating characteristic (ROC) data, and concluded in favor of the two-dimensional model.
J Exp Psychol Learn Mem Cogn
July 2019
One perennially important question for theories of sentence comprehension is whether the human sentence processing mechanism is parallel (i.e., it simultaneously represents multiple syntactic analyses of linguistic input) or serial (i.e., it maintains only one analysis at a time).
Direct replication is valuable but should not be elevated over other worthwhile research practices, including conceptual replication and checking of statistical assumptions. As noted by Rotello et al. (2015), replicating studies without checking the statistical assumptions can lead to increased confidence in incorrect conclusions.
We outline an evolution process for tongue elements composed of poly(p-aryleneethynylene)s (PAE) and detergents, resulting in a chemical tongue (24 elements) that discerns antibiotics. Cross-breeding of this new tongue with tongue elements that consist of simple poly(p-phenyleneethynylene)s (PPE) at different pH-values leads to an enlarged sensor array, composed of 30 elements. This tongue was pruned, employing principal component analysis.
We report a nanosensor that uses cell lysates to rapidly profile the tumorigenicity of cancer cells. This sensing platform uses host-guest interactions between cucurbit[7]uril and the cationic headgroup of a gold nanoparticle to non-covalently modify the binding of three fluorescent proteins of a multi-channel sensor in situ. This approach doubles the number of output channels to six, providing single-well identification of cell lysates with 100% accuracy.
Cogn Res Princ Implic
September 2016
How should the accuracy of eyewitness identification decisions be measured, so that best practices for identification can be determined? This fundamental question is under intense debate. One side advocates for continued use of a traditional measure of identification accuracy, known as the , whereas the other side argues that receiver operating characteristic curves (ROCs) should be used instead because diagnosticity is confounded with response bias. Diagnosticity proponents have offered several criticisms of ROCs, which we show are either false or irrelevant to the assessment of eyewitness accuracy.
J Exp Psychol Learn Mem Cogn
July 2015
The 2-high-threshold (2HT) model of recognition memory assumes that test items result in distinct internal states: they are either detected or not, and the probability of responding at a particular confidence level that an item is "old" or "new" depends on the state-response mapping parameters. The mapping parameters are independent of the probability that an item yields a particular state.
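For orientation, a minimal sketch of the binary-response core of the 2HT model with arbitrary parameter values; the confidence-rating mapping parameters discussed above would further distribute each state's responses across rating categories:

```python
def two_ht_old_rates(d_old, d_new, guess_old):
    """Double high-threshold predictions for calling an item 'old'.
    Detected targets are always called old, detected lures always new,
    and undetected items are guessed 'old' with probability guess_old."""
    p_old_target = d_old + (1 - d_old) * guess_old
    p_old_lure = (1 - d_new) * guess_old
    return p_old_target, p_old_lure

# Hypothetical detection and guessing parameters.
print(two_ht_old_rates(d_old=0.6, d_new=0.6, guess_old=0.4))
```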
Screening methods that use traditional genomic, transcriptional, proteomic and metabonomic signatures to characterize drug mechanisms are known. However, they are time consuming and require specialized equipment. Here, we present a high-throughput multichannel sensor platform that can profile the mechanisms of various chemotherapeutic drugs in minutes.
There is a replication crisis in science, to which psychological research has not been immune: Many effects have proven uncomfortably difficult to reproduce. Although the reliability of data is a serious concern, we argue that there is a deeper and more insidious problem in the field: the persistent and dramatic misinterpretation of empirical results that replicate easily and consistently. Using a series of four highly studied "textbook" examples from different research domains (eyewitness memory, deductive reasoning, social psychology, and child welfare), we show how simple unrecognized incompatibilities among dependent measures, analysis tools, and the properties of data can lead to fundamental interpretive errors.
Traditionally, memory, reasoning, and categorization have been treated as separate components of human cognition. We challenge this distinction, arguing that there is broad scope for crossover between the methods and theories developed for each task. The links between memory and reasoning are illustrated in a review of two lines of research.
We tested the dual process and unequal variance signal detection models by jointly modeling recognition and source confidence ratings. The 2 approaches make unique predictions for the slope of the recognition memory zROC function for items with correct versus incorrect source decisions. The standard bivariate Gaussian version of the unequal variance model predicts little or no slope difference between the source-correct and source-incorrect functions.
Studies of the belief bias effect in syllogistic reasoning have relied on three traditional difference score measures: the logic index, belief index, and interaction index. Dube, Rotello, and Heit (2010, 2011) argued that the interaction index incorrectly assumes a linear receiver operating characteristic (ROC). Here, all three measures are addressed.
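For orientation, the three indices are difference scores over acceptance rates for the Valid/Invalid x Believable/Unbelievable design. A minimal sketch using one common sign convention (conventions and scaling vary across papers, and the rates below are purely illustrative):

```python
def belief_bias_indices(acc_vb, acc_vu, acc_ib, acc_iu):
    """Difference-score indices from acceptance rates for Valid/Invalid x
    Believable/Unbelievable syllogisms (one common sign convention)."""
    logic_index = (acc_vb + acc_vu) - (acc_ib + acc_iu)    # effect of validity
    belief_index = (acc_vb + acc_ib) - (acc_vu + acc_iu)   # effect of believability
    # Interaction: is the validity effect larger for unbelievable conclusions?
    interaction_index = (acc_vu - acc_iu) - (acc_vb - acc_ib)
    return logic_index, belief_index, interaction_index

# Illustrative acceptance rates (proportion of conclusions endorsed).
print(belief_bias_indices(acc_vb=0.89, acc_vu=0.56, acc_ib=0.71, acc_iu=0.10))
```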
Recognition memory studies often find that emotional items are more likely than neutral items to be labelled as studied. Previous work suggests this bias is driven by increased memory strength/familiarity for emotional items. We explored strength and bias interpretations of this effect with the conjecture that emotional stimuli might seem more familiar because they share features with studied items from the same category.
In "A Critical Comparison of Discrete-State and Continuous Models of Recognition Memory: Implications for Recognition and Beyond," Pazzaglia, Dube, and Rotello (2013) explored the threshold multinomial processing tree (MPT) framework as applied to several domains of experimental psychology. Pazzaglia et al. concluded that threshold MPT analyses require assumptions at the representation and measurement levels that are contradicted by existing data in several domains.
Reliance on remembered facts or events requires memory for their sources, that is, the contexts in which those facts or events were embedded. Understanding of source retrieval has been stymied by the fact that uncontrolled fluctuations of attention during encoding can cloud results of key importance to theoretical development. To address this issue, we combined electrophysiology (high-density electroencephalogram, EEG, recordings) with computational modeling of behavioral results.
Multinomial processing tree (MPT) models such as the single high-threshold, double high-threshold, and low-threshold models are discrete-state decision models that map internal cognitive events onto overt responses. The apparent benefit of these models is that they provide independent measures of accuracy and response bias, a claim that has motivated their frequent application in many areas of psychological science including perception, item and source memory, social cognition, reasoning, educational testing, eyewitness testimony, and psychopathology. Before appropriate conclusions about a given analysis can be drawn, however, one must first confirm that the model's assumptions about the underlying structure of the data are valid.
J Exp Psychol Learn Mem Cogn
September 2013
Source memory zROC slopes change from below 1 to above 1 depending on which source gets the strongest learning. This effect has been attributed to memory processes, either in terms of a threshold source recollection process or changes in the variability of continuous source evidence. We propose 2 decision mechanisms that can produce the slope effect, and we test them in 3 experiments.
A classic question in the recognition memory literature is whether retrieval is best described as a continuous-evidence process consistent with signal detection theory (SDT), or a threshold process consistent with many multinomial processing tree (MPT) models. Because receiver operating characteristics (ROCs) based on confidence ratings are typically curved as predicted by SDT, this model has been preferred in many studies of recognition memory (Wixted, 2007). Recently, Bröder and Schütz (2009) argued that curvature in ratings ROCs may be produced by variability in scale usage; therefore, ratings ROCs are not diagnostic in deciding between the two approaches.
Koen and Yonelinas (2010; K&Y) reported that mixing classes of targets that had short (weak) or long (strong) study times had no impact on zROC slope, contradicting the predictions of the encoding variability hypothesis. We show that they actually derived their predictions from a mixture unequal-variance signal detection (UVSD) model, which assumes 2 discrete levels of strength instead of the continuous variation in learning effectiveness proposed by the encoding variability hypothesis. We demonstrate that the mixture UVSD model predicts an effect of strength mixing only when there is a large performance difference between strong and weak targets, and the strength effect observed by K&Y was too small to produce a mixing effect.
J Exp Psychol Learn Mem Cogn
January 2012
In recognition memory, a classic finding is that receiver operating characteristics (ROCs) are curvilinear. This has been taken to support the fundamental assumptions of signal detection theory (SDT) over discrete-state models such as the double high-threshold model (2HTM), which predicts linear ROCs. Recently, however, Bröder and Schütz (2009) challenged this argument by noting that most of the data on which support for SDT is based have involved confidence ratings.