Normative models of decision-making that optimally transform noisy (sensory) information into categorical decisions qualitatively mismatch human behavior. Indeed, leading computational models have only achieved high empirical corroboration by adding task-specific assumptions that deviate from normative principles. To address this, we offer a Bayesian approach that implicitly produces a posterior distribution over possible answers (hypotheses) given the sensory information.
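As a minimal sketch of this kind of posterior computation (the Gaussian likelihoods and the two-category setup are illustrative assumptions, not the article's actual model):

```python
# Illustrative sketch: turn one noisy sensory observation into a posterior
# over categorical hypotheses via Bayes' rule. Gaussian likelihoods and the
# two-category setup are assumptions for illustration only.
import numpy as np

def posterior_over_categories(observation, means, sigma, priors):
    """Return P(category | observation) for Gaussian likelihoods."""
    likelihoods = np.exp(-0.5 * ((observation - means) / sigma) ** 2)
    unnormalized = likelihoods * priors
    return unnormalized / unnormalized.sum()

# Two hypotheses (e.g., "left" vs. "right" stimulus), equal priors, one noisy sample.
post = posterior_over_categories(observation=0.3,
                                 means=np.array([-1.0, 1.0]),
                                 sigma=1.0,
                                 priors=np.array([0.5, 0.5]))
print(post)  # posterior probability of each hypothesis
```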
Human probability judgments are both variable and subject to systematic biases. Most probability judgment models treat variability and bias separately: a deterministic model explains the origin of bias, to which a noise process is added to generate variability. But these accounts do not explain the characteristic inverse U-shaped signature linking mean and variance in probability judgments.
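One way to see how a single noise process can produce that inverse-U signature is a small-sample account, used here purely for illustration: if each judgment were the relative frequency of "success" in a small mental sample of size n, the variance across judgments would be p(1 - p)/n, peaking at p = 0.5.

```python
# Illustrative simulation of an inverse-U mean-variance signature under a
# hypothetical small-sample account (not the article's model): each judgment is
# the relative frequency of "success" in a mental sample of size n, so the
# variance across judgments is p * (1 - p) / n, largest near p = 0.5.
import numpy as np

rng = np.random.default_rng(0)
n_sample = 10                                   # hypothetical mental sample size
for p in np.linspace(0.05, 0.95, 19):
    judgments = rng.binomial(n_sample, p, size=5000) / n_sample
    print(f"p={p:.2f}  mean judgment={judgments.mean():.3f}  variance={judgments.var():.4f}")
# The printed variance peaks near p = 0.5 and shrinks toward 0 and 1.
```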
Bayesian approaches presuppose that following the coherence conditions of probability theory makes probabilistic judgments more accurate. But other influential theories claim that accurate judgments (with high "ecological rationality") do not need to be coherent. Empirical results support these latter theories, threatening Bayesian models of intelligence and suggesting, moreover, that "heuristics and biases" research, which focuses on violations of coherence, is largely irrelevant.
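For concreteness, "coherence" here refers to constraints such as the complement and conjunction rules of probability theory; a small sketch of such a check follows (the specific rules chosen and the numbers are illustrative assumptions):

```python
# Illustrative check of two coherence conditions for probability judgments:
# the complement rule P(A) + P(not A) = 1 and the conjunction rule
# P(A and B) <= min(P(A), P(B)). The example values below are hypothetical.
def coherence_violations(p_a, p_not_a, p_b, p_a_and_b, tol=1e-9):
    violations = []
    if abs(p_a + p_not_a - 1.0) > tol:
        violations.append("complement rule violated")
    if p_a_and_b > min(p_a, p_b) + tol:
        violations.append("conjunction rule violated")
    return violations

# A conjunction-fallacy pattern: the conjunction is judged more probable
# than one of its conjuncts.
print(coherence_violations(p_a=0.3, p_not_a=0.7, p_b=0.8, p_a_and_b=0.4))
```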
In 1956, Brunswik proposed a definition of what he called intuitive and analytic cognitive processes, not in terms of verbally specified properties, but operationally, based on observable error distributions. In the decades since, the diagnostic value of error distributions has generally been overlooked, arguably because of a long tradition of treating error as exogenous to (and irrelevant for) the process. Building on Brunswik's ideas, we develop the precise/not precise (PNP) model, which uses a mixture distribution to model the proportion of error-perturbed versus error-free executions of an algorithm, to determine whether Brunswik's claims can be replicated and extended.
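A minimal sketch of the mixture idea described above (the Gaussian noise term and the counting-based estimation shortcut are assumptions for illustration, not the authors' fitting procedure):

```python
# Sketch of a precise/not precise (PNP)-style mixture: with probability lam a
# response is an error-free execution of the algorithm (it equals the model
# prediction exactly); otherwise it is perturbed by noise. Gaussian noise and
# the counting-based estimate of lam are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def simulate_pnp(predictions, lam, sigma):
    """Simulate responses: error-free with probability lam, noisy otherwise."""
    error_free = rng.random(len(predictions)) < lam
    noise = rng.normal(0.0, sigma, len(predictions))
    return np.where(error_free, predictions, predictions + noise)

predictions = rng.uniform(0, 1, 1000)            # hypothetical algorithm output
responses = simulate_pnp(predictions, lam=0.6, sigma=0.1)

# Crude estimate of lam: the share of responses that match the prediction exactly.
lam_hat = np.mean(responses == predictions)
print(f"estimated proportion of error-free executions: {lam_hat:.2f}")
```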
In this study, we explore how people integrate the risks of assets in a simulated financial market into a judgment of the conjunctive risk that all assets decrease in value, both when the assets are independent and when a systematic risk affects all assets. Simulations indicate that while mental calculation according to a naïve application of probability theory is best when the assets are independent, additive or exemplar-based algorithms perform better when systematic risk is high. Considering that people tend to approach compound probability tasks intuitively using additive heuristics, we expected participants to find it easiest to master tasks with high systematic risk (the most complex tasks from the standpoint of probability theory), while shifting to probability theory or exemplar memory when the assets are independent.
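An illustrative simulation of why the naive product rule holds only under independence (the factor structure and numbers below are assumptions, not the study's task):

```python
# Illustrative simulation: each asset falls if a shared market factor plus its
# own idiosyncratic shock is negative. With no systematic risk (rho = 0) the
# naive product of marginal risks matches the conjunctive risk; with high
# systematic risk (rho = 0.8) downturns co-occur and the product rule
# underestimates the probability that all assets fall. All values are
# hypothetical and for illustration only.
import numpy as np

rng = np.random.default_rng(2)
n_assets, n_trials = 3, 200_000

for rho in (0.0, 0.8):                            # share of systematic risk
    market = rng.normal(0, 1, (n_trials, 1))
    shocks = rng.normal(0, 1, (n_trials, n_assets))
    returns = np.sqrt(rho) * market + np.sqrt(1 - rho) * shocks
    falls = returns < 0
    marginal = falls.mean(axis=0)                 # P(asset i falls)
    conjunctive = falls.all(axis=1).mean()        # P(all assets fall)
    product_rule = marginal.prod()                # answer assuming independence
    print(f"rho={rho:.1f}  product rule={product_rule:.3f}  "
          f"true conjunctive risk={conjunctive:.3f}")
```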