Implicit Value Updating Explains Transitive Inference Performance: The Betasort Model.

PLoS Comput Biol

Department of Psychology, Columbia University, New York, New York, United States of America.

Published: September 2015

Transitive inference (the ability to infer that B > D given that B > C and C > D) is a widespread characteristic of serial learning, observed in dozens of species. Despite these robust behavioral effects, reinforcement learning models reliant on reward prediction error or associative strength routinely fail to perform these inferences. We propose an algorithm called betasort, inspired by cognitive processes, which performs transitive inference at low computational cost. This is accomplished by (1) representing stimulus positions along a unit span using beta distributions, (2) treating positive and negative feedback asymmetrically, and (3) updating the position of every stimulus during every trial, whether that stimulus was visible or not. Performance was compared for rhesus macaques, humans, and the betasort algorithm, as well as Q-learning, an established reward-prediction error (RPE) model. Of these, only Q-learning failed to respond above chance during critical test trials. Betasort's success (when compared to RPE models) and its computational efficiency (when compared to full Markov decision process implementations) suggest that the study of reinforcement learning in organisms will be best served by a feature-driven approach to comparing formal models.
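To make the abstract's description concrete, here is a minimal, illustrative Python sketch of the three features named above: beta-distributed position estimates, asymmetric handling of positive versus negative feedback, and implicit updating of stimuli that were not shown. It is not the authors' reference implementation; the class name, the specific update rules, and the `recall` decay parameter are simplifying assumptions made for illustration.

```python
import random

class BetasortSketch:
    """Illustrative sketch of the three features named in the abstract:
    beta-distributed position estimates, asymmetric positive/negative
    feedback, and implicit updating of every stimulus on every trial.
    The exact update rules and the recall parameter are simplifications,
    not the published algorithm."""

    def __init__(self, n_stimuli, recall=0.9):
        # Position of stimulus i on the unit span ~ Beta(U[i] + 1, L[i] + 1),
        # where U[i]/L[i] accumulate evidence toward the upper/lower end.
        self.U = [0.0] * n_stimuli
        self.L = [0.0] * n_stimuli
        self.recall = recall  # assumed decay ("relaxation") parameter

    def sample_positions(self):
        return [random.betavariate(u + 1.0, l + 1.0)
                for u, l in zip(self.U, self.L)]

    def choose(self, a, b):
        pos = self.sample_positions()
        return a if pos[a] > pos[b] else b

    def update(self, a, b, chosen, correct):
        pos = self.sample_positions()
        # Relax all counts so new evidence can reorder stale estimates.
        self.U = [u * self.recall for u in self.U]
        self.L = [l * self.recall for l in self.L]
        # The rewarded item of the pair is the chosen one if correct,
        # otherwise the other one.
        winner, loser = (a, b) if (chosen == a) == correct else (b, a)
        if correct:
            # Positive feedback: consolidate the pair's current estimates.
            for s in (winner, loser):
                self.U[s] += pos[s]
                self.L[s] += 1.0 - pos[s]
        else:
            # Negative feedback is treated asymmetrically: push the two
            # items apart rather than merely re-weighting them.
            self.U[winner] += 1.0
            self.L[loser] += 1.0
        # Implicit updating: stimuli not shown on this trial are still
        # nudged, keeping the whole inferred order internally consistent.
        for i in range(len(self.U)):
            if i in (a, b):
                continue
            if pos[i] > pos[winner]:
                self.U[i] += pos[i]
            elif pos[i] < pos[loser]:
                self.L[i] += 1.0 - pos[i]
```

Trained only on adjacent premise pairs (A vs. B, B vs. C, C vs. D, D vs. E, with the earlier item rewarded), the sampled positions of the interior items tend to spread out in the correct order, so a novel B-versus-D probe tends to elicit a B choice under this sketch, the behavior that, per the abstract, a plain Q-learner does not show above chance.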


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4583549
DOI: http://dx.doi.org/10.1371/journal.pcbi.1004523

Similar Publications

Transitive inference allows people to infer new relations between previously experienced premises. It has been hypothesized that this logical thinking relies on a mental schema that spatially organizes elements, facilitating inferential insights. However, recent evidence challenges the need for these complex cognitive processes.


Transitive reasoning in the adult domestic hen in a six-term series task.

Anim Cogn

November 2024

CNRS, INRAE, Université de Tours, PRC (Physiologie de la Reproduction et des Comportements), Nouzilly, Indre-et-Loire, F-37380, France.

Transitive inference (TI) is a disjunctive syllogism that allows an individual to indirectly infer a relationship between two components by knowing their respective relationships to a third component (if A > B and B > C, then A > C). The common procedure is the 5-term series task, in which individuals are tested on indirect, unlearned relations. Few bird species have been tested for TI to date, which limits our knowledge of the phylogenetic spread of such reasoning ability.
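As background on the procedure, here is a minimal sketch of how adjacent premise pairs and the unlearned test pairs are typically constructed for an n-term series task; the convention of excluding the end anchors from critical tests is general background rather than a detail taken from this study.

```python
def series_task(items):
    """Adjacent training pairs and non-adjacent internal test pairs for an
    n-term series task, e.g. items = "ABCDE" (5-term) or "ABCDEF" (6-term)."""
    premises = [(items[i], items[i + 1]) for i in range(len(items) - 1)]
    # Critical tests pair non-adjacent interior items, skipping the end
    # anchors whose reward histories are unambiguous (the first item is
    # always rewarded, the last never) -- e.g. B vs. D in the 5-term task.
    interior = items[1:-1]
    tests = [(interior[i], interior[j])
             for i in range(len(interior))
             for j in range(i + 2, len(interior))]
    return premises, tests

premises, tests = series_task("ABCDE")
# premises -> [('A','B'), ('B','C'), ('C','D'), ('D','E')]
# tests    -> [('B','D')]
```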


Effects of posttransfer feedback informativeness in a transitive inference task.

Mem Cognit

October 2024

Department of Psychology, Reed College, 3203 SE Woodstock Blvd., Portland, Oregon, 97202, USA.

Transitive inference (TI), referring to one's ability to infer that A > C given that A > B and B > C, is a form of serial learning that has been tested using a variety of experimental protocols. An element of most of these protocols is the presentation of some form of visual corrective feedback to help inform naïve participants about the nature of the task. Corrective feedback is therefore a critical tool in experimental studies of TI.


Transitive inference (TI) is a cognitive task that assesses an organism's ability to infer novel relations between items based on previously acquired knowledge. TI is known for exhibiting various behavioral and neural signatures, such as the serial position effect (SPE), symbolic distance effect (SDE), and the brain's capacity to maintain and merge separate ranking models. We propose a novel framework that casts TI as a probabilistic preference learning task, using one-parameter Mallows models.
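The one-parameter Mallows model mentioned here has a standard form: the probability of an observed ordering falls off exponentially with its Kendall tau distance from a central ordering. Below is a brief sketch of that distribution; the brute-force normalization is purely illustrative, and how this paper's framework fits the model to TI data is not shown here.

```python
from itertools import permutations
from math import exp

def kendall_tau(p, q):
    """Count item pairs that rankings p and q order differently."""
    pos = {item: i for i, item in enumerate(q)}
    n = len(p)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if pos[p[i]] > pos[p[j]])

def mallows_prob(ranking, center, theta):
    """One-parameter Mallows model: P(ranking) is proportional to
    exp(-theta * d(ranking, center)), with d the Kendall tau distance.
    Larger theta concentrates probability on orderings close to the
    central ('true') order."""
    z = sum(exp(-theta * kendall_tau(p, center))
            for p in permutations(list(center)))
    return exp(-theta * kendall_tau(ranking, center)) / z

# Example: with center "ABCDE" and theta = 1, the ordering "ABDCE"
# (one adjacent swap) is exp(1) times less probable than "ABCDE".
```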

Article Synopsis
  • Advances in high-throughput sequencing and computational techniques have led to new methods for GRN inference, notably through the use of graph neural networks (GNNs), although current models often struggle to capture long-distance interactions.
  • The paper presents a new model, Hierarchical Graph Transformer with Contrastive Learning for GRN (HGTCGRN), which represents gene functions and improves GRN inference by using hierarchical structures and gene ontology information.
