AI Article Synopsis

  • Pattern classification learning tasks help understand human cognitive abilities by exploring how individuals learn to classify patterns effectively.
  • These tasks are computationally challenging due to the vast number of potential patterns and rules, requiring learners to simplify and generalize.
  • Experiments reveal diverse human performance, yet reinforcement learning-like models that mix features explain individual behavior well, highlighting the influence of prior knowledge and learners' reliance on complex features; these models accurately predict future responses and enable personalized teaching methods.

Article Abstract

Pattern classification learning tasks are commonly used to explore learning strategies in human subjects. The universal and individual traits of learning such tasks reflect our cognitive abilities and have been of interest both psychophysically and clinically. From a computational perspective, these tasks are hard, because the number of patterns and rules one could consider even in simple cases is exponentially large. Thus, when we learn to classify we must use simplifying assumptions and generalize. Studies of human behavior in probabilistic learning tasks have focused on rules in which pattern cues are independent, and also described individual behavior in terms of simple, single-cue, feature-based models. Here, we conducted psychophysical experiments in which people learned to classify binary sequences according to deterministic rules of different complexity, including high-order, multicue-dependent rules. We show that human performance on such tasks is very diverse, but that a class of reinforcement learning-like models that use a mixture of features captures individual learning behavior surprisingly well. These models reflect the important role of subjects' priors, and their reliance on high-order features even when learning a low-order rule. Further, we show that these models predict future individual answers to a high degree of accuracy. We then use these models to build personally optimized teaching sessions and boost learning.
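The mixture-of-features, reinforcement-learning-like learner the abstract describes can be sketched as follows. This is an illustrative reconstruction, not the authors' published model: the feature set (positional n-gram templates over the binary sequence, up to a chosen order), the learning rate, and the delta-rule weight update are all assumptions made here for concreteness.

```python
import itertools
import random

def make_features(order, length):
    """Enumerate positional n-gram features up to a given order.
    Each feature is a tuple of (position, required_bit) pairs."""
    feats = []
    for k in range(1, order + 1):
        for positions in itertools.combinations(range(length), k):
            for bits in itertools.product([0, 1], repeat=k):
                feats.append(tuple(zip(positions, bits)))
    return feats

def feature_active(feat, seq):
    """A feature fires when every required bit matches the sequence."""
    return all(seq[pos] == bit for pos, bit in feat)

class FeatureMixtureLearner:
    """RL-like learner: each feature votes for a label with a weight
    that is nudged toward the correct answer after feedback."""
    def __init__(self, features, lr=0.1):
        self.features = features
        self.weights = {f: 0.0 for f in features}  # >0 favors label 1
        self.lr = lr

    def predict(self, seq):
        score = sum(self.weights[f] for f in self.features
                    if feature_active(f, seq))
        return 1 if score >= 0 else 0

    def update(self, seq, label):
        """Delta-rule update applied only to features active on this trial."""
        target = 1.0 if label == 1 else -1.0
        for f in self.features:
            if feature_active(f, seq):
                self.weights[f] += self.lr * (target - self.weights[f])
```

Because the learner carries weights on high-order features as well as single cues, it can (like the subjects in the study) lean on multi-cue features even when the true rule is low-order; the fitted weights then serve as a per-subject description of which features drive their answers.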

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3545760
DOI: http://dx.doi.org/10.1073/pnas.1211606110

Publication Analysis

Top Keywords

learning tasks (12)
learning (9)
classification learning (8)
individual learning (8)
models (6)
individual (5)
tasks (5)
high-order feature-based (4)
feature-based mixture (4)
mixture models (4)

Similar Publications

A real-time approach for surgical activity recognition and prediction based on transformer models in robot-assisted surgery.

Int J Comput Assist Radiol Surg

January 2025

Advanced Medical Devices Laboratory, Kyushu University, Nishi-ku, Fukuoka, 819-0382, Japan.

Purpose: This paper presents a deep learning approach to recognize and predict surgical activity in robot-assisted minimally invasive surgery (RAMIS). Our primary objective is to deploy the developed model to implement a real-time surgical risk monitoring system in RAMIS.

Methods: We propose a modified Transformer model with the architecture comprising no positional encoding, 5 fully connected layers, 1 encoder, and 3 decoders.
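As a rough illustration of what removing positional encoding implies, here is plain scaled dot-product attention, the core Transformer operation, in NumPy: without positional information, each query's output is invariant to the ordering of the key/value sequence. The paper's full architecture (1 encoder, 3 decoders, 5 fully connected layers) is not reproduced here; this sketch only demonstrates the order-invariance property.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Each query attends to all key positions; with no positional
    encoding the result depends only on content, not on order."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)
    return weights @ V, weights
```

Permuting the rows of K and V together permutes the attention weights but leaves the weighted sum, and hence the output, unchanged, which is why omitting positional encoding makes the model treat its input as an unordered set of tokens.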

Article Synopsis
  • Deep learning methods show strong potential for predicting lung cancer risk from CT scans, but there's a need for more comprehensive comparisons and validations of these models in real-world settings.
  • The study reviews 21 state-of-the-art deep learning models, analyzing their performance using CT scans from a subset of the National Lung Screening Trial, with a focus on malignant versus benign classification.
  • Results reveal that 3D deep learning models generally outperformed 2D models, with the best 3D model achieving an AUROC of 0.86 compared to 0.79 for the best 2D model, emphasizing the need to choose appropriate pretrained datasets and model types for effective lung cancer risk prediction.

Speech processing involves a complex interplay between sensory and motor systems in the brain, essential for early language development. Recent studies have extended this sensory-motor interaction to visual word processing, emphasizing the connection between reading and handwriting during literacy acquisition. Here we show how language-motor areas encode motoric and sensory features of language stimuli during auditory and visual perception, using functional magnetic resonance imaging (fMRI) combined with representational similarity analysis.

Self-interactive learning: Fusion and evolution of multi-scale histomorphology features for molecular traits prediction in computational pathology.

Med Image Anal

January 2025

Nuffield Department of Medicine, University of Oxford, Oxford, UK; Department of Engineering Science, University of Oxford, Oxford, UK; Big Data Institute, Li Ka Shing Centre for Health Information and Discovery, University of Oxford, Oxford, UK; Ludwig Institute for Cancer Research, Nuffield Department of Clinical Medicine, University of Oxford, Oxford, UK; Oxford National Institute for Health Research (NIHR) Biomedical Research Centre, Oxford, UK. Electronic address:

Predicting disease-related molecular traits from histomorphology brings great opportunities for precision medicine. Despite the rich information present in histopathological images, extracting fine-grained molecular features from standard whole slide images (WSI) is non-trivial. The task is further complicated by the lack of annotations for subtyping and contextual histomorphological features that might span multiple scales.

RAMIE: retrieval-augmented multi-task information extraction with large language models on dietary supplements.

J Am Med Inform Assoc

January 2025

Division of Computational Health Sciences, Department of Surgery, University of Minnesota, Minneapolis, MN 55455, United States.

Objective: To develop an advanced multi-task large language model (LLM) framework for extracting diverse types of information about dietary supplements (DSs) from clinical records.

Methods: We focused on 4 core DS information extraction tasks: named entity recognition (2,949 clinical sentences), relation extraction (4,892 sentences), triple extraction (2,949 sentences), and usage classification (2,460 sentences). To address these tasks, we introduced the retrieval-augmented multi-task information extraction (RAMIE) framework, which incorporates: (1) instruction fine-tuning with task-specific prompts; (2) multi-task training of LLMs to enhance storage efficiency and reduce training costs; and (3) retrieval-augmented generation, which retrieves similar examples from the training set to improve task performance.
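Step (3), retrieval-augmented generation, can be sketched as selecting the training examples most similar to the input sentence and prepending them to the prompt as demonstrations. The token-overlap (Jaccard) similarity and the prompt layout below are stand-ins chosen here for illustration; the actual RAMIE retriever and prompt templates are not specified in this excerpt.

```python
def jaccard(a, b):
    """Token-overlap similarity between two sentences (0..1)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def retrieve_examples(query, train_set, k=2):
    """Pick the k training examples most similar to the query,
    to serve as in-context demonstrations."""
    ranked = sorted(train_set,
                    key=lambda ex: jaccard(query, ex["text"]),
                    reverse=True)
    return ranked[:k]

def build_prompt(instruction, query, train_set, k=2):
    """Assemble instruction + retrieved demonstrations + query."""
    parts = [instruction]
    for ex in retrieve_examples(query, train_set, k):
        parts.append(f"Sentence: {ex['text']}\nExtraction: {ex['label']}")
    parts.append(f"Sentence: {query}\nExtraction:")
    return "\n\n".join(parts)
```

A production system would typically replace the Jaccard scorer with dense-embedding nearest-neighbor search, but the retrieve-then-prompt flow is the same.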
