We propose a computational model of perceptual categorization that fuses elements of grounded and sensorimotor theories of cognition with dynamic models of decision-making. We assume that category information consists of anticipated patterns of agent-environment interaction that can be elicited through overt or covert (simulated) eye movements, object manipulation, and similar actions. This information is first encoded when a category is acquired and later re-enacted during perceptual categorization. Categorization then unfolds as a dynamic competition between attractors that encode the sensorimotor patterns typical of each category; successful action prediction counts as "evidence" for a given category and drives the system toward the corresponding attractor. The evidence-accumulation process is guided by an active perception loop: the active exploration of objects (e.g., visual exploration) aims to elicit the expected sensorimotor patterns that count as evidence for the object's category. We present a computational model that incorporates these elements, describing action prediction, active perception, and attractor dynamics as the key components of perceptual categorization. We test the model in three simulated perceptual categorization tasks and discuss its relevance for grounded and sensorimotor theories of cognition.
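The abstract's three ingredients (action prediction, active perception, and attractor dynamics) can be made concrete in a few lines. The sketch below is a toy illustration under invented assumptions, not the paper's model: the category prototypes, noise levels, and dynamics parameters are hand-made, and the attractor network is approximated by leaky competing accumulators with lateral inhibition.

```python
# Toy sketch of attractor-based categorization driven by action-prediction
# evidence. All parameters and prototypes are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensorimotor "patterns": what each category (row) predicts
# to observe at each of four fixation locations (columns).
prototypes = np.array([[1.0, 0.2, 0.8, 0.1],
                       [0.1, 0.9, 0.2, 0.7]])

def categorize(true_category, n_steps=50, leak=0.1, inhibition=0.4,
               sense_noise=0.05, sigma=0.2):
    activation = np.full(len(prototypes), 0.5)
    for _ in range(n_steps):
        # Active perception: fixate the location where category predictions
        # disagree most, i.e., where a sample is most informative.
        location = int(np.argmax(np.abs(prototypes[0] - prototypes[1])))
        sample = prototypes[true_category, location] + rng.normal(0, sense_noise)
        # Action-prediction success counts as "evidence" for each category.
        evidence = np.exp(-(prototypes[:, location] - sample) ** 2 / (2 * sigma**2))
        # Attractor-like competition: leaky accumulation + lateral inhibition.
        activation += evidence - leak * activation \
                      - inhibition * (activation.sum() - activation)
        activation = np.clip(activation, 0.0, None)
    return int(np.argmax(activation)), activation

choice, final = categorize(true_category=0)
print(f"chosen category: {choice}, final activations: {final.round(2)}")
```

The winner-take-all behavior comes from the inhibition term: once one accumulator pulls ahead, it suppresses its competitor, which is the accumulator-style analogue of falling into an attractor basin.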
DOI: http://dx.doi.org/10.1016/j.neunet.2014.06.008
BMC Med Imaging
January 2025
Electronics and Communications, Arab Academy for Science, Heliopolis, Cairo, 2033, Egypt.
Invasive breast cancer diagnosis and treatment planning require an accurate assessment of human epidermal growth factor receptor 2 (HER2) expression levels. While immunohistochemistry (IHC) is the gold standard for HER2 evaluation, it can be resource-intensive and costly. To lower these barriers and expedite the procedure, we present an efficient deep-learning model that generates high-quality IHC-stained images directly from Hematoxylin and Eosin (H&E) stained images.
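The excerpt does not specify the network architecture; stain-to-stain translation is commonly framed as paired image-to-image translation (often GAN-based, pix2pix-style). The PyTorch sketch below only shows the basic data flow, using a tiny encoder-decoder trained with an L1 loss; the shapes are invented and random tensors stand in for paired H&E/IHC patches.

```python
# Minimal paired H&E -> IHC translation sketch (illustrative, not the
# paper's model): encoder-decoder regression with an L1 reconstruction loss.
import torch
import torch.nn as nn

class TinyTranslator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

model = TinyTranslator()
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)
l1 = nn.L1Loss()

# Random tensors stand in for a paired (H&E, IHC) training batch.
he_batch = torch.rand(4, 3, 64, 64) * 2 - 1
ihc_batch = torch.rand(4, 3, 64, 64) * 2 - 1

for step in range(5):  # a real run would iterate over a dataset loader
    fake_ihc = model(he_batch)
    loss = l1(fake_ihc, ihc_batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: L1 loss = {loss.item():.4f}")
```

A production model would add skip connections (U-Net) and typically an adversarial loss, but the train-step structure stays the same.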
Phys Eng Sci Med
January 2025
Department of Electronics and Communication Engineering, Vishnu Institute of Technology, Bhimavaram, Andhra Pradesh, 534202, India.
J Voice
January 2025
Division of Phoniatrics and Pediatric Audiology at the Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054 Erlangen, Germany.
Objectives: This study investigates the use of sustained phonations recorded during high-speed videoendoscopy (HSV) for machine-learning-based assessment of hoarseness severity (H). The performance of this approach is compared with that of conventional recordings obtained during voice therapy to evaluate the key differences and limitations of HSV-derived acoustic recordings.
Methods: A database of 617 voice recordings, each 250 ms long, was gathered during HSV examinations (HS).
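The excerpt does not detail the feature set or classifier, so the sketch below is a generic stand-in showing how such short 250 ms clips could be mapped to severity labels: synthetic signals, MFCC summary features via librosa, and a random-forest classifier, all assumptions rather than the study's pipeline.

```python
# Illustrative hoarseness-classification sketch on synthetic 250 ms clips.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

SR = 44_100
N = int(0.25 * SR)  # 250 ms clips, matching the database described above
rng = np.random.default_rng(1)

def synth_phonation(f0, jitter):
    """Toy sustained phonation: a tone with frequency drift plus noise."""
    t = np.arange(N) / SR
    drift = 1 + jitter * rng.standard_normal(N).cumsum() / N
    return np.sin(2 * np.pi * f0 * t * drift) + jitter * rng.standard_normal(N)

def features(y):
    mfcc = librosa.feature.mfcc(y=y.astype(np.float32), sr=SR, n_mfcc=13)
    return mfcc.mean(axis=1)  # one summary vector per clip

# Two toy severity classes: low vs. high perturbation ("hoarseness").
X = np.array([features(synth_phonation(180, j)) for j in [0.01] * 30 + [0.3] * 30])
y = np.array([0] * 30 + [1] * 30)

clf = RandomForestClassifier(random_state=0).fit(X[::2], y[::2])
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```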
Sci Rep
January 2025
College of Computer Sciences, Anhui University, Hefei, 230039, China.
Decoding the semantic categories of complex scenes is fundamental to numerous artificial intelligence (AI) systems. This work presents an advanced selection of multi-channel perceptual visual features for recognizing scenic images with elaborate spatial structures, centered on a deep hierarchical model that learns human gaze behavior. Using the BING objectness measure, we efficiently localize objects or their details across varying scales within scenes.
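The BING measure itself uses a filter learned from binarized normed gradients (OpenCV ships an implementation in its saliency module that requires trained model files). The numpy sketch below only illustrates the underlying idea, scoring 8x8 normed-gradient summaries of candidate windows at several scales with a hand-made boundary filter; the filter and parameters are invented, not Cheng et al.'s trained model.

```python
# Toy objectness-proposal sketch in the spirit of BING (normed gradients,
# fixed-size window summaries, linear scoring across scales).
import numpy as np

def normed_gradient(img):
    gy, gx = np.gradient(img.astype(float))
    mag = np.sqrt(gx**2 + gy**2)
    return mag / (mag.max() + 1e-8)

def objectness_proposals(img, windows=(16, 32, 64), stride=16, top_k=5):
    ng = normed_gradient(img)
    # Hand-made linear filter: strong boundary gradients with a quiet
    # interior look "object-like" (a crude proxy for a learned filter).
    w = np.ones((8, 8)); w[2:6, 2:6] = -1.0
    proposals = []
    for win in windows:  # score candidate boxes across varying scales
        b = win // 8
        for y in range(0, img.shape[0] - win + 1, stride):
            for x in range(0, img.shape[1] - win + 1, stride):
                patch = ng[y:y + win, x:x + win]
                small = patch.reshape(8, b, 8, b).mean(axis=(1, 3))  # 8x8 summary
                proposals.append(((w * small).sum(), (x, y, win)))
    return sorted(proposals, key=lambda p: p[0], reverse=True)[:top_k]

img = np.zeros((128, 128)); img[40:90, 50:100] = 1.0  # one synthetic "object"
for score, (x, y, win) in objectness_proposals(img):
    print(f"score={score:.2f} box: x={x}, y={y}, size={win}")
```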
Nat Commun
January 2025
Institute of Neuroscience, Key Laboratory of Brain Cognition and Brain-Inspired Intelligence Technology, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, 200031, China.
Humans can flexibly change rules to categorize sensory stimuli, but their performance degrades immediately after a task switch. This switch cost is believed to reflect a limitation in cognitive control, although the bottlenecks remain controversial. Here, we show that humans exhibit a brief reduction in the efficiency of using sensory inputs to form a decision after a rule change.
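One standard way to formalize the "efficiency of using sensory inputs" is the drift rate of a drift-diffusion model. The toy simulation below is not the paper's fitted model and all parameter values are invented; it only shows how a transient drop in drift rate on post-switch trials produces the classic switch-cost signature of slower, less accurate decisions.

```python
# Drift-diffusion illustration of a switch cost as a transient drift drop.
import numpy as np

rng = np.random.default_rng(2)

def ddm_trial(drift, bound=1.0, noise=1.0, dt=0.001, max_t=3.0):
    """One decision: evidence drifts to +bound (correct) or -bound (error)."""
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return x >= bound, t  # (correct?, reaction time in seconds)

# Invented drift rates: sensory evidence is used less efficiently
# on the first trial after a rule switch.
for label, drift in [("repeat trials", 1.5), ("post-switch trials", 0.6)]:
    results = [ddm_trial(drift) for _ in range(500)]
    accuracy = np.mean([c for c, _ in results])
    mean_rt = np.mean([t for _, t in results])
    print(f"{label}: accuracy={accuracy:.2f}, mean RT={mean_rt:.2f} s")
```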