Area TE is required for normal learning of visual categories based on perceptual similarity. To evaluate whether category learning changes neural activity in area TE, we trained two monkeys (both male) implanted with multielectrode arrays to categorize natural images of cats and dogs. Neural activity during a passive viewing task was compared pre- and post-training. After the category training, the accuracy of abstract category decoding improved. Single units became more category selective, the proportion of single units with category selectivity increased, and units sustained their category-specific responses for longer. Visual category learning thus appears to enhance category separability in area TE by driving changes in the stimulus selectivity of individual neurons and by recruiting more units to the active network.
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11622174 | DOI: http://dx.doi.org/10.1523/JNEUROSCI.0312-24.2024
Interdiscip Sci
December 2024
Institutes of Physical Science and Information Technology, Anhui University, Hefei, 230601, Anhui, China.
High-throughput sequencing has exponentially increased the number of available peptide sequences, necessitating computational methods that identify multi-functional therapeutic peptides (MFTP) directly from sequence. However, existing computational methods are challenged by class imbalance, particularly when learning effective sequence representations. To address this, we propose PSCFA, a prototypical supervised contrastive learning framework with feature augmentation for MFTP prediction.
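For orientation only, the sketch below shows a prototype-based supervised contrastive loss in the simplified single-label case; the multi-label peptide setting and the feature augmentation component of PSCFA are not reproduced here, so this is a generic stand-in rather than the paper's implementation.

```python
# A minimal sketch of a prototypical supervised contrastive loss, assuming
# single-label batches; embedding dimensions and batch sizes are illustrative.
import torch
import torch.nn.functional as F

def prototypical_supcon_loss(embeddings, labels, temperature=0.1):
    """embeddings: (batch, dim) sequence representations; labels: (batch,) class ids."""
    z = F.normalize(embeddings, dim=1)
    classes = labels.unique()                        # sorted class ids present in the batch
    # Class prototypes: mean embedding per class, renormalized to the unit sphere.
    protos = F.normalize(torch.stack([z[labels == c].mean(0) for c in classes]), dim=1)
    logits = z @ protos.t() / temperature            # similarity of each sample to each prototype
    targets = torch.searchsorted(classes, labels)    # index of each sample's own prototype
    return F.cross_entropy(logits, targets)

# Toy usage with random embeddings.
emb = torch.randn(32, 128, requires_grad=True)
lab = torch.randint(0, 4, (32,))
prototypical_supcon_loss(emb, lab).backward()
```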
Elife
December 2024
Department of Psychology, Stanford University, Stanford, United States.
Organizing the continuous stream of visual input into categories like places or faces is important for everyday function and social interactions. However, it is unknown when neural representations of these and other visual categories emerge. Here, we used steady-state evoked potential electroencephalography to measure cortical responses in infants at 3-4 months, 4-6 months, 6-8 months, and 12-15 months, when they viewed controlled, gray-level images of faces, limbs, corridors, characters, and cars.
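As a rough illustration of the frequency-tagging logic behind steady-state evoked potential EEG (not the study's analysis code), the sketch below estimates response amplitude at an assumed stimulation frequency from a single-channel epoch; the sampling rate, frequency, and synthetic signal are assumptions for the example.

```python
# A generic frequency-tagging sketch: amplitude of the EEG response at an
# assumed stimulation frequency, estimated from the Fourier spectrum.
import numpy as np

def ssvep_amplitude(eeg, srate=500.0, stim_freq=4.286):
    """eeg: (n_samples,) single-channel epoch; returns amplitude at stim_freq."""
    n = len(eeg)
    freqs = np.fft.rfftfreq(n, d=1.0 / srate)
    spectrum = np.abs(np.fft.rfft(eeg)) / n * 2.0     # single-sided amplitude spectrum
    return spectrum[np.argmin(np.abs(freqs - stim_freq))]

# Synthetic demo: a tagged 4.286 Hz response embedded in noise.
rng = np.random.default_rng(1)
t = np.arange(0, 10, 1 / 500.0)
eeg = 0.5 * np.sin(2 * np.pi * 4.286 * t) + rng.normal(scale=1.0, size=t.size)
print(ssvep_amplitude(eeg))
```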
Variational autoencoders (VAEs) employ Bayesian inference to interpret sensory inputs, mirroring processes that occur in primate vision across both ventral (Higgins et al., 2021) and dorsal (Vafaii et al., 2023) pathways.
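For readers unfamiliar with the framework, a minimal generic VAE is sketched below: the encoder outputs an approximate posterior over latents, a sample is drawn with the reparameterization trick, and training minimizes the negative ELBO. This is a textbook-style sketch with assumed layer sizes, not the specific models compared in the cited work.

```python
# A minimal VAE sketch; inputs are assumed to be vectors scaled to [0, 1].
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim))

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)                 # approximate posterior q(z|x)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()      # reparameterization trick
        x_hat = self.dec(z)
        recon = F.binary_cross_entropy_with_logits(x_hat, x, reduction='sum')
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return (recon + kl) / x.shape[0]                          # negative ELBO per sample

vae = TinyVAE()
loss = vae(torch.rand(8, 784))   # toy batch
loss.backward()
```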
Mater Sociomed
January 2024
Department of Preschool Education Sciences and Educational Design, University of the Aegean.
Background: Sexual education of adolescents with autism spectrum disorder (ASD) is a complex challenge, as the lack of specialized programs limits effective learning. Adolescents with ASD have difficulty understanding abstract concepts such as consent, personal boundaries and safety, which increases the risk of exploitation.
Objective: This study seeks to examine the experiences and challenges parents face in providing sexuality education to their children with ASD, highlighting the need for programs that respond to the particular needs of these adolescents.
Digit Health
December 2024
School of Computer Science, University of Birmingham, Birmingham, UK.
Objective: The study aims to present an active learning approach that automatically extracts clinical concepts from unstructured data and classifies them into explicit categories such as Problem, Treatment, and Test while preserving high precision and recall. The approach is demonstrated through experiments on the i2b2 public datasets.
Methods: Initially labeled data are acquired with a lexicon-based approach in amounts sufficient to drive an active learning process. A contextual word embedding similarity approach using BERT-base variants such as ClinicalBERT, DistilBERT, and SciBERT is then adopted to automatically classify unlabeled clinical concepts into the explicit categories.
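A minimal sketch of the embedding-similarity idea is given below; the checkpoint name and seed terms are assumptions chosen for illustration, and the paper's full active learning loop (lexicon seeding, query selection, retraining) is not reproduced.

```python
# Illustrative sketch: classify a clinical concept mention by cosine similarity
# between its contextual embedding and mean embeddings of seed terms per
# category. Model checkpoint and seed terms are assumptions, not the paper's.
import torch
from transformers import AutoTokenizer, AutoModel

MODEL = "emilyalsentzer/Bio_ClinicalBERT"   # assumed checkpoint; any BERT variant works
tok = AutoTokenizer.from_pretrained(MODEL)
bert = AutoModel.from_pretrained(MODEL).eval()

def embed(texts):
    """Mean-pooled, L2-normalized last-hidden-state embeddings for a list of strings."""
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = bert(**batch).last_hidden_state
    mask = batch["attention_mask"].unsqueeze(-1)
    return torch.nn.functional.normalize((out * mask).sum(1) / mask.sum(1), dim=1)

# Hypothetical seed terms per i2b2-style category.
seeds = {"Problem": ["chest pain", "pneumonia"],
         "Treatment": ["aspirin", "chemotherapy"],
         "Test": ["chest x-ray", "blood glucose"]}
anchors = {c: embed(terms).mean(0, keepdim=True) for c, terms in seeds.items()}

def classify(concept):
    e = embed([concept])
    sims = {c: float(e @ a.t()) for c, a in anchors.items()}
    return max(sims, key=sims.get)

print(classify("metformin"))   # expected: Treatment
```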