Characterization of deep neural network features by decodability from human brain activity.

Sci Data

ATR Computational Neuroscience Laboratories, 2-2-2 Hikaridai, Seika, Soraku, Kyoto 619-0288, Japan.

Published: February 2019

Achievements of near human-level performance in object recognition by deep neural networks (DNNs) have triggered a flood of comparative studies between the brain and DNNs. Using a DNN as a proxy for hierarchical visual representations, our recent study found that human brain activity patterns measured by functional magnetic resonance imaging (fMRI) can be decoded (translated) into DNN feature values given the same inputs. However, not all DNN features are equally decoded, indicating a gap between the DNN and human vision. Here, we present a dataset derived from DNN feature decoding analyses, which includes fMRI signals of five human subjects during image viewing, decoded feature values of DNNs (AlexNet and VGG19), and decoding accuracies of individual DNN features with their rankings. The decoding accuracies of individual features were highly correlated between subjects, suggesting systematic differences between the brain and DNNs. We hope the present dataset will contribute to revealing the gap between the brain and DNNs and provide an opportunity to make use of the decoded features for further applications.
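
The dataset itself contains the decoded feature values and per-feature decoding accuracies; as an illustration of the kind of analysis described above, the following is a minimal sketch, not the authors' code, of decoding DNN feature values from brain activity and ranking features by decoding accuracy. It uses synthetic stand-in data and substitutes plain ridge regression for the sparse linear regression decoder used in the original study; all array names and sizes are hypothetical.

```python
# Hypothetical sketch of per-feature DNN decoding accuracy; not the published pipeline.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_test, n_voxels, n_units = 1200, 50, 500, 1000

# Synthetic stand-ins for preprocessed fMRI patterns (X) and true DNN unit activations (Y).
X_train = rng.standard_normal((n_train, n_voxels))
X_test = rng.standard_normal((n_test, n_voxels))
W_true = rng.standard_normal((n_voxels, n_units))
Y_train = X_train @ W_true + rng.standard_normal((n_train, n_units))
Y_test = X_test @ W_true + rng.standard_normal((n_test, n_units))

# One multi-output linear decoder: brain activity -> DNN feature values.
decoder = Ridge(alpha=100.0).fit(X_train, Y_train)
Y_pred = decoder.predict(X_test)

# Decoding accuracy of each DNN unit: Pearson correlation between decoded
# and true feature values across test images.
Yp = (Y_pred - Y_pred.mean(0)) / Y_pred.std(0)
Yt = (Y_test - Y_test.mean(0)) / Y_test.std(0)
unit_accuracy = (Yp * Yt).mean(0)

# Rank units by decodability, analogous to the per-feature rankings in the dataset.
ranking = np.argsort(unit_accuracy)[::-1]
print(unit_accuracy[ranking[:5]])
```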


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6371890
DOI: http://dx.doi.org/10.1038/sdata.2019.12

Publication Analysis

Top Keywords

brain dnns: 12
deep neural: 8
human brain: 8
brain activity: 8
dnn feature: 8
feature values: 8
dnn features: 8
decoding accuracies: 8
accuracies individual: 8
dnn: 6

Similar Publications

A computational deep learning investigation of animacy perception in the human brain.

Commun Biol

December 2024

Brain and Cognition, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium.

The functional organization of the human object vision pathway distinguishes between animate and inanimate objects. To understand animacy perception, we explore the case of zoomorphic objects resembling animals. While humans readily perceive these objects as animal-like, this "Animal bias" reveals a striking discrepancy between the human brain and deep neural networks (DNNs).


Intrinsic plasticity coding improved spiking actor network for reinforcement learning.

Neural Netw

December 2024

School of Artificial Intelligence, Anhui University, Hefei, 230601, Anhui, China; Engineering Research Center of Autonomous Unmanned System Technology, Ministry of Education, Hefei, 230601, Anhui, China; Anhui Provincial Engineering Research Center for Unmanned Systems and Intelligent Technology, Hefei, 230601, Anhui, China; School of Automation, Southeast University, Nanjing, 211189, Jiangsu, China.

Deep reinforcement learning (DRL) exploits the powerful representational capabilities of deep neural networks (DNNs) and has achieved significant success. However, compared to DNNs, spiking neural networks (SNNs), which operate on binary signals, more closely resemble the biological characteristics of efficient learning observed in the brain. In SNNs, spiking neurons exhibit complex dynamic characteristics and learn based on principles of biological plasticity.
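
As a point of reference for the binary-signal description above, here is a minimal leaky integrate-and-fire (LIF) neuron sketch; it is illustrative only, not the intrinsic-plasticity actor network described in the cited paper, and all parameter values are hypothetical.

```python
# Minimal LIF neuron sketch: continuous input current in, binary spike train out.
import numpy as np

def lif_simulate(input_current, tau=20.0, v_threshold=1.0, v_reset=0.0, dt=1.0):
    """Simulate one LIF neuron; returns a binary (0/1) spike train."""
    v = 0.0
    spikes = np.zeros_like(input_current)
    for t, i_t in enumerate(input_current):
        # Leaky integration of the membrane potential.
        v += dt / tau * (-v + i_t)
        if v >= v_threshold:      # threshold crossing emits a spike
            spikes[t] = 1.0
            v = v_reset           # hard reset after the spike
    return spikes

rng = np.random.default_rng(0)
spike_train = lif_simulate(rng.uniform(0.0, 2.0, size=200))
print("firing rate:", spike_train.mean())
```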


The ability to coactivate (or "superpose") multiple conceptual representations is a fundamental function that we constantly rely upon; this is crucial in complex cognitive tasks requiring multi-item working memory, such as mental arithmetic, abstract reasoning, and language comprehension. As such, an artificial system aspiring to implement any of these aspects of general intelligence should be able to support this operation. I argue here that standard, feed-forward deep neural networks (DNNs) are unable to implement this function, whereas an alternative, fully brain-constrained class of neural architectures spontaneously exhibits it.


The use of MRI analysis for brain tumor detection (BTD) and tumor type detection has considerable importance within the domain of machine vision. Numerous methodologies have been proposed to address this issue, and significant progress has been achieved in this domain via deep learning (DL) approaches. While the majority of the proposed approaches using artificial neural networks (ANNs) and deep neural networks (DNNs) demonstrate satisfactory performance in BTD, none of these studies can ensure the optimality of the employed learning model structure.


TransferGWAS of T1-weighted brain MRI data from UK Biobank.

PLoS Genet

December 2024

Digital Health Machine Learning, Hasso Plattner Institute for Digital Engineering, University of Potsdam, Germany.

Genome-wide association studies (GWAS) traditionally analyze single traits, e.g., disease diagnoses or biomarkers.

