Objective: Visual perception decoding plays an important role in understanding our visual system. Recent functional magnetic resonance imaging (fMRI) studies have made great advances in predicting the visual content of a single stimulus from the evoked response. In this work, we propose a novel framework that extends previous work by simultaneously decoding the temporal and category information of visual stimuli from fMRI activity.
Approach: 3 T fMRI data were acquired from five volunteers while they viewed five categories of natural images presented at random intervals. For each subject, we trained two classification-based decoding modules to identify the occurrence time and the semantic category of the visual stimuli. In each module we adopted a recurrent neural network (RNN), which has proven highly effective at learning nonlinear representations from sequential data, to analyze the temporal dynamics of the fMRI activity patterns. Finally, we integrated the two modules into a complete framework.
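The abstract does not include code; the following is a minimal sketch of what one such classification-based decoding module might look like, assuming a PyTorch-style LSTM applied to a window of fMRI volumes. The class name, layer sizes, class counts, and the choice of an LSTM rather than another RNN variant are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of one classification-based decoding module (assumed
# PyTorch/LSTM formulation; names and sizes are illustrative, not the
# authors' implementation).
import torch
import torch.nn as nn

class FMRIDecoder(nn.Module):
    """Recurrent classifier over a short sequence of fMRI activity patterns."""
    def __init__(self, n_voxels: int, n_classes: int, hidden_size: int = 128):
        super().__init__()
        # The RNN consumes one ROI activity pattern per time point (TR).
        self.rnn = nn.LSTM(input_size=n_voxels, hidden_size=hidden_size,
                           batch_first=True)
        # A linear readout maps the final hidden state to class scores.
        self.readout = nn.Linear(hidden_size, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_points, n_voxels)
        _, (h_last, _) = self.rnn(x)
        return self.readout(h_last[-1])          # (batch, n_classes)

# Two such modules, trained separately and then combined, would mirror the
# described framework: one identifies when a stimulus occurred, the other
# which of the five categories it belonged to (class counts are assumed).
timing_module = FMRIDecoder(n_voxels=2000, n_classes=2)    # stimulus present vs. absent
category_module = FMRIDecoder(n_voxels=2000, n_classes=5)  # five natural-image categories
```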
Main Results: The proposed framework achieved promising decoding performance: the average decoding accuracy across the five subjects was over 19 times the chance level. Moreover, we compared the decoding performance of the early visual cortex (eVC) and the high-level visual cortex (hVC). The comparison indicated that both eVC and hVC participated in processing the visual stimuli, but that their semantic information was represented mainly in hVC.
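For reference, how an accuracy-to-chance ratio translates into raw accuracy depends on how many joint (occurrence time, category) labels the framework distinguishes. The sketch below only illustrates that arithmetic; the number of time bins, and therefore the chance level and implied accuracy, are assumptions rather than values from the paper.

```python
# Illustrative arithmetic only: the number of time bins is an assumption,
# so the chance level and implied accuracy below are not the paper's values.
n_categories = 5            # five image categories (stated in the abstract)
n_time_bins = 20            # assumed number of candidate occurrence times
chance_level = 1.0 / (n_categories * n_time_bins)   # 0.01 under these assumptions
accuracy_to_chance_ratio = 19                        # "over 19 times the chance level"
implied_accuracy = accuracy_to_chance_ratio * chance_level
print(f"chance level: {chance_level:.3f}, implied accuracy: {implied_accuracy:.2f}")
```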
Significance: The proposed framework advances the decoding of visual experiences and facilitates a better understanding of our visual functions.
DOI: http://dx.doi.org/10.1088/1741-2552/abb691