Improving the computability of cerebral recordings, and of the interfaces built between human and non-human brains, is an active line of research expected to accelerate in the current era. One effective contribution toward that end is improving the accuracy with which intricate phenomena inside the human brain can be discerned. Here, in two experimental settings, we attempt to distinguish the cerebral perceptions formed and the affective states elicited while subjects watch media samples with distinct audio-visual and emotional content, using electroencephalography (EEG) sessions from two established datasets, DEAP and SEED. We introduce AltSpec(E3), the first instance of the CollectiveNet family of intelligent computational architectures, which applies collective and concurrent multi-spec analysis to exploit complex patterns in complex data structures. This processing technique uses a full array of diversification protocols whose components enable surgical levels of optimization while retaining a holistic analysis of patterns. The data structures designed here contain multi-electrode neuroinformatic and neurocognitive features capturing emotional reactions and attention patterns. These spatially and temporally organized 2D/3D constructs of domain-augmented data are then AI-processed, and the outputs are defragmented into one definitive judgement. The media-perception tracing is arguably the first of its kind, at least on the aforementioned datasets. Backed by this multi-directional approach, in subject-independent configurations for perception tracing over five media classes, mean accuracies of 81.00% and 68.93% were obtained on DEAP and SEED, respectively. We also classified emotions with accuracies of 61.59% and 66.21% in cross-dataset validation, and 81.47% and 88.12% in cross-subject validation, when trained on DEAP and SEED, respectively.
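The abstract describes arranging multi-electrode EEG features into spatial 2D/3D constructs and then fusing the outputs of parallel analyses into one judgement. The sketch below is only an illustration of those two generic ideas, not the authors' AltSpec(E3) architecture: the electrode grid, the montage, and the majority-vote fusion rule are all assumptions introduced here for clarity.

```python
import numpy as np

# Hypothetical 2D scalp layout for a few 10-20 system electrodes.
# The paper's actual montage and grid resolution are not given in the abstract.
GRID = {
    "Fp1": (0, 1), "Fp2": (0, 3),
    "F3":  (1, 1), "F4":  (1, 3),
    "C3":  (2, 1), "Cz":  (2, 2), "C4": (2, 3),
    "P3":  (3, 1), "P4":  (3, 3),
    "O1":  (4, 1), "O2":  (4, 3),
}

def to_spatial_map(band_power):
    """Place per-electrode band-power values onto a 5x5 spatial grid,
    turning a flat feature vector into an image-like 2D construct."""
    grid = np.zeros((5, 5))
    for ch, (r, c) in GRID.items():
        grid[r, c] = band_power[ch]
    return grid

def fuse_votes(predictions):
    """Toy output fusion: majority vote over several classifiers'
    predicted class labels (one possible 'defragmentation' rule)."""
    vals, counts = np.unique(np.asarray(predictions), return_counts=True)
    return int(vals[np.argmax(counts)])

# Example: uniform dummy band powers and five classifier votes.
power = {ch: 1.0 for ch in GRID}
spatial = to_spatial_map(power)
label = fuse_votes([2, 2, 1, 2, 0])
```

In practice such 2D maps would be stacked across frequency bands and time windows to form the 3D constructs the abstract mentions, and the fusion rule could equally be weighted or learned rather than a plain vote.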
DOI: http://dx.doi.org/10.1016/j.neunet.2023.08.031