
Transformers and cortical waves: encoders for pulling in context across time.

Trends Neurosci

Computational Neurobiology Laboratory, Salk Institute for Biological Studies, San Diego, CA, USA; Department of Neurobiology, University of California at San Diego, San Diego, CA, USA.

Published: October 2024

AI Article Synopsis

  • Transformer networks, like ChatGPT, use a computational mechanism that converts input sequences into long 'encoding vectors' to learn relationships across text, enhancing performance.
  • The 'self-attention' mechanism helps identify connections between words in a sequence, allowing the model to understand context over longer distances.
  • The authors propose that similar encoding principles may exist in the brain, where neural activity patterns could extract temporal context from sensory inputs, paralleling how transformers operate.

Article Abstract

The capabilities of transformer networks such as ChatGPT and other large language models (LLMs) have captured the world's attention. The crucial computational mechanism underlying their performance relies on transforming a complete input sequence - for example, all the words in a sentence - into a long 'encoding vector' that allows transformers to learn long-range temporal dependencies in naturalistic sequences. Specifically, 'self-attention' applied to this encoding vector enhances temporal context in transformers by computing associations between pairs of words in the input sequence. We suggest that waves of neural activity traveling across single cortical areas, or multiple regions on the whole-brain scale, could implement a similar encoding principle. By encapsulating recent input history into a single spatial pattern at each moment in time, cortical waves may enable a temporal context to be extracted from sequences of sensory inputs, the same computational principle as that used in transformers.
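As a rough illustration of the self-attention mechanism described in the abstract, the sketch below implements plain scaled dot-product self-attention over a toy sequence of embeddings in Python/NumPy. The toy dimensions, random weights, and function names are assumptions made for illustration only; this is not the authors' model of cortical waves.

# Minimal sketch of scaled dot-product self-attention over a token sequence.
# Illustrative only: toy sizes and random weights are assumed, not from the article.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) array of token embeddings."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv           # project tokens to queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # pairwise associations between positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the sequence
    return weights @ V                         # each output mixes context from all positions

rng = np.random.default_rng(0)
seq_len, d_model = 6, 8                        # e.g. a six-"word" sequence, 8-dim embeddings
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)                               # (6, 8): one context-mixed vector per position

Each row of the output summarizes the whole input sequence from the viewpoint of one position, which is the long-range "encoding" property the abstract draws the analogy to.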

Source (DOI): http://dx.doi.org/10.1016/j.tins.2024.08.006

Publication Analysis

Top Keywords (keyword: frequency)

cortical waves: 8
input sequence: 8
temporal context: 8
transformers: 4
transformers cortical: 4
waves encoders: 4
encoders pulling: 4
pulling context: 4
context time: 4
time capabilities: 4
