AI Article Synopsis

  • Transformer models such as GPT can predict how the human brain responds to language, as measured with functional MRI across diverse sentences.
  • The study shows that a GPT-based encoding model not only predicts brain responses but can also identify new sentences that influence activity in the human language network.
  • Surprisal and well-formedness of a sentence are key determinants of response strength in the language areas, showing that these models can both mimic and affect human language processing; a sentence's surprisal can be estimated directly from a language model, as sketched below.
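A minimal sketch of how sentence surprisal can be computed from a causal language model, assuming the Hugging Face transformers library and GPT-2 as a stand-in; the specific model and estimator here are illustrative assumptions, not necessarily the study's exact choices:

    import math
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # GPT-2 as an illustrative causal LM (not necessarily the paper's model)
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def mean_surprisal(sentence: str) -> float:
        """Mean per-token surprisal in bits under the language model."""
        ids = tokenizer(sentence, return_tensors="pt").input_ids
        with torch.no_grad():
            # With labels=input_ids the model returns the mean
            # next-token cross-entropy (in nats) over the sentence.
            nll = model(ids, labels=ids).loss.item()
        return nll / math.log(2)  # convert nats to bits

    # A well-formed sentence should score lower than a scrambled one.
    print(mean_surprisal("The cat sat on the mat."))
    print(mean_surprisal("Mat the on sat cat the."))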

Article Abstract

Transformer models such as GPT generate human-like language and are predictive of human brain responses to language. Here, using functional-MRI-measured brain responses to 1,000 diverse sentences, we first show that a GPT-based encoding model can predict the magnitude of the brain response associated with each sentence. We then use the model to identify new sentences that are predicted to drive or suppress responses in the human language network. We show that these model-selected novel sentences indeed strongly drive and suppress the activity of human language areas in new individuals. A systematic analysis of the model-selected sentences reveals that surprisal and well-formedness of linguistic input are key determinants of response strength in the language network. These results establish the ability of neural network models to not only mimic human language but also non-invasively control neural activity in higher-level cortical areas, such as the language network.
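To make the core method concrete, here is a minimal sketch of a GPT-based encoding model of the kind the abstract describes: sentence embeddings are mapped to measured response magnitudes with a regularized linear regression, and the fitted model then ranks unseen candidate sentences. The embedding choice (mean-pooled GPT-2 hidden states), the Ridge regression, and the toy data are illustrative assumptions, not the study's exact pipeline:

    import numpy as np
    import torch
    from sklearn.linear_model import Ridge
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    lm = AutoModel.from_pretrained("gpt2")
    lm.eval()

    def embed(sentence: str) -> np.ndarray:
        """Mean-pooled last-layer hidden states as a sentence embedding."""
        inputs = tokenizer(sentence, return_tensors="pt")
        with torch.no_grad():
            hidden = lm(**inputs).last_hidden_state  # shape (1, n_tokens, 768)
        return hidden.mean(dim=1).squeeze(0).numpy()

    # Toy stand-ins: in the study, y holds fMRI response magnitudes of the
    # language network for 1,000 recorded sentences (values here are made up).
    train_sentences = [
        "The dog chased the ball across the yard.",
        "Quantum borrowed umbrellas seldom apologize loudly.",
    ]
    y = np.array([0.3, 1.2])

    X = np.stack([embed(s) for s in train_sentences])
    encoder = Ridge(alpha=1.0).fit(X, y)

    # Rank unseen candidates: the highest predictions are candidate "drive"
    # sentences, the lowest are candidate "suppress" sentences to test in
    # new participants.
    candidates = ["People mostly agree about this.", "Gravel the of within sang."]
    preds = encoder.predict(np.stack([embed(s) for s in candidates]))
    order = np.argsort(preds)
    print("suppress candidate:", candidates[order[0]])
    print("drive candidate:", candidates[order[-1]])

In the experiments reported in the abstract, the top- and bottom-ranked sentences were then presented to new individuals to confirm that they indeed drive or suppress language-network activity.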


Source
http://dx.doi.org/10.1038/s41562-023-01783-7

Publication Analysis

Top Keywords (term, frequency)

human language (16)
language network (16)
language (9)
brain responses (8)
drive suppress (8)
human (5)
network (5)
driving suppressing (4)
suppressing human (4)
network large (4)

Similar Publications

Background: TheKeep.Ca was built to facilitate engagement with those experiencing cancer in Manitoba, Canada. Constructed between 2020 and 2024 with a group of patient advisors, the website includes information on engagement activities such as research participation, the patient advisor role, and how those experiencing cancer can access these activities in Manitoba.


Exploring the Credibility of Large Language Models for Mental Health Support: Protocol for a Scoping Review.

JMIR Res Protoc

January 2025

Data and Web Science Group, School of Business Informatics and Mathematics, University of Mannheim, Mannheim, Germany.

Background: The rapid evolution of large language models (LLMs), such as Bidirectional Encoder Representations from Transformers (BERT; Google) and GPT (OpenAI), has introduced significant advancements in natural language processing. These models are increasingly integrated into various applications, including mental health support. However, the credibility of LLMs in providing reliable and explainable mental health information and support remains underexplored.


Purpose: To explore the perceived utility and effect of simplified radiology reports on oncology patients' knowledge, and the feasibility of using large language models (LLMs) to generate such reports.

Materials And Methods: This study was approved by the Institute Ethics Committee. In phase I, five state-of-the-art LLMs (Generative Pre-Trained Transformer-4o [GPT-4o], Google Gemini, Claude Opus, Llama-3.


It is well known that the use of vocabulary in phenotype treatments is often inconsistent. An earlier survey of biologists who create or use phenotypic characters revealed that this lack of standardization leads to ambiguities, frustrating both the consumers and producers of phenotypic data. Such ambiguities are challenging for biologists, and even more so for artificial intelligence, to resolve.

