Reading comprehension, a cognitive ability fundamental to knowledge acquisition, is a complex skill, and a notable number of learners lack proficiency in it. This study introduces a novel brain-computer interface (BCI) task: predicting the relevance of the words or tokens a person reads to target inference words. We use state-of-the-art large language models (LLMs) to guide a new reading-embedding representation during training. This representation, which integrates EEG and eye-tracking biomarkers through an attention-based transformer encoder, achieved a mean 5-fold cross-validation accuracy of 68.7% across nine subjects on a balanced sample, with the highest single-subject accuracy reaching 71.2%. This study pioneers the integration of LLMs, EEG, and eye-tracking for predicting human reading comprehension at the word level. We also fine-tune the pre-trained Bidirectional Encoder Representations from Transformers (BERT) model for word embedding without any information about the reading tasks; despite this absence of task-specific detail, the model attains an accuracy of 92.7%, validating our LLM-based findings. This work is a preliminary step toward developing tools that assist reading. The code and data are available on GitHub.
DOI: http://dx.doi.org/10.1109/EMBC53108.2024.10781627
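As a rough illustration of the pipeline the abstract describes, the sketch below fuses frozen BERT word embeddings with per-word EEG and eye-tracking feature vectors through a small attention-based transformer encoder and emits per-token relevance logits. All dimensions, names, and the fusion layout are assumptions for illustration, not the authors' implementation; PyTorch and Hugging Face transformers are assumed available.

```python
# Hypothetical sketch: LLM-guided word-relevance classifier fusing EEG and
# eye-tracking features. Feature dimensions and the fusion layout are assumed.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class ReadingRelevanceModel(nn.Module):
    def __init__(self, eeg_dim=64, gaze_dim=8, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        # A frozen BERT supplies the 768-d word embeddings that guide training.
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        for p in self.bert.parameters():
            p.requires_grad = False
        # Project the concatenated modalities into a shared space, then fuse
        # them with a small attention-based transformer encoder.
        self.proj = nn.Linear(768 + eeg_dim + gaze_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 2)  # relevant vs. not relevant

    def forward(self, input_ids, attention_mask, eeg, gaze):
        # eeg: (batch, seq, eeg_dim); gaze: (batch, seq, gaze_dim)
        text = self.bert(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state
        fused = self.proj(torch.cat([text, eeg, gaze], dim=-1))
        fused = self.encoder(fused, src_key_padding_mask=~attention_mask.bool())
        return self.head(fused)  # per-token relevance logits

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["the cat sat on the mat"], return_tensors="pt")
seq_len = batch["input_ids"].shape[1]
model = ReadingRelevanceModel()
logits = model(batch["input_ids"], batch["attention_mask"],
               torch.randn(1, seq_len, 64), torch.randn(1, seq_len, 8))
print(logits.shape)  # (1, seq_len, 2)
```

In a real setup the per-word EEG and eye-tracking vectors would come from epochs aligned to each fixated word; here random tensors stand in for them.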
Cogn Neurodyn
December 2025
Department of Electronic and Information Engineering, Tokyo University of Agriculture and Technology, Koganei-shi, Tokyo, 184-8588 Japan.
Face masks became part of everyday life during the SARS-CoV-2 pandemic. Previous studies have shown that face cognition relies on holistic face processing and that the absence of face features can lower recognition ability. This contrasts with experience during the pandemic, when people could still recognize faces correctly even though masks covered part of the face.
eNeuro
March 2025
Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, U.S.A.
Anterior-posterior interactions in the alpha band (8-12 Hz) have been implicated in a variety of functions including perception, attention, and working memory. The underlying neural communication can be flexibly controlled by adjusting phase relations when activities across anterior-posterior regions oscillate at a matched frequency. We thus investigated how alpha oscillation frequencies spontaneously converged along anterior-posterior regions by tracking oscillatory EEG activity while participants rested.
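As a hedged illustration of one way such frequency tracking might be done (not the study's actual analysis), the sketch below estimates each channel's peak alpha frequency from a Welch power spectrum and compares an "anterior" and a "posterior" channel. The synthetic signals, the sampling rate, and the 8-12 Hz band limits are all assumptions.

```python
# Minimal sketch: per-channel peak alpha frequency from resting EEG via Welch
# PSD. The synthetic data and band limits are assumptions, not the study's code.
import numpy as np
from scipy.signal import welch

fs = 250                                    # sampling rate (Hz), assumed
rng = np.random.default_rng(0)
t = np.arange(fs * 60) / fs                 # one minute of "resting" data
# Synthetic channels: anterior ~9.5 Hz, posterior ~10.5 Hz, plus noise.
anterior = np.sin(2 * np.pi * 9.5 * t) + rng.standard_normal(t.size)
posterior = np.sin(2 * np.pi * 10.5 * t) + rng.standard_normal(t.size)

def peak_alpha(x, fs, band=(8.0, 12.0)):
    """Frequency of the largest Welch PSD peak inside the alpha band."""
    f, pxx = welch(x, fs=fs, nperseg=4 * fs)   # 4 s windows, 0.25 Hz bins
    sel = (f >= band[0]) & (f <= band[1])
    return f[sel][np.argmax(pxx[sel])]

print(f"anterior peak:  {peak_alpha(anterior, fs):.2f} Hz")
print(f"posterior peak: {peak_alpha(posterior, fs):.2f} Hz")
```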
Sci Rep
March 2025
Department of Automotive Technologies, Budapest University of Technology and Economics, Budapest, Hungary.
While fully autonomous vehicles are expected to radically change our daily lives, they are not yet available in most parts of the world, so we have only sporadic results on passenger reactions. Furthermore, we have very limited insight into how passengers react to an unexpected event during the ride. Previous physiological research has shown that passengers experience lower anxiety in a human-driven condition than in a self-driving condition.
Psych J
March 2025
Department of Psychology, University of Bonn, Bonn, Germany.
This EEG and eye-tracking study investigated affective influences on cognitive preparation using a precued pro-/antisaccade task with emotional faces as cues. Negative information interfered with preparatory processes under high, but not low, executive-function load.
Annu Int Conf IEEE Eng Med Biol Soc
July 2024
Effective emotion recognition is vital for human interaction and has an impact on several fields, such as psychology, the social sciences, human-computer interaction, and emotional artificial intelligence. This study centers on the contribution of a novel Myanmar emotion dataset intended to advance emotion recognition technology in diverse cultural contexts. The dataset is derived from a carefully designed emotion-elicitation paradigm using 15 video clips per session for three emotion classes (positive, neutral, and negative), with five clips per emotion.
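For concreteness, the toy sketch below builds the session structure the paradigm implies: five clips for each of the three emotion classes, shuffled into a 15-clip presentation order. The clip names and the shuffling policy are invented for illustration.

```python
# Toy sketch of a 15-clip elicitation session: 3 emotion classes x 5 clips.
# Clip names and the randomization policy are assumptions.
import random

emotions = ["positive", "neutral", "negative"]
clips = [(e, f"{e}_clip_{i}") for e in emotions for i in range(1, 6)]

random.seed(42)
random.shuffle(clips)           # randomize presentation order within a session
for emotion, clip in clips:
    print(f"{clip:>18s}  ->  {emotion}")
```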