Thinking out loud, an open-access EEG-based BCI dataset for inner speech recognition.

Sci Data

Instituto de Matemática Aplicada del Litoral, IMAL-UNL/CONICET, Santa Fe, Argentina.

Published: February 2022

Surface electroencephalography is a standard and noninvasive way to measure electrical brain activity. Recent advances in artificial intelligence have led to significant improvements in the automatic detection of brain patterns, allowing increasingly fast, reliable and accessible Brain-Computer Interfaces. Different paradigms have been used to enable human-machine interaction, and the last few years have brought a marked increase in interest in interpreting and characterizing the "inner voice" phenomenon. This paradigm, called inner speech, raises the possibility of executing an order just by thinking about it, allowing a "natural" way of controlling external devices. Unfortunately, the lack of publicly available electroencephalography datasets restricts the development of new techniques for inner speech recognition. A ten-participant dataset acquired under this and two other related paradigms, recorded with a 136-channel acquisition system, is presented. The main purpose of this work is to provide the scientific community with an open-access, multiclass electroencephalography database of inner speech commands that could be used to better understand the related brain mechanisms.
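
A dataset of this kind is typically explored with standard EEG tooling. As a purely illustrative sketch (the file name, file format, filter band and epoch window below are assumptions for demonstration, not details taken from this page), loading one recording and segmenting it into per-trial epochs with MNE-Python could look roughly like this:

    # Illustrative sketch only: the file name and preprocessing parameters
    # are hypothetical, not taken from the dataset description above.
    import mne

    # Load a single participant's recording (BrainVision format assumed here;
    # other readers such as mne.io.read_raw_edf exist for other formats).
    raw = mne.io.read_raw_brainvision("sub-01_task-innerspeech_eeg.vhdr",
                                      preload=True)

    # Keep EEG channels and apply a broad band-pass filter.
    raw.pick_types(eeg=True)
    raw.filter(l_freq=0.5, h_freq=100.0)

    # Turn the cue annotations into events; event_id maps each annotation
    # label (e.g., an inner speech command) to an integer code.
    events, event_id = mne.events_from_annotations(raw)

    # Segment into per-trial epochs around each cue (window is illustrative).
    epochs = mne.Epochs(raw, events, event_id=event_id,
                        tmin=-0.5, tmax=3.0, baseline=(None, 0), preload=True)
    print(epochs)

From the resulting epochs, per-class features can be extracted for classifier training, which is the typical downstream use of a multiclass command dataset such as this one.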


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8844234
DOI: http://dx.doi.org/10.1038/s41597-022-01147-2

Publication Analysis

Top Keywords

inner speech: 16
speech recognition: 8
thinking loud: 4
loud open-access: 4
open-access eeg-based: 4
eeg-based bci: 4
bci dataset: 4
inner: 4
dataset inner: 4
speech: 4

Similar Publications

Inner speech refers to the silent production of language in one's mind. As a purely mental action without obvious physical manifestations, inner speech has been notoriously difficult to quantify. Inner speech is thought to be closely related to overt speech.


As Artificial Intelligence and Robotics evolve, the ethical implications of autonomous systems are becoming increasingly paramount. This article explores the role of a robot's inner speech in enhancing human phronesis - the capacity for making ethical and contextually appropriate decisions. Phronesis is a complex human trait based on experience, personality, and values, and is crucial for decisions affecting others' well-being.


Background: The intraoperative measurements are essential steps in cochlear implant (CI) surgery for confirming correct electrode placement.

Objectives: To examine the intraoperative impedance and electrically evoked action potential (ECAP) measurement results of cochlear implant (CI) users with normal cochlear anatomy (NCA) and to compare them with CI users with inner ear malformations (IEM).

Material And Methods: This retrospective study included intraoperative data of 300 ears from 258 individuals using Medel and Cochlear (Nucleus) CI devices.


Thanks to affordable 3D printers, creating complex designs like anatomically accurate dummy heads is now accessible. This study introduces dummy heads with 3D-printed skulls and silicone skins to explore crosstalk cancellation in bone conduction (BC). Crosstalk occurs when BC sounds from a transducer on one side of the head reach the cochlea on the opposite side.


Background: Postoperative patients with oral cancer are deeply distressed about their body image. However, their true inner feelings and the factors influencing body image remain unclear.

Aims: This study aims to investigate the experience of body image disturbance in patients 3 months after oral cancer surgery and analyze the influencing factors.

