The behavioural nature of pure-tone audiometry (PTA) limits who can participate in the test, and therefore who can access accurate hearing threshold measurements. Event-related potentials (ERPs) derived from brain signals have shown limited utility in adult subjects, and a neural response that can consistently be identified in response to a pure-tone auditory stimulus has yet to be found. The challenge is worsened by the nature of PTA, in which stimulus amplitude decreases towards a patient's lower threshold of hearing. We investigate whether EEGNet, a compact convolutional neural network, could help in this domain. We trained EEGNet on a dataset collected while patients underwent a test designed to mimic a pure-tone audiogram, then assessed its performance on the detection task. For comparison, we also trained support vector machines (SVMs) and common spatial patterns with linear discriminant analysis (CSP+LDA) on the same task, with the same training paradigms. The results show that EEGNet detects hearing events with 81.5% accuracy on unseen participants, outperforming SVMs by just over 5%. Although EEGNet outperformed SVMs and CSP+LDA, the improvement was not always statistically significant. Further analysis of EEGNet predictions revealed that, with sufficient test repetition, EEGNet has the potential to ascertain hearing thresholds accurately. These results point towards a brain-signal-based hearing test for those with physical or mental disabilities that limit their participation in a PTA. While this research is promising, future work is needed to address the complexity of the test setup, the duration of testing, and to further improve accuracy.
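The CSP+LDA baseline mentioned in the abstract can be sketched in a few lines: spatial filters are fit so that the filtered variance discriminates the two classes, and log-variance features are passed to a linear discriminant classifier. The sketch below is a minimal illustration on synthetic data, not the paper's implementation; the channel count, trial length, and the variance-boost simulation of an "auditory event" are all assumptions made for the example.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def csp_filters(X1, X2, n_components=4):
    """Fit CSP spatial filters from two classes of trials, each (n, channels, samples)."""
    def avg_cov(X):
        covs = [x @ x.T / np.trace(x @ x.T) for x in X]
        return np.mean(covs, axis=0)
    C1, C2 = avg_cov(X1), avg_cov(X2)
    # Generalized eigenproblem: filters maximising the variance ratio between classes
    vals, vecs = eigh(C1, C1 + C2)
    order = np.argsort(vals)
    # Keep filters from both ends of the eigenvalue spectrum
    picks = np.concatenate([order[:n_components // 2], order[-(n_components // 2):]])
    return vecs[:, picks].T  # (n_components, n_channels)

def csp_features(W, X):
    """Log-variance features of spatially filtered trials."""
    Z = np.einsum('ck,nkt->nct', W, X)
    var = Z.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

# Synthetic 8-channel "EEG": each class boosts variance on a different channel
rng = np.random.default_rng(0)
def make_trials(n, boosted_channel):
    X = rng.standard_normal((n, 8, 128))
    X[:, boosted_channel, :] *= 3.0
    return X

X1, X2 = make_trials(100, 0), make_trials(100, 1)
W = csp_filters(X1[:60], X2[:60])  # fit filters on training trials only
X_train = np.concatenate([X1[:60], X2[:60]])
y_train = np.array([0] * 60 + [1] * 60)
X_test = np.concatenate([X1[60:], X2[60:]])
y_test = np.array([0] * 40 + [1] * 40)

clf = LinearDiscriminantAnalysis().fit(csp_features(W, X_train), y_train)
accuracy = clf.score(csp_features(W, X_test), y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

Fitting the filters on training trials only, as above, matters for the unseen-participant evaluation the abstract describes: leaking test trials into the covariance estimates inflates accuracy.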

Source
http://dx.doi.org/10.1109/EMBC40787.2023.10340112

Similar Publications

Counting on AR: EEG responses to incongruent information with real-world context.

Comput Biol Med

December 2024

Know Center Research GmbH, Graz, Austria; Institute of Interactive Systems and Data Science, Graz University of Technology, Graz, Austria.

Augmented Reality (AR) technologies enhance the real world by integrating contextual digital information about physical entities. However, inconsistencies between physical reality and digital augmentations, which may arise from errors in the visualized information or the user's mental context, can considerably impact user experience. This work characterizes the brain dynamics associated with processing incongruent information within an AR environment.

Frontal EEG correlation based human emotion identification and classification.

Phys Eng Sci Med

November 2024

Department of Applied Mechanics and Biomedical Engineering, Indian Institute of Technology Madras, Chennai, India.

Humans express their feelings and the intentions behind their actions or communication through emotions. Recent technological advances increasingly involve machines in everyday human communication. Thus, machine understanding of human emotions would be very helpful for assisting users.

Prototype-based methods in deep learning offer interpretable explanations for decisions by comparing inputs to typical representatives in the data. This study explores the adaptation of SESM, a self-attention-based prototype method successful in electrocardiogram (ECG) tasks, for electroencephalogram (EEG) signals. The architecture is evaluated on sleep stage classification, exploring its efficacy in predicting stages with single-channel EEG.

Objective: Event-related potentials (ERPs) reflect electropotential changes within specific cortical regions in response to specific events or stimuli during cognitive processes. The P300 speller is an important application of ERP-based brain-computer interfaces (BCIs), offering potential assistance to individuals with severe motor disabilities by decoding their electroencephalography (EEG) to communicate.

Methods: This study introduced a novel speller paradigm using a dynamically growing bubble (GB) visualization as the stimulus, departing from the conventional flash stimulus (TF).

Motor imagery electroencephalography (EEG) analysis is crucial for the development of effective brain-computer interfaces (BCIs), yet it presents considerable challenges due to the complexity of the data and inter-subject variability. This paper introduces EEGCCT, an application of compact convolutional transformers designed specifically to improve the analysis of motor imagery tasks in EEG. Unlike traditional approaches, the EEGCCT model significantly enhances generalization from limited data, effectively addressing a common limitation in EEG datasets.
