Single-trial classification of vowel speech imagery using common spatial patterns.

Neural Netw

Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Yokohama, Japan.

Published: November 2009

With the goal of providing a speech prosthesis for individuals with severe communication impairments, we propose a control scheme for brain-computer interfaces using vowel speech imagery. Electroencephalography was recorded in three healthy subjects performing three tasks: imagined speech of the English vowels /a/ and /u/, and a no-action state as control. Trial averages revealed readiness potentials at 200 ms after stimulus and speech-related potentials peaking after 350 ms. Spatial filters optimized for task discrimination were designed using the common spatial patterns method, and the resulting feature vectors were classified with a nonlinear support vector machine. Overall classification accuracies ranged from 68% to 78%. The results indicate significant potential for vowel speech imagery as a speech prosthesis controller.
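The pipeline the abstract describes — CSP spatial filters fit on two-class EEG trials, log-variance features, then a nonlinear (RBF-kernel) SVM — can be sketched as below. This is a minimal illustration on synthetic data, not the authors' implementation; the channel counts, component numbers, and the variance-based toy signal are assumptions for demonstration.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.svm import SVC

def csp_filters(X_a, X_b, n_components=2):
    """Common spatial patterns filters for two classes of EEG trials.

    X_a, X_b: arrays of shape (trials, channels, samples).
    Returns a (2 * n_components, channels) spatial filter matrix W.
    """
    avg_cov = lambda X: np.mean([np.cov(trial) for trial in X], axis=0)
    C_a, C_b = avg_cov(X_a), avg_cov(X_b)
    # Generalized eigenproblem: eigenvectors with extreme eigenvalues
    # maximize the variance ratio between the two classes.
    evals, evecs = eigh(C_a, C_a + C_b)
    order = np.argsort(evals)  # ascending
    picks = np.concatenate([order[:n_components], order[-n_components:]])
    return evecs[:, picks].T

def csp_features(W, X):
    """Normalized log-variance features of spatially filtered trials."""
    Z = np.einsum('fc,tcs->tfs', W, X)      # apply filters to each trial
    var = Z.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

# Synthetic two-class data: each class has elevated variance on one channel.
rng = np.random.default_rng(0)
n_tr, n_ch, n_s = 40, 8, 128
X_a = rng.standard_normal((n_tr, n_ch, n_s)); X_a[:, 0] *= 3.0
X_b = rng.standard_normal((n_tr, n_ch, n_s)); X_b[:, 1] *= 3.0

W = csp_filters(X_a, X_b)
X = np.concatenate([X_a, X_b])
y = np.array([0] * n_tr + [1] * n_tr)
clf = SVC(kernel='rbf').fit(csp_features(W, X), y)
acc = clf.score(csp_features(W, X), y)
```

For the three-task setting in the paper, a pairwise (one-vs-one) arrangement of such two-class CSP + SVM stages is one common extension; the abstract itself only specifies the two-class building blocks.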

DOI: http://dx.doi.org/10.1016/j.neunet.2009.05.008
