The capacity to decode the kinematics of intended movement from neural activity is necessary for the development of neuromotor prostheses such as smart artificial arms. Thus far, most progress in the development of neuromotor prostheses has been achieved by decoding hand kinematics from intracranial neural activity. The comparatively low signal-to-noise ratio and spatial resolution of neural data acquired non-invasively from the scalp via electroencephalography (EEG) have been presumed to preclude the extraction of detailed information about hand kinematics. Here, we challenge this presumption by continuously decoding hand position, velocity, and acceleration from 55-channel EEG signals acquired from five subjects during three-dimensional center-out reaching. To preserve ecological validity, reaches were self-initiated and targets were self-selected. After cross-validation, the overall mean correlation coefficients between measured and reconstructed position, velocity, and acceleration were 0.2, 0.3, and 0.3, respectively. These modest results support the continued development of non-invasive neuromotor prostheses for movement-impaired individuals.
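The abstract reports cross-validated correlations between measured and reconstructed kinematics but does not specify the decoding model. The sketch below illustrates one common approach under assumed details: a lagged linear (ridge-regression) decoder mapping 55-channel EEG features to a single kinematic trace, evaluated by 5-fold cross-validated Pearson correlation. All data here are synthetic, and the lag window, regularization, and fold count are illustrative choices, not the authors' method.

```python
# Hypothetical sketch: lagged linear decoding of a kinematic trace from EEG,
# scored by cross-validated correlation. Data are synthetic placeholders.
import numpy as np
from numpy.linalg import solve

rng = np.random.default_rng(0)

n_samples, n_channels, n_lags = 5000, 55, 10     # 55-channel EEG, 10 past lags (assumed)
eeg = rng.standard_normal((n_samples, n_channels))   # surrogate EEG samples
velocity = rng.standard_normal(n_samples)            # surrogate hand velocity (one axis)

def lagged_design(x, n_lags):
    """Stack the current sample and n_lags-1 past samples of every channel."""
    n, c = x.shape
    X = np.zeros((n, c * n_lags))
    for k in range(n_lags):
        X[k:, k * c:(k + 1) * c] = x[:n - k]
    return X

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression weights."""
    d = X.shape[1]
    return solve(X.T @ X + alpha * np.eye(d), X.T @ y)

# 5-fold cross-validation: fit the decoder on four folds, then correlate the
# reconstruction with the measured trace on the held-out fold.
X = lagged_design(eeg, n_lags)
folds = np.array_split(np.arange(n_samples), 5)
corrs = []
for test_idx in folds:
    train_idx = np.setdiff1d(np.arange(n_samples), test_idx)
    w = ridge_fit(X[train_idx], velocity[train_idx])
    pred = X[test_idx] @ w
    corrs.append(np.corrcoef(pred, velocity[test_idx])[0, 1])

print(f"mean cross-validated r = {np.mean(corrs):.2f}")
```

With real EEG and measured kinematics in place of the surrogate arrays, the same procedure yields per-fold correlation coefficients of the kind summarized in the abstract.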
DOI: http://dx.doi.org/10.1109/IEMBS.2009.5334606