Rapid recognition of voluntary motions is crucial in human-computer interaction, but few studies compare the predictive abilities of different sensing technologies. This paper therefore compares the performance of different technologies in predicting the targets of human reaching motions: electroencephalography (EEG), electrooculography, camera-based eye tracking, electromyography (EMG), hand position, and the user's preferences. Supervised machine learning is used to make predictions at different points in time (before and during limb motion) with each individual sensing modality. The modalities are then combined using an algorithm that takes into account the different times at which each modality provides useful information. Results show that EEG can make predictions before limb motion onset, but requires subject-specific training and exhibits decreased performance as the number of possible targets increases. EMG and hand position give high accuracy, but only once the motion has begun. Eye tracking is robust and exhibits high accuracy at the very onset of limb motion. Several advantages of combining different modalities are also shown, including the benefit of combining measurements with contextual data. Finally, recommendations are given for choosing sensing modalities with regard to different criteria and applications. This information could aid human-computer interaction designers in selecting and evaluating appropriate equipment for their applications.
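To make the time-dependent fusion idea concrete, the sketch below shows one plausible way to combine per-modality target probabilities with weights that vary with time relative to motion onset, so modalities that are informative early (e.g. EEG, eye tracking) dominate before onset and others (e.g. EMG, hand position) take over once the limb is moving. This is an illustrative assumption, not the authors' algorithm; the function `fuse_predictions`, the weight functions, and all numbers are hypothetical.

```python
# Illustrative sketch (not the paper's method): fuse per-modality class
# probabilities with time-dependent weights.
import numpy as np

def fuse_predictions(proba_by_modality: dict, weights_by_modality: dict, t: float) -> np.ndarray:
    """Weighted fusion of per-modality target probabilities at time t (seconds from motion onset).

    proba_by_modality:   modality name -> array of shape (n_targets,) with class probabilities
    weights_by_modality: modality name -> callable mapping time t to a non-negative weight
    """
    n_targets = len(next(iter(proba_by_modality.values())))
    fused = np.zeros(n_targets)
    total_weight = 0.0
    for name, proba in proba_by_modality.items():
        w = weights_by_modality[name](t)
        fused += w * np.asarray(proba, dtype=float)
        total_weight += w
    # Fall back to a uniform distribution if no modality is informative at time t.
    return fused / total_weight if total_weight > 0 else np.full(n_targets, 1.0 / n_targets)

# Hypothetical weighting: EEG trusted mainly pre-motion, EMG only after motion begins.
weights = {
    "eeg":  lambda t: 1.0 if t < 0 else 0.2,
    "gaze": lambda t: 1.0,
    "emg":  lambda t: 0.0 if t < 0 else 1.0,
}
probas = {
    "eeg":  [0.4, 0.3, 0.3],
    "gaze": [0.7, 0.2, 0.1],
    "emg":  [0.1, 0.8, 0.1],
}
fused = fuse_predictions(probas, weights, t=0.2)   # 0.2 s after motion onset
print(fused, "-> predicted target:", int(np.argmax(fused)))
```

In practice each probability vector would come from a supervised classifier trained on the corresponding modality, and the weight functions would be tuned to reflect when each modality becomes informative.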
DOI: http://dx.doi.org/10.1109/TBME.2013.2262455