There are over 466 million people in the world with disabling hearing loss. People with severe-to-profound hearing impairment need to lipread or use sign language, even with hearing aids. Assistive technologies play a vital role in helping these people interact efficiently with their environment. Deaf drivers are currently unable to take full advantage of voice-based navigation applications. In this paper, we describe research aimed at developing an assistive device that (1) recognizes voice-stream navigation instructions from GPS-based navigation applications, and (2) maps each voiced navigation instruction to a vibrotactile stimulus that can be perceived and understood by deaf drivers. A 13-element feature vector is extracted from each voice stream and classified into one of six categories, where each category represents a unique navigation instruction. The classification of the feature vectors is done using a K-Nearest-Neighbor classifier (with an accuracy of 99.05%), which was found to outperform five other classifiers. Each category is then mapped to a unique vibration pattern, which drives vibration motors in real time. A usability study was conducted with ten participants. Three different alternatives were tested to find the best body locations for mounting the vibration motors. The solution ultimately chosen was two sets of five vibration motors, with each set mounted on a bracelet. Ten drivers were asked to rate the proposed device (based on eight different factors) after they used the assistive device on 8 driving routes. The overall mean rating across all eight factors was 4.67 (out of 5). This indicates that the proposed assistive device was seen as useful and effective.
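The classification stage described above (a 13-element feature vector per voice stream, assigned to one of six navigation-instruction categories by a K-Nearest-Neighbor classifier, with each category mapped to a vibration pattern) can be sketched as follows. This is a minimal illustration only: the synthetic data, the value of k, and the category-to-pattern mapping are assumptions for demonstration, not the paper's actual dataset, parameters, or patterns.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_per_class, n_features, n_classes = 50, 13, 6

# Synthetic clusters standing in for the paper's voice-stream features:
# one Gaussian cluster per navigation-instruction category.
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# KNN classifier over the 13-dimensional feature vectors
# (k=5 is an illustrative choice, not taken from the paper).
clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)

# Hypothetical category-to-vibration-pattern mapping; the real
# patterns driving the bracelet motors are not given in the abstract.
PATTERNS = {0: "left-short", 1: "left-long", 2: "right-short",
            3: "right-long", 4: "straight", 5: "u-turn"}

# Classify a new feature vector and look up its vibration pattern.
sample = rng.normal(loc=2.0, scale=0.3, size=(1, n_features))
category = int(clf.predict(sample)[0])
print(category, PATTERNS[category])
```

In the actual device this prediction would run in real time on each recognized voice stream, with the selected pattern driving the two five-motor bracelets.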
DOI: http://dx.doi.org/10.1080/10400435.2020.1712499