Communication is an important part of our daily interactions; however, it can be hindered by visual or auditory impairment, or because the usual communication channels are overloaded. When standard channels are not available, our sense of touch offers an alternative sensory modality for transmitting messages. Multisensory haptic cues, which combine multiple types of haptic sensations, have shown promise for applications such as haptic communication that require large discrete cue sets while maintaining a small, wearable form factor. This article presents language transmission using a multisensory haptic device that occupies a small footprint on the upper arm. In our approach, phonemes are encoded as multisensory haptic cues consisting of vibration, radial squeeze, and lateral skin stretch components. Participants learned to identify haptically transmitted phonemes and words over a four-day training period. A subset of participants continued training to extend word recognition to free response. Participants identified words with 89% accuracy after four days using multiple choice, and with 70% accuracy after eight days using free response. These results demonstrate high word-recognition performance with a small, wearable device and show promise for multisensory haptics as a medium for communication.
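To make the encoding idea concrete, here is a minimal sketch of a phoneme-to-cue table, assuming a hypothetical `HapticCue` structure; the phoneme labels, actuator values, and mapping are illustrative assumptions, not the cue set used in the article.

```python
from dataclasses import dataclass

# Hypothetical multisensory cue: each phoneme maps to a combination of
# vibration, radial squeeze, and lateral skin stretch components.
@dataclass(frozen=True)
class HapticCue:
    vibration_hz: float      # vibrotactile frequency (illustrative value)
    squeeze_level: int       # discrete radial squeeze level, 0 = none
    stretch_direction: str   # lateral skin stretch: "none", "cw", or "ccw"

# Illustrative phoneme-to-cue table (not the mapping used in the study).
PHONEME_CUES = {
    "AH": HapticCue(vibration_hz=60.0,  squeeze_level=0, stretch_direction="none"),
    "K":  HapticCue(vibration_hz=250.0, squeeze_level=1, stretch_direction="cw"),
    "S":  HapticCue(vibration_hz=250.0, squeeze_level=0, stretch_direction="ccw"),
    "T":  HapticCue(vibration_hz=60.0,  squeeze_level=1, stretch_direction="cw"),
}

def encode_word(phonemes):
    """Translate a phoneme sequence into the cue sequence a wearable device would play."""
    return [PHONEME_CUES[p] for p in phonemes]

if __name__ == "__main__":
    # "cat" -> K AH T (simplified ARPAbet-style transcription)
    for cue in encode_word(["K", "AH", "T"]):
        print(cue)
```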
DOI: http://dx.doi.org/10.1109/TOH.2020.3009581
Sci Rep
December 2024
University of Passau, Chair for Multilingual Computerlinguistics, 94032, Passau, Germany.
We present a novel approach for testing genealogical relations between language families. Our method, which has previously only been applied to closely related languages, makes predictions for cognate reflexes based on the regularity of proposed sound correspondences between language families hypothesized to be related. We test the hypothesized genealogical relation between Panoan and Takanan, two language families of the Amazon.
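The prediction step can be sketched as below, assuming an entirely hypothetical correspondence table and proto-form; none of the segments or rules are taken from the Panoan-Takanan data used in the paper.

```python
# Hypothetical sound-correspondence table between a "family A" proto-form
# and its expected "family B" reflex segments. Regular correspondences
# allow cognate reflexes to be predicted segment by segment.
CORRESPONDENCES = {
    "p": "p",
    "t": "t",
    "k": "h",   # assumed regular lenition k -> h in family B
    "a": "a",
    "i": "i",
    "u": "o",   # assumed regular vowel lowering u -> o
}

def predict_reflex(segments):
    """Predict a family-B cognate reflex from family-A segments.
    Returns None if any segment lacks a proposed regular correspondence."""
    predicted = []
    for seg in segments:
        if seg not in CORRESPONDENCES:
            return None
        predicted.append(CORRESPONDENCES[seg])
    return "".join(predicted)

# Illustrative use: predict the reflex of a hypothetical proto-form *kati.
print(predict_reflex(["k", "a", "t", "i"]))  # -> "hati"
```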
Ear Hear
December 2024
Office of Research in Clinical Amplification, WS Audiology, Lisle, Illinois, USA.
Objectives: To evaluate whether hearing aid directivity based on multistream architecture (MSA) might enhance the mismatch negativity (MMN) evoked by phonemic contrasts in noise.
Design: Single-blind within-subjects design. Fifteen older adults (mean age = 72.
Heliyon
December 2024
Remote Sensing Unit, Electrical Engineering Department, Northern Border University, Arar, Saudi Arabia.
This research paper investigates the application of Genetic Algorithms (GAs) in optimizing Artificial Neural Networks (ANNs) for phoneme recognition. The study examines the formalism of GAs, their parameters and operators, and describes the genetic strategy adopted for phoneme recognition using the TIMIT speech database. The paper presents the outcomes of experiments conducted on the phonemes of the TIMIT test set and the DR1 dialect training set, and compares the recognition rates obtained by this training and testing with those obtained in experiments guided by Self-Organizing Maps (SOMs).
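A minimal sketch of the GA-over-ANN idea follows; the genome layout (hidden units and learning rate), the surrogate fitness function, and the GA parameters are assumptions for illustration, since the paper's exact operators and the TIMIT training pipeline are not reproduced here.

```python
import random

random.seed(0)

# Genome: (hidden_units, learning_rate) -- an assumed encoding of ANN hyperparameters.
def random_genome():
    return (random.randint(8, 256), 10 ** random.uniform(-4, -1))

def fitness(genome):
    """Stand-in for phoneme-recognition accuracy on a validation set.
    A real implementation would train the ANN on TIMIT and return its accuracy."""
    hidden, lr = genome
    # Toy surrogate that peaks near 128 hidden units and a learning rate of 0.01.
    return -((hidden - 128) / 128) ** 2 - (abs(lr - 0.01) / 0.01) ** 0.5

def crossover(a, b):
    return (a[0], b[1]) if random.random() < 0.5 else (b[0], a[1])

def mutate(g, rate=0.2):
    hidden, lr = g
    if random.random() < rate:
        hidden = min(256, max(8, hidden + random.randint(-16, 16)))
    if random.random() < rate:
        lr = min(0.1, max(1e-4, lr * random.uniform(0.5, 2.0)))
    return (hidden, lr)

def evolve(pop_size=20, generations=30):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]          # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

print(evolve())  # best (hidden_units, learning_rate) found by the toy GA
```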
Diagnostics (Basel)
November 2024
College of Medicine, National Chung Hsing University, Taichung 402202, Taiwan.
Dysarthria, a motor speech disorder caused by neurological damage, significantly hampers speech intelligibility, creating communication barriers for affected individuals. Voice conversion (VC) systems have been developed to address this, yet accurately predicting phonemes in dysarthric speech remains a challenge due to its variability. This study proposes a novel approach that integrates Fuzzy Expectation Maximization (FEM) with diffusion models for enhanced phoneme prediction, aiming to improve the quality of dysarthric voice conversion.
Trends Hear
December 2024
Université de Lorraine, CNRS, Inria, Loria, Nancy, France.
In the intricate acoustic landscapes where speech intelligibility is challenged by noise and reverberation, multichannel speech enhancement emerges as a promising solution for individuals with hearing loss. Such algorithms are commonly evaluated at the utterance scale. However, this approach overlooks the granular acoustic nuances revealed by phoneme-specific analysis, potentially obscuring key insights into their performance.
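One way to picture a phoneme-specific analysis, as opposed to an utterance-scale score, is sketched below; the per-segment SNR-improvement metric and the alignment format are assumptions for illustration and are not the evaluation protocol of the article.

```python
import math
from collections import defaultdict

def snr_db(signal, noise):
    """Time-domain signal-to-noise ratio in dB for sample lists (toy version)."""
    p_sig = sum(x * x for x in signal) / len(signal)
    p_noise = sum(x * x for x in noise) / len(noise) + 1e-12
    return 10 * math.log10(p_sig / p_noise)

def per_phoneme_improvement(clean, noisy, enhanced, alignment):
    """Average SNR improvement per phoneme label.
    `alignment` is a list of (label, start, end) sample indices, e.g. from a forced alignment."""
    gains = defaultdict(list)
    for label, start, end in alignment:
        c, n, e = clean[start:end], noisy[start:end], enhanced[start:end]
        before = snr_db(c, [ni - ci for ni, ci in zip(n, c)])
        after = snr_db(c, [ei - ci for ei, ci in zip(e, c)])
        gains[label].append(after - before)
    return {label: sum(v) / len(v) for label, v in gains.items()}

# Toy example with made-up samples, just to show the call shape.
clean     = [1.0, 1.0, 0.5, 0.5]
noisy     = [1.4, 0.6, 0.9, 0.1]
enhanced  = [1.1, 0.9, 0.6, 0.4]
alignment = [("AH", 0, 2), ("S", 2, 4)]
print(per_phoneme_improvement(clean, noisy, enhanced, alignment))
```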