Tactile speech aids, though extensively studied in the 1980s and 1990s, never became a commercial success. One hypothesis to explain this failure is that it is difficult to obtain true perceptual integration of a tactile signal with information from auditory speech: exploiting the cues from a tactile aid may require cognitive effort and so prevent speech understanding at the high rates typical of everyday speech. To test this hypothesis, we attempted to create true perceptual integration of tactile with auditory information in what might be considered the simplest situation encountered by a hearing-impaired listener. We created an auditory continuum between the syllables /BA/ and /VA/, and trained participants to associate /BA/ with one tactile stimulus and /VA/ with another. After training, we tested whether auditory discrimination along the continuum between the two syllables could be biased by incongruent tactile stimulation. We found that such a bias occurred only when the tactile stimulus was above, but not when it was below, its previously measured tactile discrimination threshold. This pattern is compatible with the idea that the effect reflects a cognitive or decisional strategy rather than truly perceptual integration. We therefore ran a further study (Experiment 2), in which we created a tactile version of the McGurk effect. We extensively trained two subjects over six days to associate four recorded auditory syllables with four corresponding apparent-motion tactile patterns. In a subsequent test, we presented stimulation that was either congruent or incongruent with the learnt association and asked subjects to report the syllable they perceived. We found no analog of the McGurk effect, suggesting that the tactile stimulation was not being perceptually integrated with the auditory syllable.
These findings strengthen our hypothesis that tactile aids failed because integration of tactile cues with auditory speech occurred at a cognitive or decisional level, rather than at a truly perceptual level.
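The bias test described above asks whether incongruent tactile stimulation shifts where listeners flip from hearing /BA/ to hearing /VA/ along the continuum. As a rough illustration of how such a bias can be quantified (the response proportions below are made up for the example, not the study's data), the shift can be measured as a change in the point of subjective equality (PSE), the morph level at which /VA/ responses reach 50%:

```python
import numpy as np

# Hypothetical proportions of /VA/ responses at 7 morph steps
# along the /BA/-/VA/ continuum, heard alone vs. paired with
# an incongruent tactile stimulus (illustrative numbers only).
steps = np.linspace(0.0, 1.0, 7)               # 0 = /BA/, 1 = /VA/
p_alone       = np.array([0.02, 0.05, 0.15, 0.50, 0.85, 0.95, 0.98])
p_incongruent = np.array([0.05, 0.12, 0.30, 0.68, 0.92, 0.97, 0.99])

def pse(x, p):
    """Point of subjective equality: the morph level where
    p(/VA/) = 0.5, found by linear interpolation of the curve."""
    return float(np.interp(0.5, p, x))

# A positive shift means the tactile stimulus pushed responses
# toward /VA/ (the curve moved left along the continuum).
shift = pse(steps, p_alone) - pse(steps, p_incongruent)
print(f"PSE shift from tactile stimulation: {shift:.3f} morph units")
```

A fuller analysis would fit a psychometric function (e.g. a logistic) rather than interpolate, but the PSE-shift logic is the same.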


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5974558
DOI: http://dx.doi.org/10.3389/fpsyg.2018.00767


Similar Publications

In response to the low automation levels, heavy labor intensity, and high accident rates of underground coal mine auxiliary transportation systems, this paper presents the mining trackless auxiliary transportation robot (MTATBOT). The MTATBOT is specially designed for long-range, space-constrained, and explosion-proof underground coal mine environments. With an onboard perception and autopilot system, the MTATBOT can perform automated, unmanned subterranean material transportation.


Fixational eye movements and edge integration in lightness perception.

Vision Res

January 2025

Department of Psychology, University of Nevada, Reno, NV 89557, United States.

A neural theory of human lightness computation is described and computer-simulated. The theory proposes that lightness is derived from transient ON and OFF cell responses in the early visual pathways that have different characteristic neural gains and that are generated by fixational eye movements (FEMs) as the eyes transit luminance edges in the image. The ON and OFF responses are combined with corollary discharge signals that encode the eye movement direction to create directionally selective ON and OFF responses.


Current neural network models of primate vision focus on replicating overall levels of behavioral accuracy, often neglecting the rich, dynamic nature of perceptual decisions. Here, we introduce a novel computational framework for modeling the dynamics of human behavioral choices by learning to align the temporal dynamics of a recurrent neural network (RNN) with human reaction times (RTs). We describe an approximation that allows us to constrain the number of time steps an RNN takes to solve a task using human RTs.
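The core idea above is to make a model's time-to-decision match human RTs. As a loose, hypothetical illustration (a noisy evidence accumulator standing in for the RNN's recurrent dynamics, not the authors' method), one can sweep a decision threshold so that the model's mean decision time approaches a target RT measured in time steps:

```python
import numpy as np

rng = np.random.default_rng(0)

def steps_to_decision(drift, threshold, max_steps=200):
    """Count the time steps a noisy accumulator needs to reach a
    decision bound; this plays the role of the RNN's RT."""
    evidence = 0.0
    for t in range(1, max_steps + 1):
        evidence += drift + rng.normal(0.0, 0.1)
        if abs(evidence) >= threshold:
            return t
    return max_steps

# Align the model's mean decision time with a hypothetical human
# RT of 30 steps by sweeping the decision threshold.
human_rt = 30
thresholds = np.linspace(0.5, 5.0, 20)
mean_rts = [np.mean([steps_to_decision(0.1, th) for _ in range(50)])
            for th in thresholds]
best = thresholds[int(np.argmin([(m - human_rt) ** 2 for m in mean_rts]))]
print(f"threshold best matching human RT: {best:.2f}")
```

The paper's framework instead learns this alignment end to end during training; the sweep here only illustrates the constraint being imposed.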


Automatic image generation and stage prediction of breast cancer immunobiological through a proposed IHC-GAN model.

BMC Med Imaging

January 2025

Electronics and Communications, Arab Academy for Science, Heliopolis, Cairo, 2033, Egypt.

Invasive breast cancer diagnosis and treatment planning require an accurate assessment of human epidermal growth factor receptor 2 (HER2) expression levels. While immunohistochemical techniques (IHC) are the gold standard for HER2 evaluation, their implementation can be resource-intensive and costly. To reduce these obstacles and expedite the procedure, we present an efficient deep-learning model that generates high-quality IHC-stained images directly from Hematoxylin and Eosin (H&E) stained images.


Background: The ongoing COVID-19 pandemic and the current shortage of speech-language pathologists in Thailand have limited access to speech services for children with cleft palate with or without cleft lip (CP±L). A combination of telepractice (TP) and face-to-face therapy could address the lack of continuous service and improve accessibility to speech therapy providers. This study aimed to compare the percentage of consonants correct (PCC) before and after speech therapy in children with CP±L.
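The PCC outcome measure used above is a simple proportion: target consonants produced correctly, divided by all target consonants in the sample, times 100. A minimal sketch with hypothetical pre- and post-therapy counts (not the study's data):

```python
def percent_consonants_correct(correct: int, total: int) -> float:
    """Percentage of Consonants Correct (PCC): the share of target
    consonants a speaker produces correctly, expressed as a percent."""
    if total == 0:
        raise ValueError("sample contains no target consonants")
    return 100.0 * correct / total

# Hypothetical counts for one child, same 80-consonant sample:
pre = percent_consonants_correct(34, 80)   # 42.5
post = percent_consonants_correct(58, 80)  # 72.5
print(f"PCC before therapy: {pre:.1f}%, after: {post:.1f}%")
```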

