How do deaf and deafblind individuals process touch? This question offers a unique model for understanding the prospects and constraints of neural plasticity. Our brain constantly receives and processes signals from the environment and combines them into the most reliable information available. The nervous system adapts its functional and structural organization to this input, and perceptual processing develops as a function of individual experience.
The impact of congenital deafness on the development of vision has been investigated to a considerable degree. However, whether multisensory processing is affected by auditory deprivation has remained largely overlooked. To fill this gap, we investigated the consequences of profound auditory deprivation from birth on visuo-tactile processing.
Areas within the left-lateralized neural network for language have been found to be sensitive to syntactic complexity in spoken and written language. Previous research has revealed that these areas are active for sign language as well, but whether they are specifically responsive to syntactic complexity in sign language, independent of lexical processing, has yet to be determined. To investigate this question, we used fMRI to examine deaf native signers' comprehension of 180 sign strings in American Sign Language (ASL) with a picture-probe recognition task.
Bilinguals, both hearing and deaf, activate multiple languages simultaneously even in contexts that require only one language. To date, the point in development at which bilingual signers experience cross-language activation of a signed and a spoken language remains unknown. We investigated the processing of written words by ASL-English bilingual deaf middle school students.
In humans, face processing relies on a network of brain regions predominantly in the right occipito-temporal cortex. We tested congenitally deaf (CD) signers and matched hearing controls (HC) to investigate the experience dependence of the cortical organization of face processing. Specifically, we used EEG frequency-tagging to evaluate: (1) face-object categorization, (2) emotional facial-expression discrimination, and (3) individual face discrimination.
Emotions are commonly recognized by combining auditory and visual signals (i.e., vocal and facial expressions).
Several studies conducted in mammals and humans have shown that multisensory processing may be impaired following congenital sensory loss, particularly if no sensory experience is acquired within specific early developmental time windows known as sensitive periods. In this study we investigated whether basic multisensory abilities are impaired in hearing-restored individuals with deafness acquired at different stages of development. To this end, we tested congenitally and late deaf cochlear implant (CI) recipients, age-matched with two groups of hearing controls, on an audio-tactile redundancy paradigm in which reaction times to unimodal and crossmodal redundant signals were measured.
Deaf bilinguals for whom American Sign Language (ASL) is the first language and English is the second language judged the semantic relatedness of word pairs in English. Critically, a subset of both the semantically related and unrelated word pairs were selected such that the translations of the two English words also had related forms in ASL. Word pairs that were semantically related were judged more quickly when the form of the ASL translation was also similar, whereas word pairs that were semantically unrelated were judged more slowly when the form of the ASL translation was similar.