Abnormalities in the integration of auditory and visual language inputs could underlie many core psychotic features. Perceptual confusion may arise because of the normal propensity of visual speech perception to evoke auditory percepts. Recent functional neuroimaging studies of normal subjects have demonstrated activation in auditory-linguistic brain areas in response to silent lip-reading. Three functional magnetic resonance imaging experiments were carried out on seven normal volunteers and 14 schizophrenia patients, half of whom were actively psychotic. The tasks involved listening to auditory speech, silent lip-reading (visual speech), and perception of meaningless lip movements (visual non-speech). Subjects also undertook a behavioural study of audio-visual word identification designed to evoke perceptual fusions. Patients and controls were both susceptible to audio-visual fusions on the behavioural task. The patient group as a whole showed less activation than controls in superior and inferior posterior temporal areas while performing the silent lip-reading task. When attending to visual non-speech, the patients showed less activation in posterior (occipito-temporal) areas and more activation in anterior (frontal, insular and striatal) areas than controls; this difference was accounted for largely by the psychotic subgroup. Insular and striatal areas were also activated in both subject groups during auditory speech perception, demonstrating the bimodal sensitivity of these regions. The results suggest that schizophrenia patients with psychotic symptoms respond to visually ambiguous stimuli (non-speech) by activating polysensory structures. This could reflect particular processing strategies and may increase susceptibility to certain paranoid and hallucinatory symptoms.
DOI: http://dx.doi.org/10.1016/s0925-4927(00)00081-0
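The behavioural fusion task described in the abstract is of the McGurk type, in which a mismatched auditory and visual token is often reported as a third, "fused" percept. As a purely illustrative sketch, not the study's actual scoring procedure, the snippet below shows one way susceptibility to such fusions could be tallied from trial-level responses; all stimulus labels and field names are hypothetical.

```python
# Illustrative sketch (not the study's analysis): tallying fusion responses in a
# McGurk-type audio-visual word identification task. All labels are hypothetical.
from collections import Counter

def fusion_rate(trials):
    """trials: list of dicts with 'auditory', 'visual', 'fused', 'response' keys.
    Returns the proportion of trials on which the reported percept matched the
    predicted audio-visual fusion rather than either unimodal token."""
    counts = Counter()
    for t in trials:
        if t["response"] == t["fused"]:
            counts["fusion"] += 1
        elif t["response"] == t["auditory"]:
            counts["auditory"] += 1
        elif t["response"] == t["visual"]:
            counts["visual"] += 1
        else:
            counts["other"] += 1
    total = sum(counts.values())
    return counts["fusion"] / total if total else 0.0

# Example: auditory /ba/ dubbed onto visual /ga/ is often heard as "da".
example = [
    {"auditory": "ba", "visual": "ga", "fused": "da", "response": "da"},
    {"auditory": "ba", "visual": "ga", "fused": "da", "response": "ba"},
]
print(fusion_rate(example))  # 0.5
```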
J Acoust Soc Am
January 2025
USC Viterbi School of Engineering, University of Southern California, Los Angeles, California 90089-1455, USA.
Voice quality serves as a rich source of information about speakers, providing listeners with impressions of identity, emotional state, age, sex, reproductive fitness, and other biologically and socially salient characteristics. Understanding how this information is transmitted, accessed, and exploited requires knowledge of the psychoacoustic dimensions along which voices vary, an area that remains largely unexplored. Recent studies of English speakers have shown that two factors related to speaker size and arousal consistently emerge as the most important determinants of quality, regardless of who is speaking.
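As a purely illustrative aside, the kind of result described above (a small number of dimensions, such as size- and arousal-related factors, accounting for most voice-quality variation) can be sketched with a principal component analysis over per-speaker acoustic measures. The synthetic data, feature count, and use of scikit-learn below are assumptions for demonstration, not the method or materials of the study.

```python
# Illustrative sketch only: recovering a few latent dimensions from a matrix of
# per-speaker acoustic measures (e.g., F0, formant spacing, H1-H2, HNR).
# The data here are synthetic, not the study's.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))          # 200 voices x 6 acoustic measures
X[:, 1] += 0.8 * X[:, 0]               # induce some shared variance

Z = StandardScaler().fit_transform(X)  # z-score each measure
pca = PCA(n_components=2).fit(Z)       # keep the two strongest components
print(pca.explained_variance_ratio_)   # variance captured by each factor
print(pca.components_)                 # loadings: which measures define each factor
```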
Audiol Res
January 2025
Otolaryngology Unit, Department of Translational Medicine and Neuroscience-DiBrain, University of Bari, 70124 Bari, Italy.
Aim: The aim of this study was to assess the subjective experiences of adults with different cochlear implant (CI) configurations, namely unilateral cochlear implant (UCI), bilateral cochlear implant (BCI), and bimodal stimulation (BM), focusing on their perception of speech in quiet and noisy environments, music, environmental sounds, people's voices and tinnitus.
Methods: A cross-sectional survey of 130 adults who had undergone UCI, BCI, or BM was conducted. Participants completed a six-item online questionnaire, assessing difficulty levels and psychological impact across auditory domains, with responses measured on a 10-point scale.
Codas
January 2025
Departamento de Fonoaudiologia, Universidade Federal de Minas Gerais - UFMG - Belo Horizonte (MG), Brasil.
Purpose: This study investigated the association between self-perception of stuttering and self-perception of hearing, speech fluency profile, and contextual aspects in Brazilian adults who stutter.
Methods: Fifty-five adults who stutter (ages 18 to 58 years), speakers of Brazilian Portuguese, participated in an observational study that included: (a) a clinical history survey to collect identification, sociodemographic, clinical, and assistance data; (b) the Brazil Economic Classification Criteria (CCEB); (c) a hearing self-perception questionnaire (Speech, Spatial and Qualities of Hearing Scale - SSQ, version 5.6); (d) self-perception of the impact of stuttering (Brazilian Portuguese version of the Overall Assessment of the Speaker's Experience of Stuttering - Adults - OASES-A); and (e) an assessment of speech fluency (Fluency Profile Assessment Protocol - PAPF).
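One simple, hypothetical way to probe the association named in the Purpose (between the SSQ hearing self-perception score and the OASES-A stuttering-impact score) would be a rank correlation. The sketch below uses made-up scores and SciPy's spearmanr purely for illustration; it does not reproduce the study's own analysis.

```python
# Illustrative sketch only: a rank correlation between a hearing self-perception
# score (SSQ) and a stuttering-impact score (OASES-A). All values are made up.
from scipy.stats import spearmanr

ssq_total = [7.2, 5.8, 6.5, 4.9, 8.1, 6.0, 5.2, 7.7]    # hypothetical SSQ means
oases_total = [2.1, 3.0, 2.6, 3.4, 1.8, 2.9, 3.2, 2.0]  # hypothetical OASES-A scores

rho, p = spearmanr(ssq_total, oases_total)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```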
J Acoust Soc Am
January 2025
Dyson School of Design Engineering, Imperial College London, SW7 2DB London, United Kingdom.
To date, there is strong evidence indicating that humans with normal hearing can adapt to non-individual head-related transfer functions (HRTFs). However, less attention has been given to studying the generalization of this adaptation to untrained conditions. This study investigated how adaptation to one set of HRTFs can generalize to another set of HRTFs.
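For readers unfamiliar with HRTFs: spatial rendering applies a listener-specific pair of filters, one per ear, to a sound source, and it is these filters that listeners must adapt to when they are not their own. The sketch below is a minimal, assumption-laden illustration of binaural rendering with placeholder head-related impulse responses (the time-domain form of an HRTF); it is not drawn from the study's materials.

```python
# Illustrative sketch only: binaural rendering with a head-related impulse
# response (HRIR) pair. The HRIRs and the source signal are placeholders.
import numpy as np
from scipy.signal import fftconvolve

fs = 48_000
mono = np.random.default_rng(1).normal(size=fs)       # 1 s of placeholder audio
hrir_left = np.zeros(256);  hrir_left[0] = 1.0        # dummy left-ear impulse response
hrir_right = np.zeros(256); hrir_right[20] = 0.7      # delayed, attenuated right ear

left = fftconvolve(mono, hrir_left, mode="full")      # filter each ear separately
right = fftconvolve(mono, hrir_right, mode="full")
binaural = np.stack([left, right], axis=1)            # 2-channel output
print(binaural.shape)
```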
J Neurosci
January 2025
Oregon Hearing Research Center, Oregon Health and Science University, Portland, OR 97239, USA.
In everyday hearing, listeners face the challenge of understanding behaviorally relevant foreground stimuli (speech, vocalizations) in complex backgrounds (environmental, mechanical noise). Prior studies have shown that high-order areas of human auditory cortex (AC) pre-attentively form an enhanced representation of foreground stimuli in the presence of background noise. This enhancement requires identifying and grouping the features that comprise the background so they can be removed from the foreground representation.