Background: Deficits in Multisensory Integration (MSI) in ASD have been reported repeatedly and have been suggested to be caused by altered long-range connectivity. Here we investigate behavioral and ERP correlates of MSI in ASD using ecologically valid videos of emotional expressions.
Methods: In the present study, we set out to investigate the electrophysiological correlates of audiovisual MSI in young autistic and neurotypical adolescents.
Recognising a speaker's identity by the sound of their voice is important for successful interaction. This skill depends on our ability to discriminate minute variations in the acoustics of the vocal signal. Performance on voice identity assessments varies widely across the population.
There are remarkable individual differences in the ability to recognise individuals by the sound of their voice. Theoretically, this ability is thought to depend on the coding accuracy of voices in a low-dimensional "voice-space". Here we were interested in how adaptive coding of voice identity relates to this variability in skill.
Sensory processing deficits and altered long-range connectivity putatively underlie Multisensory Integration (MSI) deficits in Autism Spectrum Disorder (ASD). The present study set out to investigate non-social MSI stimuli and their electrophysiological correlates in young neurotypical adolescents and adolescents with ASD. We report robust MSI effects at behavioural and electrophysiological levels.
Auditory agnosia is an inability to make sense of sound that cannot be explained by deficits in low-level hearing. In view of recent promising results in the area of neurorehabilitation of language disorders after stroke, we examined the effect of transcranial direct current stimulation (tDCS) in a young woman with general auditory agnosia caused by traumatic injury to the left inferior colliculus. Specifically, we studied activations to sound embedded in a block design using functional magnetic resonance imaging before and after application of anodal tDCS to the right auditory cortex.
Recognising the identity of conspecifics is an important yet highly variable skill. Approximately 2% of the population suffers from a socially debilitating deficit in face recognition. More recently, the existence of a similar deficit in voice perception has emerged (phonagnosia).
Soc Cogn Affect Neurosci
August 2017
Several theories conceptualise emotions along two main dimensions: valence (a continuum from negative to positive) and arousal (a continuum from low to high). These dimensions are typically treated as independent in many neuroimaging experiments, yet recent behavioural findings suggest that they are actually interdependent. This result has implications for neuroimaging design, analysis, and theoretical development.
Our vocal tone, the prosody, contributes much to the meaning of speech beyond the actual words. Indeed, the hesitant tone of a "yes" may be more telling than its affirmative lexical meaning. The human brain contains dorsal and ventral processing streams in the left hemisphere that underlie core linguistic abilities such as phonology, syntax, and semantics.
Objective: To use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi.
fMRI studies increasingly examine functions and properties of non-primary areas of human auditory cortex. However, there is currently no standardized localization procedure to reliably identify specific areas across individuals, such as the standard 'localizers' available in the visual domain. Here we present an fMRI 'voice localizer' scan allowing rapid and reliable localization of the voice-sensitive 'temporal voice areas' (TVA) of human auditory cortex.
Magneto-encephalography (MEG) was used to examine the cerebral response to affective non-verbal vocalizations (ANVs) at the single-subject level. Stimuli consisted of non-verbal affect bursts from the Montreal Affective Voices morphed to parametrically vary acoustical structure and perceived emotional properties. Scalp magnetic fields were recorded in three participants while they performed a 3-alternative forced choice emotion categorization task (Anger, Fear, Pleasure).
Successful social interaction hinges on accurate perception of emotional signals. These signals are typically conveyed multi-modally by the face and voice. Previous research has demonstrated uni-modal contrastive aftereffects for emotionally expressive faces or voices.
Accents provide information about a speaker's geographical, socio-economic, and ethnic background. Research in applied psychology and sociolinguistics suggests that we generally prefer our own accent to other varieties of our native language and attribute more positive traits to it. Despite the widespread influence of accents on social interactions and on educational and work settings, the neural underpinnings of this social bias toward our own accent, and what may drive it, are unexplored.
The human voice carries speech as well as important nonlinguistic signals that influence our social interactions. Among these cues that impact our behavior and communication with other people is the perceived emotional state of the speaker. A theoretical framework for the neural processing stages of emotional prosody has suggested that auditory emotion is perceived in multiple steps (Schirmer and Kotz, 2006) involving low-level auditory analysis and integration of the acoustic information followed by higher-level cognition.
Binge drinking is now considered a central public health issue and is associated with emotional and interpersonal problems, but the neural implications of these deficits remain unexplored. The present study aimed at offering the first insights into the effects of binge drinking on the neural processing of vocal affect. On the basis of an alcohol-consumption screening phase (204 students), 24 young adults (12 binge drinkers and 12 matched controls, mean age: 23.
Listeners exploit small interindividual variations around a generic acoustical structure to discriminate and identify individuals from their voice-a key requirement for social interactions. The human brain contains temporal voice areas (TVA) involved in an acoustic-based representation of voice identity, but the underlying coding mechanisms remain unknown. Indirect evidence suggests that identity representation in these areas could rely on a norm-based coding mechanism.
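The intuition behind norm-based coding can be illustrated with a toy sketch: a voice is treated as a point in a low-dimensional acoustic "voice-space", and its identity is coded as its direction and distance from a prototype (the norm, roughly the average voice). All names, dimensions, and values below are illustrative assumptions, not quantities from the study.

```python
import numpy as np

# Toy norm-based coding sketch (illustrative only): the "norm" is the
# average voice, and identity strength is distance from that norm.
PROTOTYPE = np.array([0.0, 0.0])  # hypothetical 2-D voice-space norm

def identity_strength(voice, norm=PROTOTYPE):
    """Distance from the norm: larger means a more distinctive voice."""
    return float(np.linalg.norm(np.asarray(voice, dtype=float) - norm))

def caricature(voice, factor, norm=PROTOTYPE):
    """Exaggerate identity by moving the voice away from the norm."""
    return norm + factor * (np.asarray(voice, dtype=float) - norm)

voice = np.array([3.0, 4.0])
stronger = caricature(voice, 1.5)  # same direction, farther from the norm
```

Under this scheme, caricaturing (moving a voice away from the norm along its own direction) increases identity strength, which is one behavioural signature used to test norm-based accounts.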
The "temporal voice areas" (TVAs; Belin et al., 2000) of the human brain show greater neuronal activity in response to human voices than to other categories of non-vocal sounds. However, a direct link between TVA activity and voice perception behavior has not yet been established.
Amplitude reduction of the P300 event-related potential has long been suggested as a marker for schizophrenia. However, recent research has shown that this reduction in the P300 amplitude is not specific to schizophrenia, as it can also be observed in related illnesses such as bipolar disorder. Due to this lack of specificity, the P300 elicited using traditional oddball paradigms may be a less valuable endophenotypic marker.
Voices carry large amounts of socially relevant information on persons, much like 'auditory faces'. Following Bruce and Young's (1986) seminal model of face perception, we propose that the cerebral processing of vocal information is organized in interacting but functionally dissociable pathways for processing the three main types of vocal information: speech, identity, and affect. The predictions of the 'auditory face' model of voice perception are reviewed in the light of recent clinical, psychological, and neuroimaging evidence.
Social interactions involve more than "just" language. As important is a more primitive nonlinguistic mode of communication acting in parallel with linguistic processes and driving our decisions to a much higher degree than is generally suspected. Amongst the "honest signals" that influence our behavior is perceived vocal attractiveness.
Previous research has demonstrated perceptual aftereffects for emotionally expressive faces, but the extent to which they can also be obtained in a different modality is unknown. In two experiments we show for the first time that adaptation to affective, non-linguistic vocalisations elicits significant auditory aftereffects. Adaptation to angry vocalisations caused voices drawn from an anger-fear morphed continuum to be perceived as less angry and more fearful, while adaptation to fearful vocalisations elicited opposite aftereffects (Experiment 1).
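The idea of a morphed continuum can be sketched in miniature: stimuli are generated by blending two endpoint expressions in graded steps. This is a deliberately simplified linear interpolation over illustrative feature vectors; real voice morphing relies on specialised signal-processing tools, not on mixing raw features, and the names below are assumptions.

```python
import numpy as np

def morph(a, b, alpha):
    """Linear blend: alpha=0 gives pure a, alpha=1 gives pure b."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return (1.0 - alpha) * a + alpha * b

# Hypothetical endpoint "feature vectors" for two emotional expressions
anger = np.array([1.0, 0.0])
fear = np.array([0.0, 1.0])

# Five equally spaced steps along the anger-fear continuum
continuum = [morph(anger, fear, a) for a in np.linspace(0.0, 1.0, 5)]
```

Presenting ambiguous mid-continuum stimuli after prolonged exposure to one endpoint is what reveals the contrastive aftereffect described above.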
We investigated the effects of adaptation to mouth shapes associated with different spoken sounds (sustained /m/ or /u/) on visual perception of lip speech. Participants were significantly more likely to label ambiguous faces on an /m/-to-/u/ continuum as saying /u/ following adaptation to /m/ mouth shapes than they were in a preadaptation test. By contrast, participants were significantly less likely to label the ambiguous faces as saying /u/ following adaptation to /u/ mouth shapes than they were in a preadaptation test.
It has been proposed that psychophysiological abnormalities in schizophrenia, such as decreased amplitude of the evoked potential component P300, may be genetically influenced. Studies of heritability of the P300 have used different and typically more complex tasks than those used in clinical studies of schizophrenia. Here we present data on P300 parameters on the same set of auditory and visual tasks in samples of twins, and patients with schizophrenia or bipolar disorder, to examine the P300 as a possible endophenotype.
While it is well established that different neural populations code different face views, behavioural evidence that these neurons also code other aspects of face shape is equivocal. For example, previous studies have interpreted the partial transfer of face aftereffects across different viewpoints as evidence for either view-specific coding of face shape or that the locus of adaptation is in face-coding mechanisms that are relatively robust to changes in face view. Here we show that it is possible to simultaneously induce aftereffects in opposite directions for 3/4 and front views of upright faces with manipulated mouth position (experiment 1).