Publications by authors named "Vivian Ciaramitaro"

Social anxiety is characterised by fear of negative evaluation and negative perceptual biases; however, the cognitive mechanisms underlying these negative biases are not well understood. We investigated a possible mechanism that could maintain negative biases: altered adaptation to emotional faces. Heightened sensitivity to negative emotions could result from weakened adaptation to negative emotions, strengthened adaptation to positive emotions, or both.

Sound-shape correspondence refers to the preferential mapping of information across the senses, such as associating a nonsense word like bouba with rounded abstract shapes and kiki with spiky abstract shapes. Here we focused on audio-tactile (AT) sound-shape correspondences between nonsense words and abstract shapes that are felt but not seen. Despite previous research indicating a role for visual experience in establishing AT associations, it remains unclear how visual experience facilitates AT correspondences.

Sound-shape crossmodal correspondence, the naturally occurring association between abstract visual shapes and nonsense sounds, is one aspect of multisensory processing that strengthens across early childhood. Little is known regarding whether school-aged children exhibit other variants of sound-shape correspondence, such as audio-tactile (AT) associations between tactile shapes and nonsense sounds. Based on previous research in blind individuals suggesting a role for visual experience in establishing sound-shape correspondence, we hypothesized that children would show weaker AT associations than adults and that children's AT associations would be enhanced by visual experience of the shapes.

Correctly assessing the emotional state of others is a crucial part of social interaction. While facial expressions provide much information, faces are often not viewed in isolation, but occur with concurrent sounds, usually voices, which also provide information about the emotion being portrayed. Many studies have examined the crossmodal processing of faces and sounds, but findings have been mixed, with different paradigms yielding different results.

While previous research has investigated key factors contributing to multisensory integration in isolation, relatively little is known regarding how these factors interact, especially when considering the enhancement of visual contrast sensitivity by a task-irrelevant sound. Here we explored how auditory stimulus properties, namely salience and temporal phase coherence in relation to the visual target, jointly affect the extent to which a sound can enhance visual contrast sensitivity. Visual contrast sensitivity was measured by a psychophysical task, where human adult participants reported the location of a visual Gabor pattern presented at various contrast levels.
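
For readers unfamiliar with the stimulus, a Gabor pattern is a sinusoidal luminance grating windowed by a Gaussian envelope. Below is a minimal sketch of how such a patch might be generated at a given contrast; the parameter values (size, wavelength, envelope width) are illustrative assumptions, not taken from the study.

```python
import numpy as np

def gabor_patch(size=256, wavelength=32.0, sigma=40.0,
                orientation_deg=0.0, contrast=0.5):
    """A sinusoidal grating windowed by a Gaussian envelope,
    rendered at the given Michelson contrast on a [0, 1] scale."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    theta = np.deg2rad(orientation_deg)
    # Rotate coordinates so the grating runs along the chosen orientation
    x_rot = x * np.cos(theta) + y * np.sin(theta)
    grating = np.cos(2 * np.pi * x_rot / wavelength)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    # Mean-luminance background of 0.5; contrast scales the modulation
    return 0.5 + 0.5 * contrast * grating * envelope

# Example: a low-contrast patch near a typical detection threshold
patch = gabor_patch(contrast=0.05)
```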

One source of information we glean from everyday experience, one that guides social interaction, is the emotional state of others. Emotional state can be expressed through several modalities: body posture or movements, body odor, touch, facial expression, or the intonation in a voice. Much research has examined emotional processing within one sensory modality or the transfer of emotional processing from one modality to another.

Crossmodal sound-shape correspondence, the association of abstract shapes and nonsense words (e.g., the "bouba-kiki" effect), is seen across cultures and languages.

We used a cross-modal dual task to examine how changing visual-task demands influenced auditory processing, namely auditory thresholds for amplitude- and frequency-modulated sounds. Observers had to attend to two consecutive intervals of sounds and report which interval contained the auditory stimulus that was modulated in amplitude (Experiment 1) or frequency (Experiment 2). During auditory-stimulus presentation, observers simultaneously attended to a rapid sequential visual presentation (two consecutive intervals of streams of visual letters) and had to report which interval contained a particular color (low load, demanding fewer attentional resources) or, in separate blocks of trials, which interval contained more of a target letter (high load, demanding more attentional resources).
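
For reference, amplitude modulation scales a carrier tone's envelope with a slow sinusoid, while frequency modulation swings its instantaneous frequency around the carrier. The sketch below shows one way such tones might be synthesized; the carrier frequency, modulation rate, and modulation depths are illustrative assumptions, not the study's parameters.

```python
import numpy as np

SR = 44100           # sample rate (Hz)
DUR = 1.0            # tone duration (s)
t = np.arange(int(SR * DUR)) / SR

carrier_hz = 1000.0  # illustrative carrier frequency
mod_hz = 8.0         # illustrative modulation rate

# Amplitude modulation: depth m scales a slow sinusoidal envelope
m = 0.5
am_tone = (1 + m * np.sin(2 * np.pi * mod_hz * t)) \
          * np.sin(2 * np.pi * carrier_hz * t)

# Frequency modulation: deviation df (Hz) swings the instantaneous
# frequency; the phase is the integral of fc + df*sin(2*pi*fm*t)
df = 50.0
phase = 2 * np.pi * carrier_hz * t \
        - (df / mod_hz) * np.cos(2 * np.pi * mod_hz * t)
fm_tone = np.sin(phase)
```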

Faces drive our social interactions. A vast literature suggests an interaction between gender and emotional face perception, with studies using different methodologies demonstrating that the gender of a face can affect how emotions are processed. However, how different is our perception of affective male and female faces? Furthermore, how does our current affective state when viewing faces influence our perceptual biases? We presented participants with a series of faces morphed along an emotional continuum from happy to angry.
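
Perceptual bias along such a morph continuum is commonly quantified by fitting a psychometric function to categorization responses and locating the point of subjective equality (PSE). The sketch below illustrates that general approach with hypothetical data; it is not the analysis reported in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data: morph level (0 = happy, 1 = angry) and the
# proportion of trials each level was judged "angry"
morph_level = np.linspace(0.0, 1.0, 7)
p_angry = np.array([0.02, 0.05, 0.20, 0.55, 0.85, 0.95, 0.99])

def logistic(x, pse, slope):
    """Psychometric function: probability of an 'angry' response."""
    return 1.0 / (1.0 + np.exp(-(x - pse) / slope))

(pse, slope), _ = curve_fit(logistic, morph_level, p_angry,
                            p0=[0.5, 0.1])
# The PSE is the morph level judged angry 50% of the time;
# a shift away from 0.5 indicates a perceptual bias.
print(f"PSE = {pse:.2f}, slope = {slope:.2f}")
```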

While some models of how various attributes of a face are processed have posited that face features, both invariant physical cues such as gender or ethnicity and variant social cues such as emotion, may be processed independently (e.g., Bruce and Young, 1986), other models suggest a more distributed representation and interdependent processing (e.g., ...).

Faced with an overwhelming amount of sensory information, we are able to prioritize the processing of select spatial locations and visual features. The neuronal mechanisms underlying such spatial and feature-based selection have been studied in considerable detail. More recent work shows that attention can also be allocated to objects, even spatially superimposed objects composed of dynamically changing features that must be integrated to create a coherent object representation.

Attending to a visual or auditory stimulus often requires irrelevant information to be filtered out, both within the modality attended and in other modalities. For example, attentively listening to a phone conversation can diminish our ability to detect visual events. We used functional magnetic resonance imaging (fMRI) to examine brain responses to visual and auditory stimuli while subjects attended visual or auditory information.

We used psychophysical and functional MRI (fMRI) adaptation to examine how and where the visual configural cues underlying identification of facial ethnicity, gender, and identity are processed. We found that the cortical regions showing selectivity to these cues are distributed widely across the inferior occipital cortex, fusiform areas, and the cingulate gyrus. These regions were not colocalized with areas activated by traditional face area localizer scans.

Previous studies have shown that attention to a particular stimulus feature, such as direction of motion or color, enhances neuronal responses to unattended stimuli sharing that feature. We studied this effect psychophysically by measuring the strength of the motion aftereffect (MAE) induced by an unattended stimulus when attention was directed to one of two overlapping fields of moving dots in a different spatial location. When attention was directed to the same direction of motion as the unattended stimulus, the unattended stimulus induced a stronger MAE than when attention was directed to the opposite direction.

Lesion or inactivation of the superior colliculus (SC) of the cat results in an animal that fails to orient toward peripheral visual stimuli that normally evoke a brisk, reflexive orienting response. A failure to orient toward a visual stimulus could result from a sensory impairment (a failure to detect the visual stimulus) or a motor impairment (an inability to generate the orienting response). Either mechanism could explain the deficit observed during SC inactivation, since neurons in the SC can carry visual sensory signals as well as motor commands involved in generating head and eye movements.
