Auditory feedback of one's own speech is used to monitor and adaptively control fluent speech production. A new study in PLOS Biology uses electrocorticography (ECoG) in speakers whose auditory feedback was artificially delayed to identify brain regions involved in monitoring speech production.
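The core manipulation in such delayed-auditory-feedback experiments is simply shifting the speaker's own voice later in time before it reaches the ears. A minimal sketch of that delay operation (function name, sample rate, and delay value are illustrative, not taken from the study):

```python
import numpy as np

def delay_signal(signal, delay_ms, sample_rate=16000):
    """Shift a mono signal later in time by delay_ms, zero-padding the onset.

    Illustrates the kind of artificial feedback delay used in
    delayed-auditory-feedback paradigms; parameters are hypothetical.
    """
    delay_samples = int(round(delay_ms / 1000.0 * sample_rate))
    # Prepend silence so every sample arrives delay_ms later.
    return np.concatenate([np.zeros(delay_samples), signal])

# Example: a 200 ms delay at 16 kHz prepends 3200 zero-valued samples.
speech = np.random.randn(16000)  # 1 s of stand-in "speech"
delayed = delay_signal(speech, 200)
```

In a real-time experiment the same shift would be implemented with a ring buffer on the live audio stream rather than on a prerecorded array.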
Humans can easily recognize the motion of living creatures using only a handful of point-lights that describe the motion of the main joints (biological motion perception). This special ability to perceive the motion of animate objects signifies the importance of spatiotemporal information in perceiving biological motion. The posterior STS (pSTS) and posterior middle temporal gyrus (pMTG) regions have been established by many functional neuroimaging studies as a locus for biological motion perception.
In spatial perception, visual information has higher acuity than auditory information, and we often misperceive sound-source locations when spatially disparate visual stimuli are presented simultaneously. Ventriloquists make good use of this auditory illusion. In this study, we investigated neural substrates of the ventriloquism effect to understand the neural mechanism of multimodal integration.
Brain imaging studies indicate that speech motor areas are recruited for auditory speech perception, especially when intelligibility is low due to environmental noise or when speech is accented. The purpose of the present study was to determine the relative contribution of brain regions to the processing of speech containing phonetic categories from one's own language, speech with accented samples of one's native phonetic categories, and speech with unfamiliar phonetic categories. To that end, native English and Japanese speakers identified the speech sounds /r/ and /l/ that were produced by native English speakers (unaccented) and Japanese speakers (foreign-accented) while functional magnetic resonance imaging measured their brain activity.
Behavioral and neuroimaging studies have demonstrated that brain regions involved with speech production also support speech perception, especially under degraded conditions. The premotor cortex (PMC) has been shown to be active during both observation and execution of action ("Mirror System" properties), and may facilitate speech perception by mapping unimodal and multimodal sensory features onto articulatory speech gestures. In this functional magnetic resonance imaging (fMRI) study, participants identified vowels produced by a speaker in audio-visual (saw the speaker's articulating face and heard her voice), visual-only (only saw the speaker's articulating face), and audio-only (only heard the speaker's voice) conditions with varying audio signal-to-noise ratios, in order to determine the regions of the PMC involved with multisensory and modality-specific processing of visual speech gestures.
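Constructing the degraded audio conditions amounts to scaling a masking noise so that the speech-to-noise power ratio hits a chosen value in dB. A minimal sketch of that SNR mixing step (function and variable names are illustrative; the study's actual stimulus pipeline is not described here):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale noise so the speech-to-noise power ratio equals snr_db, then mix.

    Illustrative sketch of SNR-controlled stimulus construction; not the
    authors' actual code.
    """
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Target noise power implied by the requested SNR (dB uses power ratio).
    target_p_noise = p_speech / (10 ** (snr_db / 10.0))
    noise_scaled = noise * np.sqrt(target_p_noise / p_noise)
    return speech + noise_scaled

rng = np.random.default_rng(0)
speech = rng.standard_normal(8000)
noise = rng.standard_normal(8000)
mixed = mix_at_snr(speech, noise, 6.0)  # 6 dB SNR condition
```

Sweeping `snr_db` from high to low values yields the graded intelligibility manipulation the abstract describes.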
Although sound reverberation is considered a nuisance variable in most studies investigating auditory processing, it can serve as a cue for loudness constancy, a phenomenon describing constant loudness perception in spite of changing sound-source distance. In this study, we manipulated room reverberation characteristics to test their effect on psychophysical loudness constancy, and we used magnetoencephalography in human subjects to look for neural responses reflecting loudness constancy. Psychophysically, we found that loudness constancy was present in strong, but not weak, reverberation conditions.
In this fMRI study we investigate neural processes related to the action observation network using a complex perceptual-motor task in pilots and non-pilots. The task involved landing a glider (using aileron, elevator, rudder, and dive brake) as close to a target as possible, passively observing a replay of one's own previous trial, passively observing a replay of an expert's trial, and a baseline do-nothing condition. The objective of this study is to investigate two types of motor simulation processes used during observation of action: imitation-based motor simulation and error-feedback-based motor simulation.
When we listen to sounds through headphones without utilizing special transforms, sound sources seem to be located inside our heads. The sound sources are said to be lateralized to one side or the other to varying degrees. This internal lateralization is different from sound-source localization in the natural environment, in which the sound is localized distal to the head.
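The lateralization the abstract describes can be induced over headphones with an interaural level difference (ILD) alone: playing the same signal to both ears but louder in one pulls the internal image toward the louder ear. A minimal sketch, assuming a simple symmetric gain split (function name and gain convention are illustrative):

```python
import numpy as np

def lateralize_ild(mono, ild_db):
    """Create a stereo pair whose interaural level difference is ild_db.

    Positive ild_db makes the right channel louder, lateralizing the
    image toward the right ear. Illustrative sketch only; real spatial
    audio also uses interaural time differences and HRTF filtering.
    """
    g = 10 ** (ild_db / 20.0)          # amplitude ratio right/left
    left = mono / np.sqrt(g)            # split the gain symmetrically
    right = mono * np.sqrt(g)
    return np.stack([left, right])      # shape: (2, n_samples)

tone = np.sin(2 * np.pi * 440 * np.arange(4410) / 44100.0)
stereo = lateralize_ild(tone, 10.0)    # image lateralized to the right
```

Externalized (out-of-head) localization, by contrast, additionally requires the spectral and timing cues that head-related transfer functions supply.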
This fMRI study investigates brain regions involved with processing dynamic visuomotor representational transformations. The perceptual-motor task involved flying (or observing) a plane through a simulated Red Bull Air Race course in first-person and third-person chase perspectives. The third-person perspective is akin to remote operation of a vehicle.
Articulatory goals have long been proposed to mediate perception. Examples include direct realist and constructivist (analysis by synthesis) theories of speech perception. Although the activity in brain regions involved with action production has been shown to be present during action observation (Mirror Neuron System), the relationship of this activity to perceptual performance has not been clearly demonstrated at the event level.
We focused on brain areas activated by audiovisual stimuli related to swallowing motions. In this study, three kinds of stimuli related to human swallowing movement (auditory stimuli alone, visual stimuli alone, or audiovisual stimuli) were presented to the subjects, and activated brain areas were measured using fMRI and analyzed. When auditory stimuli alone were presented, the supplementary motor area was activated.
Hum Brain Mapp, September 2009
Neural correlates of driving and of decision making have been investigated separately, but little is known about the underlying neural mechanisms of decision making in driving. Previous research discusses two types of decision making: reward-weighted decision making and cost-weighted decision making. Many neuroimaging studies have examined reward-weighted decision making, but few have examined cost-weighted decision making.
Neural processes underlying identification of durational contrasts were studied by comparing English and Japanese speakers on Japanese short/long vowel identification relative to consonant identification. Enhanced activity for the non-native contrast (Japanese short/long vowel identification by English speakers) was observed in brain regions involved with articulatory-auditory mapping (Broca's area, superior temporal gyrus, planum temporale, and cerebellum), but not in the supramarginal gyrus. Greater activity in the supramarginal gyrus for consonant identification over short/long vowel identification by Japanese speakers implies that this region is more important for phonetic contrasts differing in place of articulation than for contrasts in vowel duration.
This 3-T fMRI study investigates brain regions similarly and differentially involved with listening and covert production of singing relative to speech. Given the greater use of auditory-motor self-monitoring and imagery with respect to consonance in singing, brain regions involved with these processes are predicted to be differentially active for singing more than for speech. The stimuli consisted of six Japanese songs.
The left fusiform gyrus and left angular gyrus are thought to be involved with visual form processing and with associating visual and auditory (phonological) information in reading, respectively. However, a number of studies fail to show the contribution of these regions to these aspects of reading. Considerable differences in the types of stimuli and tasks used in the various studies may account for the discrepancy in results.
This experiment investigates neural processes underlying perceptual identification of the same phonemes for native- and second-language speakers. A model is proposed implicating the use of articulatory-auditory and articulatory-orosensory mappings to facilitate perceptual identification under conditions in which the phonetic contrast is ambiguous, as in the case of second-language speakers. In contrast, native-language speakers are predicted to use auditory-based phonetic representations to a greater extent for perceptual identification than second-language speakers.
Perception of speech is improved when presentation of the audio signal is accompanied by concordant visual speech gesture information. This enhancement is most prevalent when the audio signal is degraded. One potential means by which the brain affords this perceptual enhancement is the integration of concordant information from multiple sensory channels at common sites of convergence: multisensory integration (MSI) sites.
This fMRI study explores brain regions involved with perceptual enhancement afforded by observation of visual speech gesture information. Subjects passively identified words presented in the following conditions: audio-only, audiovisual, audio-only with noise, audiovisual with noise, and visual-only. The brain may use concordant audio and visual information to enhance perception by integrating the information at a converging multisensory site.
Adult native Japanese speakers have difficulty perceiving the English /r-l/ phonetic contrast even after years of exposure. However, after extensive perceptual identification training, long-lasting improvement in identification performance can be attained. This fMRI study investigates localized changes in brain activity associated with 1 month of extensive feedback-based perceptual identification training by native Japanese speakers learning the English /r-l/ phonetic contrast.