4 results match your criteria: "National Institute of Information and Communications Technology Kyoto[Affiliation]"
Front Hum Neurosci
May 2016
Center of Excellence in Neuroergonomics, Technology, and Cognition, George Mason University, Fairfax, VA, USA.
The goal of this research is to test the potential for neuroadaptive automation to improve response speed to a hazardous event by using a brain-computer interface (BCI) to decode perceptual-motor intention. Seven participants underwent four experimental sessions while brain activity was measured with magnetoencephalography. The first three sessions involved a simple constrained task in which the participant pulled back on the control stick to recover from a perturbation in attitude in one condition and passively observed the perturbation in the other condition.
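The abstract does not specify the decoding pipeline, but the intention decoding it describes is commonly framed as binary classification of epoched MEG trials. The sketch below is an illustrative assumption, not the authors' method; the array shapes, trial and sensor counts, and the choice of a shrinkage-regularized linear discriminant (via scikit-learn) are placeholders.

```python
# Illustrative sketch only: decoding "pull back on the stick" intention vs.
# passive observation from epoched MEG trials with a regularized linear
# classifier. X, y, trial counts, and sensor counts are placeholder assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((140, 204 * 60))   # 140 trials x (204 sensors * 60 time samples), simulated
y = rng.integers(0, 2, size=140)           # 0 = passive observation, 1 = perceptual-motor intention

clf = make_pipeline(StandardScaler(),
                    LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto"))
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated decoding accuracy
print(f"mean decoding accuracy: {scores.mean():.2f}")
```

Shrinkage LDA is shown only because it copes well with many correlated sensor features relative to the number of trials; any comparable linear decoder would serve the same illustrative purpose.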
Front Syst Neurosci
March 2015
Multisensory Cognition and Computation Laboratory, Universal Communication Research Institute, National Institute of Information and Communications Technology, Kyoto, Japan.
Application of neuro-augmentation technology based on dry-wireless EEG may be considerably beneficial for aviation and space operations because of the inherent dangers involved. In this study, we evaluate the classification performance of perceptual events using a dry-wireless EEG system during motion-platform-based flight simulation and actual flight in an open-cockpit biplane, to determine whether the system can be used in the presence of considerable environmental and physiological artifacts. A passive task involving 200 random auditory presentations of a chirp sound was used for evaluation.
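As a rough illustration of how single-trial responses to such a chirp task might be epoched and classified, here is a minimal sketch on simulated data using MNE-Python and scikit-learn. The channel count, sampling rate, event timing, and classifier are assumptions made for the example, not the system or analysis reported in the study.

```python
# Minimal, illustrative sketch (not the authors' pipeline): epoch simulated
# "dry-wireless EEG" around auditory events and estimate single-trial
# classification accuracy of chirp vs. non-chirp trials.
import numpy as np
import mne
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
sfreq, n_ch, n_sec = 250.0, 14, 220                      # assumed dry-EEG montage and recording length
info = mne.create_info([f"EEG{i:02d}" for i in range(n_ch)], sfreq, ch_types="eeg")
raw = mne.io.RawArray(rng.standard_normal((n_ch, int(sfreq * n_sec))) * 1e-5, info)

onsets = np.arange(1, 201) * int(sfreq)                  # 200 presentations, one per second (assumed)
labels = rng.integers(1, 3, size=200)                    # 1 = chirp, 2 = other (placeholder labels)
events = np.column_stack([onsets, np.zeros(200, int), labels])

epochs = mne.Epochs(raw, events, event_id={"chirp": 1, "other": 2},
                    tmin=-0.1, tmax=0.5, baseline=(None, 0), preload=True)
X = epochs.get_data().reshape(len(epochs), -1)           # trials x (channels * samples)
y = epochs.events[:, 2]

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

With real recordings, the simulated RawArray would be replaced by the recorded EEG and the event array by the logged chirp onsets.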
Front Neurosci
September 2014
Laurier Centre for Cognitive Neuroscience and Department of Psychology, Wilfrid Laurier University, Waterloo, ON, Canada.
Brain imaging studies indicate that speech motor areas are recruited for auditory speech perception, especially when intelligibility is low due to environmental noise or when speech is accented. The purpose of the present study was to determine the relative contribution of brain regions to the processing of speech containing phonetic categories from one's own language, speech with accented samples of one's native phonetic categories, and speech with unfamiliar phonetic categories. To that end, native English and Japanese speakers identified the speech sounds /r/ and /l/ produced by native English speakers (unaccented) and Japanese speakers (foreign-accented) while their brain activity was measured with functional magnetic resonance imaging.
Front Psychol
May 2014
Center for Information and Neural Networks, National Institute of Information and Communications Technology, Osaka University, Osaka, Japan; Multisensory Cognition and Computation Laboratory, Universal Communication Research Institute, National Institute of Information and Communications Technology, Kyoto, Japan.
Behavioral and neuroimaging studies have demonstrated that brain regions involved with speech production also support speech perception, especially under degraded conditions. The premotor cortex (PMC) has been shown to be active during both observation and execution of action ("Mirror System" properties), and may facilitate speech perception by mapping unimodal and multimodal sensory features onto articulatory speech gestures. In this functional magnetic resonance imaging (fMRI) study, participants identified vowels produced by a speaker in audio-visual (saw the speaker's articulating face and heard her voice), visual-only (only saw the speaker's articulating face), and audio-only (only heard the speaker's voice) conditions with varying audio signal-to-noise ratios, in order to determine the regions of the PMC involved in multisensory and modality-specific processing of visual speech gestures.
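The abstract mentions varying audio signal-to-noise ratios but not how the stimuli were mixed; a common way to impose a target SNR is to rescale the noise track relative to the speech power, as in the hedged sketch below. The mixing rule and the SNR levels shown are illustrative assumptions, not the study's actual procedure.

```python
# Hedged sketch (not from the paper): mix a speech signal with noise at a
# chosen signal-to-noise ratio (SNR) by rescaling the noise track.
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, target_snr_db: float) -> np.ndarray:
    """Return speech + noise, with the noise rescaled to the requested SNR in dB."""
    p_speech = np.mean(speech ** 2)                  # signal power
    p_noise = np.mean(noise ** 2)                    # noise power before scaling
    scale = np.sqrt(p_speech / (p_noise * 10 ** (target_snr_db / 10)))
    return speech + scale * noise

rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)                  # placeholder 1-s "speech" waveform at 16 kHz
noise = rng.standard_normal(16000)                   # placeholder noise track
mixes = {snr: mix_at_snr(speech, noise, snr) for snr in (-12, -6, 0, 6)}  # illustrative SNR levels
```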