An integral part of human language is the capacity to extract meaning from spoken and written words, but the precise relationship between brain representations of information perceived by listening versus reading is unclear. Prior neuroimaging studies have shown that semantic information in spoken language is represented in multiple regions of the human cerebral cortex, while amodal semantic information appears to be represented in a few broad brain regions. However, previous studies were too insensitive to determine whether semantic representations are shared at a fine level of detail rather than merely at a coarse scale. We used fMRI to record brain activity in two separate experiments while participants listened to or read several hours of the same narrative stories, and then created voxelwise encoding models to characterize semantic selectivity in each voxel and in each individual participant. We find that semantic tuning during listening and reading is highly correlated in most semantically selective regions of cortex, and that models estimated using one modality accurately predict voxel responses in the other modality. These results suggest that the representation of language semantics is independent of the sensory modality through which the semantic information is received.

Humans can comprehend the meaning of words from both spoken and written language. It is therefore important to understand the relationship between the brain representations of spoken and written text. Here, we show that although the representation of semantic information in the human brain is quite complex, the semantic representations evoked by listening versus reading are almost identical.


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6764208
DOI: http://dx.doi.org/10.1523/JNEUROSCI.0675-19.2019

Publication Analysis

Top Keywords

listening versus (12); versus reading (12); spoken written (12); semantic (9); representation semantic (8); semantic human (8); human cerebral (8); cerebral cortex (8); meaning spoken (8); relationship brain (8)

Similar Publications

Acoustic-phonetic perception refers to the ability to perceive and discriminate between speech sounds. Acquired impairment of acoustic-phonetic perception is known historically as "pure word deafness" and typically follows bilateral lesions of the cortical auditory system. The extent to which this deficit occurs after unilateral left hemisphere damage and the critical left hemisphere areas involved are not well defined.


Objectives: To investigate the influence of frequency-specific audibility on audiovisual benefit in children, this study examined the impact of high- and low-pass acoustic filtering on auditory-only and audiovisual word and sentence recognition in children with typical hearing. Previous studies show that visual speech provides greater access to consonant place of articulation than to other consonant features, and that low-pass filtering strongly impairs perception of acoustic cues to consonant place of articulation. This suggests that visual speech may be particularly useful when acoustic speech is low-pass filtered, because it provides complementary information about consonant place of articulation.


Introduction: In recent years, podcasts have been increasingly deployed in medical education. However, studies often fail to evaluate the learning outcomes from these podcasts effectively. The aim of this study was to determine whether the active production of podcasts enhances students' knowledge compared with the passive consumption of student-produced podcasts, on the premise that production increases engagement with the learning content through active learning.


Background: Fragile X syndrome (FXS) is a leading known genetic cause of intellectual disability and of behaviors associated with autism spectrum disorders (ASD). A consistent and debilitating phenotype of FXS is auditory hypersensitivity, which may lead to delayed language and high anxiety. Consistent with findings in human FXS studies, the mouse model of FXS, the Fmr1 knockout (KO) mouse, shows auditory hypersensitivity and temporal processing deficits.


Background: Cochlear implantation is an effective method of auditory rehabilitation. Nevertheless, outcomes show individual variation depending on several factors.

Aim: To evaluate cochlear implantation results based on the APCEI profile (Acceptance, Perception, Comprehension, Oral Expression and Intelligibility) and audiometric results.

