Recent research has shown that the internal dynamics of an artificial neural network model of sentence comprehension display a pattern similar to the amplitude of the N400 across several conditions known to modulate this event-related potential. These results led Rabovsky et al. (2018) to suggest that the N400 might reflect the change in an implicit predictive representation of meaning, corresponding to a semantic prediction error.
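As a minimal sketch of the idea, assuming a recurrent sentence-comprehension model that exposes a "meaning" layer (the model interface, the method names `meaning_state` and `step`, and the choice of summed absolute difference are all illustrative assumptions, not the authors' implementation), the N400 correlate can be approximated as the size of the update to that layer when each new word is processed:

```python
import numpy as np

def semantic_update(meaning_before: np.ndarray, meaning_after: np.ndarray) -> float:
    """Size of the change in the model's meaning representation caused by one word.

    Rabovsky et al. (2018) relate the N400 to a change of this kind; here the change
    is simply the summed absolute difference between the meaning-layer activations
    before and after the word is presented (an illustrative choice of metric).
    """
    return float(np.sum(np.abs(meaning_after - meaning_before)))

def n400_profile(model, sentence):
    """Word-by-word update magnitudes for a sentence.

    `model` is any hypothetical comprehension network exposing its meaning-layer
    activation via `model.meaning_state()` and consuming one word at a time via
    `model.step(word)` -- both names are assumptions made for this sketch.
    """
    updates = []
    for word in sentence:
        before = model.meaning_state().copy()
        model.step(word)
        updates.append(semantic_update(before, model.meaning_state()))
    return updates
```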
Finding the structure of a sentence (the way its words hold together to convey meaning) is a fundamental step in language comprehension. Several brain regions, including the left inferior frontal gyrus, the left posterior superior temporal gyrus, and the left anterior temporal pole, are thought to support this operation. The exact role of these areas nonetheless remains debated.
Language comprehension involves the simultaneous processing of information at the phonological, syntactic, and lexical levels. We track these three distinct streams of information in the brain by using stochastic measures derived from computational language models to detect neural correlates of phoneme, part-of-speech, and word processing in an fMRI experiment. Probabilistic language models have proven to be useful tools for studying how language is processed as a sequence of symbols unfolding in time.
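To make the kind of stochastic measure involved concrete, here is a small sketch of word surprisal under an add-alpha smoothed bigram model; the toy corpus and bigram model are illustrative stand-ins for the much larger probabilistic models used in studies of this kind, and the regression of these values against the fMRI signal is not shown:

```python
import math
from collections import Counter

# Toy corpus standing in for the language model's training text.
corpus = "the dog chased the cat the cat saw the dog".split()

bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def surprisal(prev: str, word: str, alpha: float = 1.0) -> float:
    """Surprisal -log2 P(word | prev) under an add-alpha smoothed bigram model."""
    vocab = len(unigrams)
    p = (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * vocab)
    return -math.log2(p)

# Word-by-word surprisal values like these can serve as per-word regressors
# against the neural signal.
sentence = "the dog saw the cat".split()
values = [surprisal(p, w) for p, w in zip(sentence, sentence[1:])]
print(list(zip(sentence[1:], values)))
```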
Embodiment theory predicts that mental imagery of object words recruits neural circuits involved in object perception. The degree of visual imagery present in routine thought, and how it is encoded in the brain, is largely unknown. We test whether fMRI activity patterns elicited by participants reading object names include embodied visual-object representations, and whether these representations can be decoded using novel computational image-based semantic models.
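As a rough illustration of what decoding with an image-based semantic model can look like (the synthetic data, dimensionalities, and ridge-regression decoder below are assumptions for this sketch, not the study's method or materials), one can learn a linear map from voxel patterns to image-derived semantic features and test whether it generalizes across held-out words:

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins: 60 object words, 200 voxels, 50-dimensional image-based
# semantic features (e.g. visual features averaged over pictures of each object).
rng = np.random.default_rng(0)
semantic_features = rng.normal(size=(60, 50))        # one row per object word
true_mapping = rng.normal(size=(50, 200))
voxel_patterns = semantic_features @ true_mapping + rng.normal(scale=5.0, size=(60, 200))

# "Decoding" here means learning a linear map from brain activity back to the
# image-based semantic features and checking cross-validated generalization.
decoder = RidgeCV(alphas=np.logspace(-2, 4, 13))
scores = [
    cross_val_score(decoder, voxel_patterns, semantic_features[:, d], cv=5).mean()
    for d in range(semantic_features.shape[1])
]
print(f"mean cross-validated R^2 across feature dimensions: {np.mean(scores):.2f}")
```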