A number of recent models of semantics combine linguistic information, derived from text corpora, with visual information, derived from image collections, demonstrating that the resulting multimodal models account for behavioral data better than either of their unimodal counterparts. Empirical work on semantic processing has shown that emotion also plays an important role, especially for abstract concepts; however, models integrating emotion along with linguistic and visual information are lacking. Here, we first improve on visual and affective representations derived from existing state-of-the-art models, by choosing the models that best fit available human semantic data and extending the number of concepts they cover.
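As an illustrative sketch only (not the authors' implementation), a multimodal representation of this kind can be approximated by concatenating per-modality vectors for each concept and scoring the fused space against human similarity ratings; the concept set, vector dimensions, modality weights, and ratings below are all hypothetical placeholders.

```python
# Hypothetical sketch: fusing linguistic, visual, and affective vectors per concept
# and evaluating the fused space against human similarity judgments.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

concepts = ["dog", "freedom", "apple", "justice"]
# Placeholder unimodal vectors (in practice: text-based embeddings, image-model
# features, and affective ratings such as valence/arousal/dominance).
linguistic = {c: rng.normal(size=300) for c in concepts}
visual = {c: rng.normal(size=128) for c in concepts}
affective = {c: rng.normal(size=3) for c in concepts}

def fuse(c, w_ling=1.0, w_vis=1.0, w_aff=1.0):
    """Concatenate L2-normalised unimodal vectors, weighted per modality."""
    parts = []
    for vec, w in ((linguistic[c], w_ling), (visual[c], w_vis), (affective[c], w_aff)):
        parts.append(w * vec / np.linalg.norm(vec))
    return np.concatenate(parts)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical human similarity ratings for concept pairs (e.g., on a 1-7 scale).
pairs = [("dog", "apple"), ("freedom", "justice"), ("dog", "justice")]
human = [3.1, 5.8, 1.4]

model = [cosine(fuse(a), fuse(b)) for a, b in pairs]
rho, _ = spearmanr(model, human)
print(f"Spearman correlation with human ratings: {rho:.2f}")
```

In this kind of setup, the fit to behavioral data can be compared across unimodal and multimodal configurations simply by zeroing out the weight of the modality being ablated.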
The contents and structure of semantic memory have been the focus of much recent research, with major advances in the development of distributional models, which use word co-occurrence information as a window into the semantics of language. In parallel, connectionist modeling has extended our knowledge of the processes engaged in semantic activation. However, these two lines of investigation have rarely been brought together.
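To make the co-occurrence idea concrete, here is a minimal window-based counting sketch (a generic distributional scheme, not the specific model used in the work above); the toy corpus and window size are illustrative assumptions.

```python
# Minimal window-based co-occurrence counting: each word is represented by the
# counts of words appearing within +/- 2 positions of it in the corpus.
from collections import Counter, defaultdict

corpus = "the dog chased the cat the cat chased the mouse".split()
window = 2

cooc = defaultdict(Counter)
for i, word in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if j != i:
            cooc[word][corpus[j]] += 1

# Words used in similar contexts (e.g., "cat" and "dog") end up with similar
# count vectors, which is the core assumption of distributional semantics.
print(cooc["cat"])
print(cooc["dog"])
```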
Philos Trans R Soc Lond B Biol Sci, August 2018
Some explanations of abstract word learning suggest that these words are learnt primarily from the linguistic input, using statistical co-occurrences of words in language, whereas concrete words can also rely on non-linguistic, experiential information. According to this hypothesis, we expect that, if the learner is not able to fully exploit the information in the linguistic input, abstract words should be affected more than concrete ones. Embodied approaches instead argue that both abstract and concrete words can rely on experiential information and, therefore, there might not be any linguistic primacy.