A number of recent models of semantics combine linguistic information, derived from text corpora, with visual information, derived from image collections, demonstrating that the resulting multimodal models account for behavioral data better than either of their unimodal counterparts. Empirical work on semantic processing has shown that emotion also plays an important role, especially for abstract concepts; however, models integrating emotion along with linguistic and visual information are lacking. Here, we first improve on visual and affective representations, derived from state-of-the-art existing models, by choosing models that best fit available human semantic data and extending the number of concepts they cover.
The contents and structure of semantic memory have been the focus of much recent research, with major advances in the development of distributional models, which use word co-occurrence information as a window into the semantics of language. In parallel, connectionist modeling has extended our knowledge of the processes engaged in semantic activation. However, these two lines of investigation have rarely been brought together.