Lexical co-occurrence models of semantic memory form representations of the meaning of a word on the basis of the number of times that pairs of words occur near one another in a large body of text. These models offer a distinct advantage over models that require the collection of a large number of judgments from human subjects, since the construction of the representations can be completely automated. Unfortunately, word frequency, a well-known predictor of reaction time in several cognitive tasks, has a strong effect on the co-occurrence counts in a corpus. Two words with high frequency are more likely to occur together purely by chance than are two words that occur very infrequently. In this article, we examine a modification of a successful method for constructing semantic representations from lexical co-occurrence. We show that our new method eliminates the influence of frequency, while still capturing the semantic characteristics of words.
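The abstract does not describe the authors' specific modification, so the sketch below is only an illustration of the underlying frequency problem: raw co-occurrence counts grow with the product of the two words' frequencies, and a normalization such as pointwise mutual information divides that expected-by-chance count back out. The function names, the window size, and the choice of PMI are assumptions made for this example, not the method reported in the article.

```python
from collections import Counter
import math

def cooccurrence_counts(tokens, window=5):
    """Count how often each pair of words appears within `window` tokens of each other."""
    counts = Counter()
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + 1 + window, len(tokens))):
            counts[tuple(sorted((w, tokens[j])))] += 1
    return counts

def pmi(counts, tokens):
    """Pointwise mutual information: log of the observed co-occurrence probability
    over the probability expected by chance from the two words' frequencies alone."""
    total_pairs = sum(counts.values())
    freqs = Counter(tokens)
    total_tokens = len(tokens)
    scores = {}
    for (w1, w2), c in counts.items():
        p_pair = c / total_pairs          # observed co-occurrence probability
        p1 = freqs[w1] / total_tokens     # marginal probability of word 1
        p2 = freqs[w2] / total_tokens     # marginal probability of word 2
        scores[(w1, w2)] = math.log2(p_pair / (p1 * p2))
    return scores

# Toy usage: frequent words ("the") co-occur often in raw counts,
# but PMI discounts pairs whose counts are explained by frequency alone.
tokens = "the cat sat on the mat and the dog sat on the rug".split()
scores = pmi(cooccurrence_counts(tokens, window=3), tokens)
```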
DOI: http://dx.doi.org/10.3758/brm.40.3.705