Barsalou (1999) proposes that conceptual knowledge is represented by mental simulations containing perceptual information derived from actual experience. Although a substantial number of studies have provided evidence consistent with this view in native-language comprehension, it remains unclear whether non-native language comprehension also involves mental simulation. The current study successfully replicates the shape-match effect in sentence-picture verification (Zwaan et al.
Accounts of human language comprehension propose different mathematical relationships between the contextual probability of a word and how difficult it is to process, including linear, logarithmic, and super-logarithmic ones. However, the empirical evidence favoring any of these over the others is mixed, appearing to vary depending on the index of processing difficulty used and the approach taken to calculate contextual probability. To help disentangle these results, we focus on the mathematical relationship between corpus-derived contextual probability and the N400, a neural index of processing difficulty.
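The three candidate linking functions named above can be sketched concretely. The snippet below is an illustrative formalization, not the study's own code: difficulty is modeled as a function of a word's contextual probability p, with the super-logarithmic variant implemented as surprisal raised to a hypothetical exponent k > 1.

```python
import math

def difficulty(p, link="log", k=1.5):
    """Predicted processing difficulty for a word with contextual
    probability p, under three candidate linking functions.
    The exponent k for the super-logarithmic case is an assumption
    chosen for illustration."""
    if link == "linear":
        return 1.0 - p              # difficulty falls linearly as p rises
    if link == "log":
        return -math.log(p)         # surprisal: -log p
    if link == "superlog":
        return (-math.log(p)) ** k  # grows faster than surprisal
    raise ValueError(f"unknown link: {link}")

# The accounts diverge most for low-probability words:
for p in (0.5, 0.1, 0.01):
    print(p, difficulty(p, "linear"), difficulty(p, "log"), difficulty(p, "superlog"))
```

The point of the comparison is that all three functions are monotonically decreasing in p, so they can only be told apart empirically by how steeply predicted difficulty grows for improbable words.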
A sound evaluation of the cadmium (Cd) mass balance in agricultural soils requires accurate data on Cd leaching. Reported Cd concentrations from in situ studies are often one order of magnitude lower than predicted by empirical models, which were calibrated to pore water data from stored soils. It is hypothesized that this discrepancy is related to the preferential flow of water (non-equilibrium) and/or artefacts caused by drying and rewetting soils prior to pore water analysis.
Theoretical accounts of the N400 are divided as to whether the amplitude of the N400 response to a stimulus reflects the extent to which the stimulus was predicted, the extent to which the stimulus is semantically similar to its preceding context, or both. We use state-of-the-art machine learning tools to investigate which of these three accounts is best supported by the evidence. GPT-3, a neural language model trained to compute the conditional probability of any word based on the words that precede it, was used to operationalize contextual predictability.
Although identifying the referents of single words is often cited as a key challenge for getting word learning off the ground, this framing overlooks the fact that young learners consistently encounter words in the context of other words. How does this company help or hinder word learning? Prior investigations into early word learning from children's real-world language input have yielded conflicting results, with some influential findings suggesting an advantage for words that keep a diverse company of other words, and others suggesting the opposite. Here, we sought to triangulate the source of this conflict, comparing different measures of diversity and approaches to controlling for correlated effects of word frequency across multiple languages.
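One simple way to operationalize the "company" a word keeps is the number of distinct word types observed within a small window around it. The sketch below is an illustrative measure only, not the metric used in the study; the window size is an assumed parameter.

```python
from collections import defaultdict

def contextual_diversity(tokens, window=2):
    """Count, for each word type, the number of distinct word types
    seen within `window` tokens of it. A minimal illustrative measure;
    the window size is an assumption, not taken from the study."""
    company = defaultdict(set)
    for i, w in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                company[w].add(tokens[j])
    return {w: len(neighbors) for w, neighbors in company.items()}

toks = "the dog saw the cat and the dog ran".split()
print(contextual_diversity(toks)["dog"])  # → 4 ({'the', 'saw', 'and', 'ran'})
```

Because high-frequency words accumulate diverse neighbors almost automatically, any such measure must be compared against a frequency-matched baseline, which is exactly the confound the abstract describes controlling for.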