In the present study, we used eye-tracking to investigate formality-register and morphosyntactic congruence during sentence reading. While research frequently covers participants' processing of lexical, (morpho-)syntactic, or semantic knowledge (e.g.
Front Psychol
September 2023
Events are not isolated but rather linked to one another in various dimensions. In language processing, various sources of information, including real-world knowledge, (representations of) current linguistic input, and non-linguistic visual context, help establish causal connections between events. In this review, we discuss causal inference in relation to events and event knowledge as one aspect of world knowledge, and their representations in language comprehension.
In the present review paper by members of the collaborative research center "Register: Language Users' Knowledge of Situational-Functional Variation" (CRC 1412), we assess the pervasiveness of register phenomena across different time periods, languages, modalities, and cultures. We define "register" as recurring variation in language use depending on the function of language and on the social situation. Informed by rich data, we aim to better understand and model the knowledge involved in situation- and function-based use of language register.
Behav Res Methods
October 2023
In this paper, we discuss key characteristics and typical experimental designs of the visual-world paradigm and compare different methods of analysing eye-movement data. We discuss the nature of the eye-movement data from a visual-world study and provide data analysis tutorials on ANOVA, t-tests, linear mixed-effects model, growth curve analysis, cluster-based permutation analysis, bootstrapped differences of timeseries, generalised additive modelling, and divergence point analysis to enable psycholinguists to apply each analytical method to their own data. We discuss advantages and disadvantages of each method and offer recommendations about how to select an appropriate method depending on the research question and the experimental design.
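One of the methods surveyed above, cluster-based permutation analysis, can be sketched in a few lines. The sketch below is illustrative only: the data are simulated, and the bin counts, threshold, and effect window are invented assumptions, not values from the tutorial.

```python
# Hypothetical sketch of a cluster-based permutation test on simulated
# visual-world fixation data (all names and values are assumptions).
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_bins = 20, 50
# Simulated per-subject fixation-proportion differences (condition A - B)
# per time bin; a real effect is injected into bins 25-39.
diff = rng.normal(0, 0.08, (n_subj, n_bins))
diff[:, 25:40] += 0.12

def max_cluster_mass(d, thresh=2.0):
    """Largest summed |t| over contiguous bins whose |t| exceeds thresh."""
    t = d.mean(0) / (d.std(0, ddof=1) / np.sqrt(len(d)))
    mass, best = 0.0, 0.0
    for ti in t:
        # Extend the current cluster, or reset when below threshold.
        mass = mass + abs(ti) if abs(ti) > thresh else 0.0
        best = max(best, mass)
    return best

observed = max_cluster_mass(diff)
# Build a null distribution by randomly flipping the sign of each
# subject's whole difference curve (exchangeability under H0).
null = np.array([
    max_cluster_mass(diff * rng.choice([-1, 1], (n_subj, 1)))
    for _ in range(1000)
])
p = (null >= observed).mean()
```

Comparing the observed cluster mass against the permutation null controls the family-wise error rate across time bins without testing each bin separately, which is why this method suits densely sampled eye-movement timeseries.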
Front Psychol
October 2021
In interpreting spoken sentences in event contexts, comprehenders both integrate their current interpretation of language with the recent past (e.g., events they have witnessed) and develop expectations about future event possibilities.
Research findings on language comprehension suggest that many kinds of non-linguistic cues can rapidly affect language processing. Extant processing accounts of situated language comprehension model these rapid effects and are only beginning to accommodate the role of non-linguistic emotional cues. To begin with a detailed characterization of distinct cues and their relative effects, three visual-world eye-tracking experiments assessed the relative importance of two cue types (action depictions vs.
Abundant empirical evidence suggests that visual perception and motor responses are involved in language comprehension ('grounding'). However, when modeling the grounding of sentence comprehension on a word-by-word basis, linguistic representations and cognitive processes are rarely made fully explicit. This article reviews representational formalisms and associated (computational) models with a view to accommodating incremental and compositional grounding effects.
When a word is used metaphorically (for example "walrus" in the sentence "The president is a walrus"), some features of that word's meaning ("very fat," "slow-moving") are carried across to the metaphoric interpretation while other features ("has large tusks," "lives near the north pole") are not. What happens to these features that relate only to the literal meaning during processing of novel metaphors? In four experiments, the present study examined the role of the feature of physical containment during processing of verbs of physical containment. That feature is used metaphorically to signify difficulty, such as "fenced in" in the sentence "The journalist's opinion was fenced in after the change in regime."
Age has been shown to influence language comprehension, with delays, for instance, in older adults' expectations about upcoming information. We examined to what extent expectations about upcoming event information (who-does-what-to-whom) change across the lifespan (in 4- to 5-year-old children, younger, and older adults) and as a function of different world-language relations. In a visual-world paradigm, participants in all three age groups inspected a speaker whose facial expression was either smiling or sad.
Acta Psychol (Amst)
September 2019
When comprehending a spoken sentence that refers to a visually-presented event, comprehenders both integrate their current interpretation of language with the recent event and develop expectations about future event possibilities. Tense cues can disambiguate this linking, but temporary ambiguity in these cues may lead comprehenders to also rely on further, experience-based (e.g.
Language and vision interact in non-trivial ways. Linguistically, spatial utterances are often asymmetrical as they relate more stable objects (reference objects) to less stable objects (located objects). Researchers have claimed that such linguistic asymmetry should also be reflected in the allocation of visual attention when people process a depicted spatial relation described by spatial language.
The present work is a description and an assessment of a methodology designed to quantify different aspects of the interaction between language processing and the perception of the visual world. The recording of eye-gaze patterns has provided good evidence for the contribution of both the visual context and linguistic/world knowledge to language comprehension. Initial research assessed object-context effects to test theories of modularity in language processing.
Existing evidence has shown a processing advantage (or facilitation) when representations derived from a non-linguistic context (spatial proximity depicted by gambling cards moving together) match the semantic content of an ensuing sentence. A match, inspired by conceptual metaphors such as 'similarity is closeness', would, for instance, involve the cards moving closer together while the sentence relates similarity between abstract concepts such as war and battle. However, other studies have reported a disadvantage (or interference) for congruence between the semantic content of a sentence and representations of spatial distance derived from this sort of non-linguistic context.
Language-processing accounts are beginning to accommodate different visual context effects, but they remain underspecified regarding differences between cues, both during sentence comprehension and subsequent recall. We monitored participants' eye movements to mentioned characters while they listened to transitive sentences. We varied whether speaker gaze, a depicted action, neither, or both of these visual cues were available, as well as whether both cues were deictic (Experiment 1) or only speaker gaze (Experiment 2).
A growing body of findings suggests a tight temporal coupling between (non-linguistic) socially interpreted context and language processing. Still, real-time language processing accounts remain largely underspecified with respect to the influence of biological (e.g.
Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g.
Spatial terms such as "above", "in front of", and "on the left of" are all essential for describing the location of one object relative to another object in everyday communication. Apprehending such spatial relations involves relating linguistic to object representations by means of attention. This requires at least one attentional shift, and models such as the Attentional Vector Sum (AVS) predict the direction of that attention shift, from the sausage to the box for spatial utterances such as "The box is above the sausage".
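The vector-sum idea behind models like AVS can be sketched numerically: vectors from points on the reference object to the located object are summed, weighted by attention that decays with distance from an attentional focus. The sketch below is a deliberate simplification of the actual AVS model (it omits, e.g., the model's height function), and the function name, decay, and coordinates are illustrative assumptions.

```python
# Simplified vector-sum sketch in the spirit of AVS-style models
# (not the published AVS model; decay and geometry are assumptions).
import numpy as np

def vector_sum_direction(ref_points, located_point, focus):
    """Direction (degrees) of the attention-weighted sum of vectors
    from reference-object points to the located object."""
    ref = np.asarray(ref_points, float)
    # Attention weights decay exponentially with distance from the focus.
    d = np.linalg.norm(ref - np.asarray(focus, float), axis=1)
    w = np.exp(-d)
    vecs = np.asarray(located_point, float) - ref
    total = (w[:, None] * vecs).sum(axis=0)
    return np.degrees(np.arctan2(total[1], total[0]))

# Located object centred directly above a horizontal reference bar:
angle = vector_sum_direction(
    [(x, 0.0) for x in range(5)],  # reference object: five points on a bar
    (2.0, 3.0),                    # located object
    (2.0, 0.0),                    # attentional focus on the bar's midpoint
)
# approximately 90 degrees, i.e. straight up
```

Because the configuration is symmetric about the focus, the horizontal components cancel and the summed vector points straight up; comparing that direction to canonical "above" is, roughly, how such models grade the acceptability of a spatial term.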
Extant accounts of visually situated language processing do make general predictions about visual context effects on incremental sentence comprehension; these, however, are not sufficiently detailed to accommodate potentially different visual context effects (such as a scene-sentence mismatch based on actions versus thematic role relations; e.g., Altmann & Kamide, 2007; Knoeferle & Crocker, 2007; Taylor & Zwaan, 2008; Zwaan & Radvansky, 1998).
A large body of evidence has shown that visual context information can rapidly modulate language comprehension for concrete sentences and when it is mediated by a referential or a lexical-semantic link. What has not yet been examined is whether visual context can also modulate comprehension of abstract sentences incrementally when it is neither referenced by, nor lexically associated with, the sentence. Three eye-tracking reading experiments examined the effects of spatial distance between words (Experiment 1) and objects (Experiment 2 and 3) on participants' reading times for sentences that convey similarity or difference between two abstract nouns (e.
We report two visual-world eye-tracking experiments that investigated how and with which time course emotional information from a speaker's face affects younger (N = 32, Mean age = 23) and older (N = 32, Mean age = 64) listeners' visual attention and language comprehension as they processed emotional sentences in a visual context. The age manipulation tested predictions by socio-emotional selectivity theory of a positivity effect in older adults. After viewing the emotional face of a speaker (happy or sad) on a computer display, participants were presented simultaneously with two pictures depicting opposite-valence events (positive and negative; IAPS database) while they listened to a sentence referring to one of the events.
During comprehension, a listener can rapidly follow a frontally seated speaker's gaze to an object before its mention, a behavior which can shorten latencies in speeded sentence verification. However, the robustness of gaze-following, its interaction with core comprehension processes such as syntactic structuring, and the persistence of its effects are unclear. In two "visual-world" eye-tracking experiments participants watched a video of a speaker, seated at an angle, describing transitive (non-depicted) actions between two of three Second Life characters on a computer screen.
Eye-tracking findings suggest people prefer to ground their spoken language comprehension by focusing on recently seen events more than anticipating future events: When the verb in NP1-VERB-ADV-NP2 sentences was referentially ambiguous between a recently depicted and an equally plausible future clipart action, listeners fixated the target of the recent action more often at the verb than the object that had not yet been acted upon. We examined whether this inspection preference generalizes to real-world events, and whether it is (vs. is not) modulated by how often people see recent and future events acted out.