Publications by authors named "Gerry T M Altmann"

In a series of sentence-picture verification studies, we contrasted, for example, "… choose the balloon" with "… inflate the balloon" and "… the inflated balloon" to examine the degree to which different representational components of event representation (specifically, the different object states entailed by the inflating event; minimally, the balloon in its uninflated and inflated states) are jointly activated after state-change verbs and past participles derived from them. Experiments 1 and 2 showed that the initial and end states are both activated after state-change verbs, but that the initial state is considerably less accessible after participles. Experiment 3 showed that intensifier adverbs (e.

Historically, the development of valid and reliable methods for assessing higher-order cognitive abilities (e.g., rule learning and transfer) has been difficult in rodent models.

Online research methods have the potential to facilitate equitable accessibility to otherwise-expensive research resources, as well as to more diverse populations and language combinations than currently populate our studies. In psycholinguistics specifically, webcam-based eye tracking is emerging as a powerful online tool capable of capturing sentence processing effects in real time. The present paper asks whether webcam-based eye tracking provides the necessary granularity to replicate effects, crucially both large and small, that tracker-based eye tracking has shown.

Context is critical for conceptual processing, but the mechanism underpinning its encoding and reinstantiation during abstract concept processing is unclear. Context may be especially important for abstract concepts; we investigated whether episodic context is recruited differently when processing abstract compared with concrete concepts. Experiments 1 and 2 presented abstract and concrete words in arbitrary contexts at encoding (Experiment 1: red/green colored frames; Experiment 2: male/female voices).

Under a theory of event representations that defines events as dynamic changes in objects across both time and space, as in the proposal of Intersecting Object Histories (Altmann & Ekves, 2019), the encoding of changes in state is a fundamental first step in building richer representations of events. In other words, there is an inherent dynamic that is captured by our knowledge of events. In the present study, we evaluated the degree to which this dynamic was inferable from just the linguistic signal, without access to visual, sensory, and embodied experience, using recurrent neural networks (RNNs).
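The snippet does not detail the modelling setup, so as a rough illustration of the general approach it describes (a recurrent network trained only on text, then probed for the object state implied by a state-change versus non-state-change verb), a toy PyTorch sketch might look like the following. The class name, layer sizes, and the two-sentence "corpus" are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: a recurrent network trained on text alone,
# then probed for object-state information (intact vs. changed).
# Names, sizes, and the toy data are assumptions, not the authors' code.
import torch
import torch.nn as nn

class StateProbeRNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=32, hidden_dim=64, n_states=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.state_probe = nn.Linear(hidden_dim, n_states)  # probe: which object state?

    def forward(self, token_ids):
        hidden, _ = self.rnn(self.embed(token_ids))
        # Use the final hidden state as a summary of the described event
        return self.state_probe(hidden[:, -1, :])

# Toy vocabulary and two minimal "sentences"; a real study would use a large corpus.
vocab = {"<pad>": 0, "the": 1, "chef": 2, "chopped": 3, "weighed": 4, "onion": 5}
sentences = torch.tensor([[1, 2, 3, 1, 5],   # "the chef chopped the onion" -> changed state
                          [1, 2, 4, 1, 5]])  # "the chef weighed the onion" -> intact state
labels = torch.tensor([1, 0])

model = StateProbeRNN(vocab_size=len(vocab))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(200):                      # tiny toy training loop
    optimizer.zero_grad()
    loss = loss_fn(model(sentences), labels)
    loss.backward()
    optimizer.step()

print(model(sentences).argmax(dim=1))         # should typically print tensor([1, 0])
```

The sketch only shows the shape of the train-then-probe logic; the actual study would involve far richer training text and probing targets.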

We link cleansing effects to contemporary cognitive theories via an account of event representation (intersecting object histories) that provides an explicit, neurally plausible mechanism for encoding objects (e.g., the self) and their associations (with other entities) across time.

Understanding the time-course of event knowledge activation is crucial for theories of language comprehension. We report two experiments using the 'visual world paradigm' (VWP) that investigated the dynamic mapping between object-state representations and real-time language processing. In Experiment 1, participants heard sentences that described events resulting in either a substantial change of state (e.

Gilead et al.'s approach to human cognition places abstraction and prediction at the heart of "mental travel" under a "representational diversity" perspective that embraces foundational concepts in cognitive science. But it gives insufficient credit to the possibility that the process of abstraction produces a gradient, and it underestimates the importance of a highly influential domain in predictive cognition: language and, relatedly, the emergence of experientially based structure through time.

Abstract concepts differ from concrete concepts in several ways. Here, we focus on the objects and relations that constitute an abstract concept (e.g.

To understand language, people form mental representations of described situations. Linguistic cues are known to influence these representations. In the present study, participants were asked to verify whether the object presented in a picture was mentioned in the preceding words.

We offer a new account of event representation based on those aspects of object representation that encode an object's history, and which convey the distinct states that an object has experienced across time, minimally reflecting the before and after of whatever changes the object undergoes as an event unfolds. Our intention is to account for the content of event representations. For an event that can be described as "the chef chopped the onion", the event is defined by the changes in state and location, across time, of the onion, the chef, and any instruments that (might have) mediated the interaction between the chef and the onion.

How are relationships between concepts affected by the interplay between short-term contextual constraints and long-term conceptual knowledge? Across two studies we investigate the consequence of changes in visual context for the dynamics of conceptual processing. Participants' eye movements were tracked as they viewed a visual depiction of e.g.

Statistical approaches to emergent knowledge have tended to focus on the process by which experience of individual episodes accumulates into generalizable experience across episodes. However, there is a seemingly opposite, but equally critical, process that such experience affords: the process by which, from a space of types (e.g.

So-called "looks-at-nothing" fixations have previously been used to show that recalling "what" also elicits the recall of "where" this was. Here, we present evidence from an eye-tracking study which shows that disrupting looks to "there" does not disrupt recalling what was there, nor do (anticipatory) looks to "there" facilitate recalling what was there. Therefore, our results suggest that recalling "where" does not recall "what".

Article Synopsis
  • Successful language comprehension involves matching words to their meanings in the real world, but ambiguity makes this difficult.
  • The text examines two scenarios where a single word can be ambiguous due to differing states of an object: one involving two states of the same object, and the other involving two distinct objects.
  • fMRI research shows that the left posterior ventrolateral prefrontal cortex (pVLPFC) activates when dealing with conflicts from ambiguous states, confirming that ambiguity is more problematic when the states can't coexist, as seen with same-token discourses.

We investigated the retrieval of location information, and the deployment of attention to these locations, following (described) event-related location changes. In two visual world experiments, listeners viewed arrays with containers like a bowl, jar, pan, and jug, while hearing sentences like "The boy will pour the sweetcorn from the bowl into the jar, and he will pour the gravy from the pan into the jug. And then, he will taste the sweetcorn".

Understanding events often requires recognizing unique stimuli as alternative, mutually exclusive states of the same persisting object. Using fMRI, we examined the neural mechanisms underlying the representation of object states and object-state changes. We found that subjective ratings of visual dissimilarity between a depicted object and an unseen alternative state of that object predicted the corresponding multivoxel pattern dissimilarity in early visual cortex during an imagery task, while late visual cortex patterns tracked dissimilarity among distinct objects.
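The rating-to-pattern analysis described here is in the spirit of representational similarity analysis; as a minimal sketch of that general logic, assuming made-up rating and voxel-pattern arrays (the item counts, the correlation-distance metric, and the Spearman comparison are illustrative assumptions, not the paper's actual pipeline):

```python
# Illustrative sketch of a representational-similarity-style analysis:
# do subjective dissimilarity ratings predict multivoxel pattern dissimilarity?
# The random arrays and metric choices are assumptions for illustration only.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_items, n_voxels = 12, 200
ratings = rng.uniform(1, 7, size=(n_items, n_items))   # pairwise dissimilarity ratings
ratings = (ratings + ratings.T) / 2                     # symmetrize the rating matrix
patterns = rng.normal(size=(n_items, n_voxels))         # one multivoxel pattern per item

# Neural dissimilarity: correlation distance between item patterns (condensed form)
neural_dissim = pdist(patterns, metric="correlation")

# Matching upper triangle of the behavioural rating matrix (same pair ordering)
iu = np.triu_indices(n_items, k=1)
rating_dissim = ratings[iu]

rho, p = spearmanr(rating_dissim, neural_dissim)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```

In practice the neural dissimilarities would come from estimated response patterns per object state and region of interest, but the correlation between a behavioural and a neural dissimilarity structure is the core comparison.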

Language is more than a source of information for accessing higher-order conceptual knowledge. Indeed, language may determine how people perceive and interpret visual stimuli. Visual processing in linguistic contexts, for instance, mirrors language processing and happens incrementally, rather than through variously-oriented fixations over a particular scene.

When an object is described as changing state during an event, do the representations of those states compete? The distinct states they represent cannot coexist at any one moment in time, yet each representation must be retrievable at the cost of suppressing the other possible object states. We used functional magnetic resonance imaging of human participants to test whether such competition does occur, and whether this competition between object states recruits brain areas sensitive to other forms of conflict. In Experiment 1, the same object was changed either substantially or minimally by one of two actions.

Participants saw a small number of objects in a visual display and performed a visual-detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In Experiments 1 and 2, the cue was an isoluminant color change and participants generated an eye movement to the target object.

An auditory sentence comprehension task with 32 English-speaking six- to eight-year-old children investigated the extent to which the integration of contextual and structural cues was mediated by verbal memory span. Spoken relative clause sentences were accompanied by visual context pictures which fully (depicting the actions described within the relative clause) or partially (depicting several referents) met the pragmatic assumptions of relativization. Comprehension of the main and relative clauses of centre-embedded and right-branching structures was compared for each context.

The delay between the signal to move the eyes and the execution of the corresponding eye movement is variable and skewed, with an early peak followed by a considerable tail. This skewed distribution renders the answer to the question "What is the delay between language input and saccade execution?" problematic; for a given task, there is no single number, only a distribution of numbers. Here, two previously published studies are reanalysed, whose designs enable us to answer, instead, the question: how long does it take, as the language unfolds, for the oculomotor system to demonstrate sensitivity to the distinction between "signal" (eye movements due to the unfolding language) and "noise" (eye movements due to extraneous factors)? In two studies, participants heard either 'the man…' or 'the girl…', and the distribution of launch times towards the concurrently, or previously, depicted man in response to these two inputs was calculated.
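One way to make the signal-versus-noise comparison concrete is to track, time point by time point, the cumulative proportion of saccades launched in each condition; the sketch below does this on simulated skewed launch-time data (the gamma-distributed samples and the coarse Mann-Whitney summary are illustrative assumptions, not the reanalysis reported in the paper):

```python
# Illustrative sketch: when do launch-time distributions for "signal" trials
# (looks driven by the unfolding language) and "noise" trials pull apart?
# The simulated data and the simple summary test are assumptions for illustration.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)

# Simulated saccade launch times (ms from word onset): skewed, early peak + long tail
noise_launches = rng.gamma(shape=2.0, scale=150.0, size=400) + 80
signal_launches = rng.gamma(shape=2.0, scale=120.0, size=400) + 80

# Cumulative proportion of saccades launched by each successive time point
for t in range(100, 1000, 100):
    p_signal = np.mean(signal_launches <= t)
    p_noise = np.mean(noise_launches <= t)
    print(f"{t:4d} ms: signal {p_signal:.2f} vs noise {p_noise:.2f}")

# A single distribution-level comparison (Mann-Whitney U) as a coarse summary
stat, p = mannwhitneyu(signal_launches, noise_launches)
print(f"Mann-Whitney U = {stat:.0f}, p = {p:.4f}")
```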

Three eye-tracking experiments investigated the influence of stored colour knowledge, perceived surface colour, and conceptual category of visual objects on language-mediated overt attention. Participants heard spoken target words whose concepts are associated with a diagnostic colour (e.g.
