Publications by authors named "A P Krugliak"

Objects that are congruent with a scene are recognised more efficiently than objects that are incongruent. Further, semantic integration of incongruent objects elicits a stronger N300/N400 EEG component. Yet the time course and mechanisms by which contextual information supports access to semantic object information remain unclear.
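
A minimal sketch of how an N300/N400-style congruency effect can be quantified, assuming simulated single-channel epochs rather than the authors' actual data or pipeline; the 250-500 ms window, sampling rate, and trial counts are illustrative assumptions.

```python
# Sketch (not the authors' analysis): quantify an N300/N400-like congruency
# effect as the mean ERP amplitude in an assumed 250-500 ms window.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sfreq = 250                                # assumed sampling rate (Hz)
times = np.arange(-0.2, 0.8, 1 / sfreq)    # epoch from -200 to 800 ms

def simulate_epochs(n_trials, effect_uv):
    """Simulate single-channel epochs with a negative deflection around 400 ms."""
    deflection = effect_uv * np.exp(-((times - 0.4) ** 2) / (2 * 0.05 ** 2))
    noise = rng.normal(0, 5.0, (n_trials, times.size))
    return noise - deflection              # negative-going component

congruent = simulate_epochs(80, effect_uv=2.0)
incongruent = simulate_epochs(80, effect_uv=5.0)   # assumed larger N300/N400

# Mean amplitude per trial in the analysis window, then a two-sample test.
win = (times >= 0.25) & (times <= 0.50)
cong_amp = congruent[:, win].mean(axis=1)
incong_amp = incongruent[:, win].mean(axis=1)
t, p = stats.ttest_ind(incong_amp, cong_amp)
print(f"congruent {cong_amp.mean():.2f} uV, incongruent {incong_amp.mean():.2f} uV, p = {p:.3g}")
```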

We have a great capacity to remember a large number of items, yet memory is selective. While multiple factors dictate why we remember some things and not others, it is increasingly acknowledged that some objects are more memorable than others. Recent studies show that semantically distinctive objects are better remembered, as are objects located in expected scene contexts.

The environments we live in influence our ability to recognise objects, with recognition being facilitated when objects appear in expected (congruent) rather than unexpected (incongruent) locations. However, these findings are based on experiments in which the object is isolated from its environment. Moreover, it is not clear which components of the recognition process are affected by the environment.

Our visual environment impacts multiple aspects of cognition, including perception, attention, and memory, yet most studies traditionally remove or control the external environment. As a result, we have a limited understanding of neurocognitive processes beyond the controlled lab environment. Here, we aim to study neural processes in real-world environments, while also maintaining a degree of control over perception.

The orientation of a visual grating can be decoded from human primary visual cortex (V1) using functional magnetic resonance imaging (fMRI) at conventional resolutions (2-3 mm voxel width, 3T scanner). It is unclear to what extent this information originates from different spatial scales of neuronal selectivity, ranging from orientation columns to global areal maps. According to the global-areal-map account, fMRI orientation decoding relies exclusively on fMRI voxels in V1 exhibiting a radial or vertical preference.
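
A minimal sketch of the kind of multivoxel orientation decoding the abstract refers to, assuming simulated V1 voxel patterns and a generic linear classifier; the voxel counts, noise levels, and per-voxel orientation biases are illustrative assumptions, not the study's data or method.

```python
# Sketch (not the paper's pipeline): cross-validated linear decoding of grating
# orientation from simulated fMRI voxel patterns.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_voxels = 200            # assumed number of V1 voxels in the ROI
n_trials = 100            # assumed trials per orientation
orientations = [45, 135]  # two grating orientations (degrees)

# Simulate a weak, voxel-specific orientation preference plus noise, standing
# in for whatever spatial scale of selectivity actually drives decoding.
pref = rng.normal(0, 0.5, n_voxels)            # per-voxel orientation bias
X, y = [], []
for label, ori in enumerate(orientations):
    signal = pref if ori == 45 else -pref      # opposite bias for the other grating
    trials = signal + rng.normal(0, 2.0, (n_trials, n_voxels))
    X.append(trials)
    y.append(np.full(n_trials, label))
X, y = np.vstack(X), np.concatenate(y)

# Above-chance cross-validated accuracy indicates that the voxel pattern
# carries orientation information.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```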