Publications by authors named "Troscianko T"

Perception of scenes has typically been investigated using static or simplified visual displays. How attention is used to perceive and evaluate dynamic, realistic scenes is less well understood, in part because of the difficulty of comparing eye fixations to moving stimuli across observers. When the task and stimulus are common across observers, consistent fixation on a region can indicate that it has high goal-based relevance.

Clutter is encountered throughout everyday life, from a messy desk to a crowded street, and it can interfere with our ability to search such environments for objects, like our car keys or the person we are trying to meet. A number of computational models of clutter have been proposed and shown to work well for artificial and other simplified scene search tasks.
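
As a deliberately simple illustration of what a clutter model computes, the sketch below scores a greyscale image by its edge density. This is a generic stand-in, assuming a numpy image array and an arbitrary gradient threshold; it is not one of the specific clutter models evaluated in the paper.

```python
# A minimal edge-density clutter measure: the proportion of pixels whose
# normalised gradient magnitude exceeds a threshold. Illustrative only;
# the threshold value is an arbitrary assumption.
import numpy as np
from scipy import ndimage

def edge_density_clutter(gray_image: np.ndarray, threshold: float = 0.1) -> float:
    img = gray_image.astype(float)
    gx = ndimage.sobel(img, axis=1)               # horizontal gradient
    gy = ndimage.sobel(img, axis=0)               # vertical gradient
    magnitude = np.hypot(gx, gy)
    magnitude /= magnitude.max() + 1e-12          # normalise to [0, 1]
    return float(np.mean(magnitude > threshold))  # fraction of "edge" pixels
```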

Over the last decade, television screens and display monitors have increased considerably in size, but has this improved our televisual experience? Our working hypothesis was that audiences adopt a general strategy that "bigger is better." However, as our visual perceptions do not tap directly into basic retinal image properties such as retinal image size (C. A.

Various visual functions decline in ageing and even more so in patients with Alzheimer's disease (AD). Here we investigated whether the complex visual processes involved in ignoring illumination-related variability (specifically, cast shadows) in visual scenes may also be compromised. Participants searched for a discrepant target among items which appeared as posts with shadows cast by light-from-above when upright, but as angled objects when inverted.

Low-level stimulus salience and task relevance together determine the human fixation priority assigned to scene locations (Fecteau and Munoz in Trends Cogn Sci 10(8):382-390, 2006). However, surprisingly little is known about the contribution of task relevance to eye movements during real-world visual search where stimuli are in constant motion and where the 'target' for the visual search is abstract and semantic in nature. Here, we investigate this issue when participants continuously search an array of four closed-circuit television (CCTV) screens for suspicious events.

We conducted suprathreshold discrimination experiments to compare how natural-scene information is processed in central and peripheral vision (16° eccentricity). Observers' ratings of the perceived magnitude of changes in naturalistic scenes were lower for peripheral than for foveal viewing, and peripheral orientation changes were rated lower than peripheral colour changes. A V1-based Visual Difference Predictor model of the magnitudes of perceived foveal change was adapted to match the sinusoidal grating sensitivities of peripheral vision, but it could not explain why the ratings for changes in peripheral stimuli were so reduced.
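
For illustration only, one simple way to adapt a foveal model to the periphery is to shift a foveal contrast sensitivity function (CSF) towards lower spatial frequencies with eccentricity. The sketch below assumes a Mannos-Sakrison-style CSF and an E2-style eccentricity constant; these are assumptions made for the sketch, not the parameters of the model used in the study.

```python
# Sketch: attenuate a foveal CSF with eccentricity by scaling the effective
# spatial frequency. The CSF form and the e2 constant are illustrative assumptions.
import numpy as np

def foveal_csf(spatial_freq_cpd):
    f = np.asarray(spatial_freq_cpd, dtype=float)
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def peripheral_csf(spatial_freq_cpd, eccentricity_deg, e2=2.5):
    scale = 1.0 + eccentricity_deg / e2           # E2-style eccentricity scaling
    return foveal_csf(np.asarray(spatial_freq_cpd, dtype=float) * scale)

freqs = np.array([0.5, 1, 2, 4, 8, 16])           # cycles per degree
print(peripheral_csf(freqs, eccentricity_deg=16.0))
```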

Recent research indicates a direct relationship between low-level color features and visual attention under natural conditions. However, the design of these studies allows only correlational observations and no inference about mechanisms. Here we go a step further to examine the nature of the influence of color features on overt attention in an environment in which trichromatic color vision is advantageous.

We measured the temporal relationship between eye movements and manual responses while experts and novices watched a videotaped football match. Observers used a joystick to continuously indicate the likelihood of an imminent goal. We measured correlations between manual responses and between-subjects variability in eye position.

The Euclidean and MAX metrics have been widely used to model cue summation psychophysically and computationally. Both rules are special cases of a more general Minkowski summation rule, (sum_i c_i^m)^(1/m), with m = 2 and m = ∞, respectively. In vision research, Minkowski summation with power m = 3-4 has been shown to be a superior model of how subthreshold components sum to give an overall detection threshold.
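
The short sketch below, with made-up cue values, shows the rule itself: m = 2 recovers the Euclidean rule and the limit m -> infinity recovers the MAX rule.

```python
# Minkowski summation of cue responses: (sum_i c_i**m) ** (1/m).
# The cue values below are hypothetical.
import numpy as np

def minkowski_sum(responses, m):
    r = np.asarray(responses, dtype=float)
    if np.isinf(m):
        return float(r.max())                 # limiting case: MAX rule
    return float((r ** m).sum() ** (1.0 / m))

cues = [0.6, 0.5, 0.3]
print(minkowski_sum(cues, 2))                 # Euclidean summation
print(minkowski_sum(cues, 3.5))               # exponent in the range reported for vision
print(minkowski_sum(cues, np.inf))            # MAX rule
```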

Simple everyday tasks, such as visual search, require a visual system that is sensitive to differences. Here we report how observers perceive changes in natural image stimuli, and what happens if objects change color, position, or identity, i.e.

We are studying how people perceive naturalistic suprathreshold changes in the colour, size, shape or location of items in images of natural scenes, using magnitude estimation ratings to characterise the sizes of the perceived changes in coloured photographs. We have implemented a computational model that tries to explain observers' ratings of these naturalistic differences between image pairs. We model the action-potential firing rates of millions of neurons, with linear and non-linear summation behaviour closely modelled on real V1 neurons.
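
The sketch below is a heavily simplified caricature of that kind of pipeline, assuming a pair of greyscale numpy images: linear filter responses, a compressive (Naka-Rushton-like) nonlinearity, and Minkowski pooling of the response differences. The two-orientation filter bank, the transducer constant and the pooling exponent are illustrative assumptions, not the model's actual components.

```python
# Simplified V1-style difference predictor: linear filtering, compressive
# transduction, and Minkowski pooling of response differences between two images.
import numpy as np
from scipy import ndimage

def v1_like_responses(image):
    img = image.astype(float)
    gx = ndimage.sobel(img, axis=1)               # crude stand-ins for oriented V1 filters
    gy = ndimage.sobel(img, axis=0)
    linear = np.stack([np.abs(gx), np.abs(gy)])
    return linear ** 2 / (linear ** 2 + 0.05)     # compressive nonlinearity (arbitrary constant)

def predicted_difference(img_a, img_b, m=3.5):
    diff = np.abs(v1_like_responses(img_a) - v1_like_responses(img_b))
    return float((diff ** m).sum() ** (1.0 / m))  # Minkowski-pooled response difference
```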

Despite embodying fundamentally different assumptions about attentional allocation, a wide range of popular models of attention include a max-of-outputs mechanism for selection. Within these models, attention is directed to the item with the most extreme value along a perceptual dimension via, for example, a winner-take-all mechanism. From a detection-theoretic standpoint, this MAX-observer can be optimal in specific situations; however, under distracter heterogeneity manipulations or in natural visual scenes this is not always the case.
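
A minimal sketch of a MAX-observer decision rule is given below, with made-up item-response distributions: the observer reports "target present" whenever the largest item response exceeds a criterion. The distribution parameters and criterion are hypothetical.

```python
# Toy MAX observer for yes/no search: respond "present" if the maximum
# item response exceeds a criterion. All numbers are hypothetical.
import numpy as np

def max_observer(item_responses, criterion):
    return bool(np.max(item_responses) > criterion)

rng = np.random.default_rng(0)
distracters = rng.normal(0.0, 1.0, size=7)        # homogeneous distracter responses
target = rng.normal(1.5, 1.0)                     # one stronger target response
display = np.append(distracters, target)
print(max_observer(display, criterion=1.0))
```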

Deficits in inefficient visual search task performance in Alzheimer's disease (AD) have been linked both to a general depletion of attentional resources and to a specific difficulty in performing conjunction discriminations. It has been difficult to examine the latter proposal because the uniqueness of conjunction search as compared to other visual search tasks has remained a matter of debate. We explored both claims by measuring pupil dilation, as a measure of resource application, while patients with AD performed a conjunction search task and two single-feature search tasks matched for difficulty in healthy individuals.

Differences in the processing mechanisms underlying visual feature and conjunction search are still under debate, one problem being a common emphasis on performance measures (speed and accuracy), which do not necessarily provide insight into the underlying processing principles. Here, eye movements and pupil dilation were used to investigate sampling strategy and processing load during performance of a conjunction and two feature-search tasks, with younger (18-27 years) and healthy older (61-83 years) age groups compared for evidence of differential age-related changes. The tasks involved equivalent processing time per item, were controlled in terms of target-distractor similarity, and did not allow perceptual grouping.

Shadows may be "discounted" in human visual perception because they do not provide stable, lighting-invariant information about the properties of objects in the environment. Using visual search, R. A.

How does an animal conceal itself from visual detection by other animals? This review paper seeks to identify general principles that may apply in this broad area. It considers mechanisms of visual encoding, of grouping and object encoding, and of search. In most cases, the evidence base comes from studies of humans or species whose vision approximates to that of humans.

Natural visual scenes are rich in information, and any neural system analysing them must piece together the many messages from large arrays of diverse feature detectors. It is known how threshold detection of compound visual stimuli (sinusoidal gratings) is determined by their components' thresholds. We investigate whether similar combination rules apply to the perception of the complex and suprathreshold visual elements in naturalistic visual images.

Accurate quality assessment of fused images, such as combined visible and infrared radiation images, has become increasingly important with the rise in the use of image fusion systems. We bring together three approaches: two objective tasks (local target analysis and global target location) applied to two scenarios, subjective quality ratings, and three computational metrics. Contrast pyramid, shift-invariant discrete wavelet transform, and dual-tree complex wavelet transform fusion schemes are applied, along with varying levels of JPEG2000 compression.
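
As an illustration of the general idea behind such computational metrics, the sketch below scores a fused image by summing its mutual information with each source image, assuming greyscale numpy arrays; it is a generic example, not necessarily one of the three metrics used in the study.

```python
# Generic fusion metric: mutual information between the fused image and
# each source image, summed over the two sources.
import numpy as np

def mutual_information(a, b, bins=64):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)           # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)           # marginal of image b
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def fusion_quality(fused, visible, infrared):
    return mutual_information(fused, visible) + mutual_information(fused, infrared)
```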

The increased interest in image fusion (combining images of two or more modalities such as infrared and visible light radiation) has led to a need for accurate and reliable image assessment methods. Previous work has often relied upon subjective quality ratings combined with some form of computational metric analysis. However, we have shown in previous work that such methods do not correlate well with how people perform in actual tasks utilising fused images.

Weighted salience models are a popular framework for image-driven visual attentional processes. These models operate by sampling the visual environment, calculating feature maps, combining them in a weighted sum, and using the result to determine where the eye will fixate next. We examine these stages in turn.
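
A minimal sketch of that pipeline is given below, using two toy feature maps (local luminance contrast and a crude red-green difference) with assumed equal weights; it illustrates the framework rather than any specific published salience model.

```python
# Toy weighted-salience pipeline: feature maps -> normalisation -> weighted
# sum -> peak location as the predicted next fixation.
import numpy as np
from scipy import ndimage

def normalise(m):
    return (m - m.min()) / (m.max() - m.min() + 1e-12)

def salience_map(rgb, weights=(0.5, 0.5)):
    img = np.asarray(rgb, dtype=float)
    gray = img.mean(axis=2)
    contrast = ndimage.generic_filter(gray, np.std, size=9)   # local luminance contrast
    colour = np.abs(img[..., 0] - img[..., 1])                # crude red-green opponency
    maps = [normalise(contrast), normalise(colour)]
    return sum(w * f for w, f in zip(weights, maps))          # weighted combination

def next_fixation(rgb):
    s = salience_map(rgb)
    return np.unravel_index(np.argmax(s), s.shape)            # (row, col) of the salience peak
```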

We investigated the processing effort during visual search and counting tasks using a pupil dilation measure. Search difficulty was manipulated by varying the number of distractors as well as the heterogeneity of the distractors. More difficult visual search resulted in more pupil dilation than did less difficult search.
