The Euclidean and MAX metrics have been widely used to model cue summation psychophysically and computationally. Both rules are special cases of a more general Minkowski summation rule, in which cues of magnitude c_i combine as (Σ c_i^m)^(1/m), with m = 2 and m = ∞, respectively. In vision research, Minkowski summation with power m = 3-4 has been shown to be a superior model of how subthreshold components sum to give an overall detection threshold. We previously reported that Minkowski summation with power m = 2.84 accurately models summation of suprathreshold visual cues in photographs. In four suprathreshold discrimination experiments, we confirm those findings with new visual stimuli and extend the applicability of this rule to cue combination in auditory stimuli (musical sequences and phonetic utterances, where m = 2.95 and 2.54, respectively) and cross-modal stimuli (m = 2.56). In all cases, Minkowski summation with power m = 2.5-3 outperforms the Euclidean and MAX operator models. We propose that this reflects the summation of neuronal responses that are not entirely independent but show some correlation in their magnitudes. Our findings are consistent with electrophysiological research demonstrating signal correlations (r = 0.1-0.2) between sensory neurons presented with natural stimuli.
Full text: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3061140 (PMC)
DOI: http://dx.doi.org/10.1098/rspb.2010.1888
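To make the rule concrete, here is a minimal sketch of Minkowski summation in Python; the function name `minkowski_sum` and the cue magnitudes are illustrative assumptions, not values from the paper.

```python
import numpy as np

def minkowski_sum(cues, m):
    """Combine cue magnitudes with the Minkowski rule: (sum |c_i|^m)^(1/m)."""
    cues = np.abs(np.asarray(cues, dtype=float))
    return (cues ** m).sum() ** (1.0 / m)

cues = [1.0, 0.8, 0.5]            # hypothetical cue magnitudes
print(minkowski_sum(cues, 2))     # m = 2: Euclidean summation
print(minkowski_sum(cues, 2.84))  # m ~ 2.8: the power fitted to suprathreshold visual data
print(minkowski_sum(cues, 1000))  # very large m approaches the MAX operator
print(max(cues))                  # MAX rule for comparison
```

Because the largest term increasingly dominates the sum as m grows, the rule interpolates smoothly between linear summation (m = 1), Euclidean summation (m = 2), and the winner-take-all MAX operator (m → ∞).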
J Vis
January 2019
Emmanuel College, University of Cambridge, Cambridge, UK.
We have been developing a computational visual difference predictor model that can predict how human observers rate the perceived magnitude of suprathreshold differences between pairs of full-color naturalistic scenes (To, Lovell, Troscianko, & Tolhurst, 2010). The model is based closely on V1 neurophysiology and has recently been updated to implement more realistically the sequential application of nonlinear inhibitions (contrast normalization followed by surround suppression; To, Chirimuuta, & Tolhurst, 2017). The model was originally based on a reliable luminance model (Watson & Solomon, 1997), which we have extended to the red/green and blue/yellow opponent planes, assuming that the three planes (luminance, red/green, and blue/yellow) can be modeled similarly to each other with narrow-band oriented filters.
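As a rough illustration of the two inhibition stages mentioned above, the sketch below applies divisive contrast normalization and then surround suppression to a one-dimensional array of filter responses. This is a generic textbook formulation, not the published model; the constants `sigma`, `k`, and the neighbourhood size are arbitrary assumptions.

```python
import numpy as np

def normalize_then_suppress(resp, sigma=0.1, k=0.5, surround=2):
    """Schematic sequential nonlinear inhibition: divisive contrast
    normalization by pooled response energy, followed by divisive
    suppression by the mean response of a local surround.
    All parameter values here are illustrative."""
    resp = np.asarray(resp, dtype=float)
    # Stage 1: contrast normalization (divide by pooled RMS response).
    pooled = np.sqrt(np.mean(resp ** 2))
    r = resp / (sigma + pooled)
    # Stage 2: surround suppression (divide by mean of neighbouring units).
    win = 2 * surround + 1
    neigh = np.convolve(np.pad(r, surround, mode="edge"),
                        np.ones(win) / win, mode="valid")
    return r / (1.0 + k * np.abs(neigh))

print(normalize_then_suppress([0.2, 1.0, 0.9, 0.1, 0.0]))
```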
J Vis
January 2015
Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge, UK.
We investigate whether a computational model of V1 can predict how observers rate perceptual differences between paired movie clips of natural scenes. Observers viewed 198 pairs of movie clips, rating on a magnitude scale how different the two clips appeared to them. Sixty-six of the movie pairs were naturalistic; the remainder were low-pass or high-pass spatially filtered versions of those originals.
IEEE Trans Neural Netw Learn Syst
December 2014
The properties of the VC-dimension under various compositions are well understood, but this is much less the case for classes of continuous functions. In this brief, we show that a commonly used scale-sensitive dimension, Vγ, is much less well behaved under Minkowski summation than its VC cousin, while the fat-shattering dimension retains some compositional similarity to the VC-dimension. As an application, we analyze the fat-shattering dimension of trigonometric functions and series.
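For readers unfamiliar with the terms, the two notions referenced in this abstract can be stated as follows; these are the standard textbook definitions, not reproduced from the brief itself.

```latex
% Minkowski summation of two real-valued function classes:
\[
  F + G \;=\; \{\, f + g \;:\; f \in F,\ g \in G \,\}.
\]
% Fat-shattering dimension: points x_1,\dots,x_n are \gamma-shattered by F
% if there exist witnesses r_1,\dots,r_n such that for every sign pattern
% b \in \{0,1\}^n some f \in F satisfies f(x_i) \ge r_i + \gamma when
% b_i = 1 and f(x_i) \le r_i - \gamma when b_i = 0.
\[
  \operatorname{fat}_F(\gamma) \;=\;
  \max\bigl\{\, n \;:\; \text{some } x_1,\dots,x_n \text{ are }
  \gamma\text{-shattered by } F \,\bigr\}.
\]
```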
Sci Rep
March 2014
McGill Vision Research, Department of Ophthalmology H4.14, 687 Pine Ave West, Montreal, Quebec, H3A 1A1 Canada.
We measure the orientation tuning of red-green colour and luminance vision at low (0.375 c/deg) and mid (1.5 c/deg) spatial frequencies using the low-contrast psychophysical method of subthreshold summation.
IEEE Trans Syst Man Cybern B Cybern
April 2012
School of Computer Engineering, Nanyang Technological University, Singapore 639798.
We study the use of machine learning for visual quality evaluation with comprehensive singular value decomposition (SVD)-based visual features. In this paper, we first introduce the two-stage process and related work on existing visual quality metrics, followed by an in-depth analysis of SVD for visual quality assessment. Singular values and singular vectors form the selected features for visual quality assessment.
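As a toy illustration of singular values serving as quality features, the sketch below compares the normalized singular-value spectra of a reference block and a distorted block. The function name, block size, and distance measure are assumptions for illustration; they do not reproduce the paper's learned two-stage metric, which also uses singular vectors.

```python
import numpy as np

def svd_features(img):
    """Return the normalized singular-value spectrum of an image block,
    a simple SVD-based feature for quality assessment (illustrative only)."""
    _, s, _ = np.linalg.svd(np.asarray(img, dtype=float), full_matrices=False)
    return s / (np.linalg.norm(s) + 1e-12)

# Hypothetical usage: compare a reference block with a distorted version.
ref = np.random.rand(8, 8)
dist = ref + 0.05 * np.random.randn(8, 8)
score = np.linalg.norm(svd_features(ref) - svd_features(dist))
print(score)  # larger deviation in the SVD spectrum suggests lower quality
```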