Publications by authors named "Sivalogeswaran Ratnasingam"

Languages vary in their number of color terms. A widely accepted theory proposes that languages evolve, acquiring color terms in a stereotyped sequence. This theory, due to Berlin and Kay (BK), is supported by analyses of the best exemplars ("focal colors") of basic color terms in the World Color Survey (WCS) of 110 languages.

We hypothesized that the parts of scenes identified by human observers as "objects" have color properties distinct from those of backgrounds, and that the brain uses this information for object recognition. To test this hypothesis, we examined the color statistics of naturally and artificially colored objects and backgrounds in a database of over 20,000 images annotated with object labels. Objects tended to be warmer colored (L-cone response > M-cone response) and more saturated than backgrounds.
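
As a rough illustration of how such statistics might be computed, the sketch below derives a warmth index (positive when the L-cone response exceeds the M-cone response) and a simple saturation index from labeled pixel regions. The formulas and variable names are assumptions for illustration, not the measures used in the study.

```python
import numpy as np

def warmth_and_saturation(lms_pixels):
    """Illustrative color statistics for an image region given as an (N, 3)
    array of L, M, S cone responses (a hypothetical pipeline, not the study's).

    - warmth: mean L-M difference normalized by luminance (L+M); positive
      values mean the L-cone response exceeds the M-cone response on average.
    - saturation: mean magnitude of a crude cone-opponent chromatic contrast,
      a simple stand-in for the saturation measure used in the study.
    """
    L, M, S = lms_pixels[:, 0], lms_pixels[:, 1], lms_pixels[:, 2]
    lum = L + M + 1e-12                       # avoid division by zero
    warmth = np.mean((L - M) / lum)           # > 0 means warmer (reddish) on average
    lm_contrast = (L - M) / lum               # L/M opponent contrast
    s_contrast = (S - 0.5 * lum) / lum        # crude S-cone opponent contrast
    saturation = np.mean(np.hypot(lm_contrast, s_contrast))
    return warmth, saturation

# Compare pixels inside object masks against background pixels (placeholders here).
obj = np.random.rand(1000, 3)
bkg = np.random.rand(5000, 3)
print(warmth_and_saturation(obj), warmth_and_saturation(bkg))
```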

What determines how languages categorize colors? We analyzed results of the World Color Survey (WCS) of 110 languages to show that despite gross differences across languages, communication of chromatic chips is always better for warm colors (yellows/reds) than for cool colors (blues/greens). We present an analysis of color statistics in a large databank of natural images curated by human observers for salient objects and show that objects tend to have warm rather than cool colors. These results suggest that the cross-linguistic similarity in color-naming efficiency reflects colors of universal usefulness and provide an account of a principle (color use) that governs how color categories come about.
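
One standard information-theoretic way to score how well a chip is communicated from naming data is its average surprisal under a simple speaker/listener model. The sketch below assumes that measure and a uniform prior over chips; it is not necessarily the exact analysis used in the survey study.

```python
import numpy as np

def communication_cost(name_counts):
    """Average surprisal of each chip under a simple speaker/listener model.

    name_counts: (n_chips, n_terms) array counting how often each color term
    was used for each chip. Lower cost = the chip is communicated more
    efficiently. An illustrative measure, not necessarily the study's exact one.
    """
    counts = np.asarray(name_counts, dtype=float)
    p_w_given_c = counts / (counts.sum(axis=1, keepdims=True) + 1e-12)  # speaker: P(term | chip)
    p_c_given_w = counts / (counts.sum(axis=0, keepdims=True) + 1e-12)  # listener: P(chip | term), uniform chip prior
    with np.errstate(divide="ignore"):
        surprisal = -np.log2(p_c_given_w)
    surprisal[~np.isfinite(surprisal)] = 0.0                            # zero-count cells contribute nothing
    return (p_w_given_c * surprisal).sum(axis=1)                        # expected surprisal per chip

# Toy example: 3 chips named with 2 color terms; consistently named chips score lower.
print(communication_cost([[9, 1], [5, 5], [1, 9]]))
```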

The perceived color of a uniform image patch depends not only on the spectral content of the light that reaches the eye but also on its context. One of the most extensively studied forms of context dependence is a simultaneous contrast display: a center-surround display containing a homogeneous target embedded in a homogeneous surround. A number of models have been proposed to account for the chromatic transformations of targets induced by such surrounds, but they were typically derived in the restricted context of experiments using achromatic targets with surrounds that varied along the cardinal axes of color space.

Chromatic target patches embedded in a chromatically variegated surround appear less saturated than when they are embedded in an achromatic uniform surround (Brown & MacLeod, 1997), which can be construed either as a form of gamut expansion for targets on uniform surrounds or as a form of gamut compression for targets on variegated surrounds. Ekroll, Faul, and Niederée (2004) suggested that the difference in perceived chromaticity on the two surrounds is caused by a layered scene decomposition, wherein the increased saturation of targets on homogeneous surrounds is attributed to a decomposition of a target patch into a chromatically saturated transparent layer overlying an achromatic background. Here, we report asymmetric matching data showing that the perceived chromaticity difference observed on the two surrounds depends on the particular direction of chromatic variation applied to the variegated surround.

In this paper an algorithm is proposed to extract two illuminant-invariant chromaticity features from three image sensor responses. The algorithm extracts these chromaticity features at the pixel level and can therefore perform well in scenes with spatially non-uniform illumination. An approach is also proposed for using the algorithm with cameras of unknown sensitivity.
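
The general idea behind such features is a weighted difference of log sensor responses: under a Wien-approximation blackbody illuminant, weights that sum to zero cancel the intensity term, and a weight chosen from the sensor wavelengths also cancels the color-temperature term. The sketch below shows one such feature built from three channels with placeholder wavelengths; it illustrates the log-difference construction only and is not a reproduction of the paper's algorithm, which extracts two features.

```python
import numpy as np

def log_chroma_feature(r1, r2, r3, lam=(450e-9, 550e-9, 650e-9)):
    """One intensity- and color-temperature-free feature from three responses.

    Under a Wien-approximation blackbody illuminant and narrow-band sensors,
    log R_i = log(intensity) + log(reflectance_i) - c2/(lambda_i * T) + const,
    so a weighted log difference whose weights sum to zero removes intensity,
    and this choice of alpha also removes the 1/T (color temperature) term.
    The wavelengths here are placeholders, not a calibrated camera.
    """
    l1, l2, l3 = np.log(r1), np.log(r2), np.log(r3)
    inv1, inv2, inv3 = (1.0 / w for w in lam)
    alpha = (inv2 - inv3) / (inv1 - inv3)     # cancels the -c2/(lambda*T) terms
    return l2 - (alpha * l1 + (1.0 - alpha) * l3)

# Example on a single pixel's sensor responses (arbitrary linear units):
print(log_chroma_feature(0.42, 0.55, 0.31))
```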

In this paper, an algorithm is proposed to estimate the spectral power distribution of a light source at each pixel. The first step of the algorithm is to form a two-dimensional illuminant-invariant chromaticity space. To estimate the illuminant spectrum, generalized inverse estimation and Wiener estimation methods were applied.
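
Wiener estimation reconstructs a spectrum from a small number of sensor responses using a prior covariance learned from training spectra. The sketch below shows the generic Wiener estimator; the sensitivity matrix, training set, and noise variance are placeholders rather than the paper's data.

```python
import numpy as np

def wiener_estimator(A, training_spectra, noise_var=1e-4):
    """Build a Wiener estimation matrix W such that s_hat = W @ r.

    A: (n_sensors, n_wavelengths) sensor spectral sensitivities.
    training_spectra: (n_samples, n_wavelengths) example illuminant spectra used
    to form the prior covariance. Standard Wiener estimation, shown as a generic
    sketch rather than the paper's exact implementation.
    """
    S = np.asarray(training_spectra, dtype=float)
    Cs = np.cov(S, rowvar=False)                        # prior covariance of spectra
    Cn = noise_var * np.eye(A.shape[0])                 # assumed sensor noise covariance
    return Cs @ A.T @ np.linalg.inv(A @ Cs @ A.T + Cn)

# Toy usage: 3 sensors sampling a 31-band spectrum (e.g., 400-700 nm in 10 nm steps).
rng = np.random.default_rng(0)
A = np.abs(rng.normal(size=(3, 31)))
train = np.abs(rng.normal(size=(50, 31)))
W = wiener_estimator(A, train)
r = A @ train[0]                                        # simulated responses to one spectrum
s_hat = W @ r                                           # estimated spectral power distribution
```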

In this paper, an investigation is reported into extending "color constancy" to obtain illuminant-invariant reflectance features from data in the near-ultraviolet (UV) and near-infrared (IR) wavelength regions. These features are obtained by extending a blackbody-model-based color constancy algorithm proposed by Ratnasingam and Collins [J. Opt.

The apparent color of an object within a scene depends on the spectrum of the light illuminating the object. However, recording an object's color in a way that is independent of the illuminant spectrum is important in many machine vision applications. In this paper, the performance of a blackbody-model-based color constancy algorithm that requires four sensors with different spectral responses is investigated under daylight illumination.

An algorithm is described to extract two features that represent the chromaticity of a surface and that are independent of both the intensity and correlated color temperature of the daylight illuminating a scene. For mathematical convenience this algorithm is derived using the assumptions that each photodetector responds to a single wavelength and that the spectrum of the illumination source can be represented by a blackbody spectrum. Neither of these assumptions will be valid in a real application.
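
The invariance argument can be written out under these idealizations. With a delta-function sensor at wavelength λ_i, a geometric intensity factor k, surface reflectance S(λ_i), and a Wien-approximation blackbody illuminant at temperature T, a weighted log difference cancels both k and T, as sketched below. This is a schematic of the argument, not a transcription of the paper's equations.

```latex
% Delta-function sensor response under the Wien approximation:
%   R_i = k \, S(\lambda_i) \, c_1 \lambda_i^{-5} e^{-c_2/(\lambda_i T)}
\[
\log R_i = \log k + \log S(\lambda_i) + \log\!\bigl(c_1 \lambda_i^{-5}\bigr) - \frac{c_2}{\lambda_i T}
\]
\[
F = \log R_2 - \alpha \log R_1 - (1-\alpha)\log R_3,
\qquad
\alpha = \frac{1/\lambda_2 - 1/\lambda_3}{1/\lambda_1 - 1/\lambda_3}
\]
% The weights (1, -\alpha, -(1-\alpha)) sum to zero, so the intensity term
% \log k cancels; with this choice of \alpha the -c_2/(\lambda_i T) terms
% cancel as well, leaving F a function of the reflectances S(\lambda_i) only.
```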
