Our visual environment is full of texture ("stuff" like cloth, bark, or gravel, as distinct from "things" like dresses, trees, or paths), and humans are adept at perceiving subtle variations in material properties. To investigate image features important for texture perception, we psychophysically compare a recent parametric model of texture appearance (the convolutional neural network [CNN] model), which uses the features encoded by a deep CNN (VGG-19), with two other models: the venerable Portilla and Simoncelli model and an extension of the CNN model in which the power spectrum is additionally matched. Observers discriminated model-generated textures from original natural textures in a spatial three-alternative oddity paradigm under two viewing conditions: when test patches were briefly presented to the near-periphery ("parafoveal") and when observers were able to make eye movements to all three patches ("inspection"). Under parafoveal viewing, observers were unable to discriminate 10 of 12 original images from CNN model images, and remarkably, the simpler Portilla and Simoncelli model performed slightly better than the CNN model (11 textures). Under foveal inspection, matching CNN features captured appearance substantially better than the Portilla and Simoncelli model (nine compared to four textures), and including the power spectrum improved appearance matching for two of the three remaining textures. None of the models we tested could produce indiscriminable images for one of the 12 textures under the inspection condition. While deep CNN (VGG-19) features can often be used to synthesize textures that humans cannot discriminate from natural textures, there is currently no uniformly best model for all textures and viewing conditions.
DOI: http://dx.doi.org/10.1167/17.12.5
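The "CNN model" compared above is defined by summary statistics of deep network activations. As an illustration only (not the authors' exact pipeline), the sketch below computes Gram matrices of VGG-19 feature maps in the spirit of Gatys-style texture synthesis; the layer indices, image size, and a recent torchvision API are assumptions.

```python
# Hedged sketch: Gram-matrix texture statistics from VGG-19 feature maps.
# Layer choice (conv1_1 ... conv5_1) and preprocessing are illustrative.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
TEXTURE_LAYERS = {0, 5, 10, 19, 28}   # conv1_1, conv2_1, conv3_1, conv4_1, conv5_1

def gram_matrices(image_path):
    """Return Gram matrices of VGG-19 feature maps at the chosen layers."""
    preprocess = T.Compose([
        T.Resize(256), T.CenterCrop(256), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    grams = {}
    with torch.no_grad():
        for idx, layer in enumerate(vgg):
            x = layer(x)
            if idx in TEXTURE_LAYERS:
                _, c, h, w = x.shape
                f = x.view(c, h * w)                 # channels x positions
                grams[idx] = (f @ f.t()) / (h * w)   # spatially averaged feature correlations
    return grams
```

Synthesis then iteratively adjusts a noise image until its Gram matrices match those of the target texture; the power-spectrum extension adds a Fourier-amplitude matching term on top of this objective.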
J Vis
April 2024
Perceptual Intelligence Lab, Industrial Design Engineering, Delft University of Technology, Delft, Netherlands.
Humans can rapidly identify materials, such as wood or leather, even within a complex visual scene. Given a single image, one can easily identify the underlying "stuff," even though a given material can have highly variable appearance; fabric comes in unlimited variations of shape, pattern, color, and smoothness, yet we have little trouble categorizing it as fabric. What visual cues do we use to determine material identity? Prior research suggests that simple "texture" features of an image, such as the power spectrum, capture information about material properties and identity.
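For concreteness, one such feature is the image's Fourier power spectrum. The hedged sketch below (plain NumPy, with rotational averaging chosen purely for illustration) shows how a compact spectral descriptor of a texture patch might be computed; the cited work may compute and use spectra differently.

```python
# Hedged sketch: rotationally averaged power spectrum of a grayscale patch.
import numpy as np

def radial_power_spectrum(image, n_bins=64):
    """Return the rotationally averaged Fourier power spectrum of a 2-D array."""
    f = np.fft.fftshift(np.fft.fft2(image - image.mean()))
    power = np.abs(f) ** 2

    h, w = image.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)              # radial spatial frequency per pixel
    bins = np.linspace(0, r.max(), n_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)

    # Average power within each radial-frequency band.
    band_power = np.bincount(idx, weights=power.ravel(), minlength=n_bins)
    band_count = np.maximum(np.bincount(idx, minlength=n_bins), 1)
    return band_power / band_count
```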
J Vis
December 2023
Psychology Department, Barnard College, Columbia University, New York, NY, USA.
Material depictions in artwork are useful tools for revealing image features that support material categorization. For example, artistic recipes for drawing specific materials make explicit the critical information leading to recognizable material properties (Di Cicco, Wijntjes, & Pont, 2020), and investigating the recognizability of material renderings as a function of their visual features supports conclusions about the vocabulary of material perception. Here, we examined how the recognition of materials from photographs and drawings was affected by the application of the Portilla-Simoncelli texture synthesis model.
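The full Portilla-Simoncelli model matches a large set of statistics, including steerable-pyramid magnitude and cross-band correlations. As a hedged illustration of only the simplest part of that statistic set, the sketch below computes marginal pixel moments and a local autocorrelation; it is not the complete model.

```python
# Hedged sketch: a small subset of Portilla-Simoncelli-style statistics
# (marginal pixel moments plus a local autocorrelation window).
import numpy as np
from scipy import stats

def marginal_and_autocorr_stats(image, max_lag=3):
    """Return marginal pixel statistics and a normalised central autocorrelation window."""
    x = image.astype(float)
    marginals = {
        "mean": x.mean(), "variance": x.var(),
        "skew": stats.skew(x.ravel()),
        "kurtosis": stats.kurtosis(x.ravel(), fisher=False),
        "min": x.min(), "max": x.max(),
    }

    # Circular autocorrelation via the FFT, cropped to a (2*max_lag+1)^2 window.
    f = np.fft.fft2(x - x.mean())
    ac = np.fft.fftshift(np.fft.ifft2(f * np.conj(f)).real) / x.size
    cy, cx = ac.shape[0] // 2, ac.shape[1] // 2
    window = ac[cy - max_lag:cy + max_lag + 1, cx - max_lag:cx + max_lag + 1]
    return marginals, window / ac[cy, cx]            # normalise by zero-lag variance
```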
Neural Netw
November 2023
The University of Electro-Communications, Chofu, Tokyo, Japan.
It is well understood that the performance of deep convolutional neural networks (DCNNs) in image recognition tasks is influenced not only by shape but also by texture information. Despite this, understanding the internal representations of DCNNs remains a challenging task. This study employs a simplified version of the Portilla-Simoncelli statistics, termed "minPS," to explore how texture information is represented in a pre-trained VGG network.
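One common way to ask whether such statistics are encoded in a network layer is a linear readout from its activations. The sketch below is a hedged example under assumed names: pooled conv4_1 activations from a standard torchvision VGG-19, ridge regression, and a placeholder per-image statistic matrix; it is not the authors' exact analysis.

```python
# Hedged sketch: cross-validated linear decoding of per-image texture statistics
# from pooled VGG-19 activations (conv4_1 chosen for illustration).
import numpy as np
import torch
from torchvision.models import vgg19, VGG19_Weights
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

features = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features.eval()
activations = {}

def save_pooled(name):
    def hook(module, inputs, output):
        # Average over space so every image yields a fixed-length channel vector.
        activations[name] = output.mean(dim=(2, 3)).squeeze(0).detach().numpy()
    return hook

features[19].register_forward_hook(save_pooled("conv4_1"))   # index 19 = conv4_1

def vgg_features(images):
    """Pooled conv4_1 activations for a list of normalised 3xHxW image tensors."""
    rows = []
    with torch.no_grad():
        for img in images:
            features(img.unsqueeze(0))
            rows.append(activations["conv4_1"])
    return np.stack(rows)

def decodability(images, texture_stats):
    """Cross-validated R^2 for predicting a statistic vector from VGG features."""
    return cross_val_score(Ridge(alpha=1.0), vgg_features(images), texture_stats,
                           cv=5, scoring="r2")
```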
J Neurosci
May 2023
Neuroscience Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213.
Midlevel features, such as contour and texture, provide a computational link between low- and high-level visual representations. Although the nature of midlevel representations in the brain is not fully understood, past work has suggested that a texture statistics model, called the P-S model (Portilla and Simoncelli, 2000), is a candidate for predicting neural responses in areas V1-V4 as well as human behavioral data. However, it is not currently known how well this model accounts for the responses of higher visual cortex to natural scene images.
Sci Rep
April 2023
Department of Life Sciences, The University of Tokyo, Tokyo, Japan.
Natural surfaces such as soil, grass, and skin usually involve far more complex and heterogeneous structures than the perfectly uniform surfaces assumed in studies on color and material perception. Despite this, we can easily perceive the representative color of these surfaces. Here, we investigated the visual mechanisms underlying the perception of representative surface color using 120 natural images of diverse materials and their statistically synthesized images.
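As an illustration of what a "representative color" statistic might look like, the hedged sketch below computes two candidate summaries (the mean and the modal chromaticity in CIELAB) for an RGB image; these are assumptions for exposition, not the measures tested in the study.

```python
# Hedged sketch: two candidate "representative color" summaries of an image.
import numpy as np
from skimage import color

def representative_colors(rgb_image, n_bins=16):
    """Mean and modal CIELAB color of a float RGB image with values in [0, 1]."""
    lab = color.rgb2lab(rgb_image).reshape(-1, 3)
    mean_lab = lab.mean(axis=0)

    # Mode: coarsely bin the a*/b* chromaticity plane, take the densest bin's mean color.
    hist, a_edges, b_edges = np.histogram2d(lab[:, 1], lab[:, 2], bins=n_bins)
    ai, bi = np.unravel_index(hist.argmax(), hist.shape)
    in_bin = ((lab[:, 1] >= a_edges[ai]) & (lab[:, 1] <= a_edges[ai + 1]) &
              (lab[:, 2] >= b_edges[bi]) & (lab[:, 2] <= b_edges[bi + 1]))
    mode_lab = lab[in_bin].mean(axis=0)
    return mean_lab, mode_lab
```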