Structural and view-specific representations for the categorization of three-dimensional objects.

Vision Res

Institute of Medical Psychology, University of Munich, Goethestrasse 31, 80336 München, Germany.

Published: November 2008

AI Article Synopsis

  • The text discusses the ongoing debate about whether object recognition relies more on structural representations (like the 3D shapes of objects) or view-specific representations (how objects look from different angles).
  • Researchers used a combination of priming and supervised category learning to investigate this topic.
  • Results suggest that structural representations can be learned under certain conditions, but when prior knowledge or image input is insufficient, the brain tends to rely on view-specific representations.

Article Abstract

It has been debated whether object recognition depends on structural or view-specific representations. This issue is revisited here using a paradigm of priming, supervised category learning, and generalization to novel viewpoints. Results show that structural representations can be learned for three-dimensional (3D) objects lacking generalized-cone components (geons). Metric relations between object parts are distinctive features under such conditions. Representations preserving 3D structure are learned provided prior knowledge of object shape and sufficient image input information is available; otherwise view-specific representations are generated. These findings indicate that structural and view-specific representations are related through shifts of representation induced by learning.
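To make the idea of "generalization to novel viewpoints" concrete, below is a minimal, purely illustrative Python sketch (not the authors' paradigm, stimuli, or data): a classifier is fit on descriptors from a subset of viewpoints and then scored on held-out viewpoints. All array sizes, names, and values are hypothetical placeholders.

```python
# Illustrative sketch only: hold out viewpoints to test view generalization.
# The descriptors here are random placeholders, not real image features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n_categories, n_views, n_features = 2, 12, 64
train_views = np.arange(0, n_views, 2)   # viewpoints seen during learning
novel_views = np.arange(1, n_views, 2)   # held-out ("novel") viewpoints

# Hypothetical image descriptors: one feature vector per (category, viewpoint).
X = rng.normal(size=(n_categories, n_views, n_features))
y = np.tile(np.arange(n_categories)[:, None], (1, n_views))

clf = LogisticRegression(max_iter=1000).fit(
    X[:, train_views].reshape(-1, n_features), y[:, train_views].ravel()
)
acc = clf.score(X[:, novel_views].reshape(-1, n_features), y[:, novel_views].ravel())
print(f"category accuracy on novel viewpoints: {acc:.2f}")  # ~chance for random data
```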

Source
http://dx.doi.org/10.1016/j.visres.2008.08.009

Publication Analysis

Top Keywords

view-specific representations (16)
structural view-specific (12)
three-dimensional objects (8)
representations (6)
structural (4)
representations categorization (4)
categorization three-dimensional (4)
objects debated (4)
debated object (4)
object recognition (4)

Similar Publications

AMCFCN: attentive multi-view contrastive fusion clustering net.

PeerJ Comput Sci

March 2024

College of Electronic and Information Engineering, Wuyi University, Jiangmen, Guangdong, China.

Advances in deep learning have propelled the evolution of multi-view clustering techniques, which strive to obtain a view-common representation from multi-view datasets. However, contemporary multi-view clustering faces two prominent challenges: view-specific representations carry no guarantee of suppressing noise, and the fusion process compromises view-specific representations, so efficient information cannot be captured from the multi-view data.


View symmetry has been suggested to be an important intermediate representation between view-specific and view-invariant representations of faces in the human brain. Here, we compared view-symmetry in humans and a deep convolutional neural network (DCNN) trained to recognise faces. First, we compared the output of the DCNN to head rotations in yaw (left-right), pitch (up-down) and roll (in-plane rotation).


Multi-View Integrative Approach For Imputing Short-Chain Fatty Acids and Identifying Key factors predicting Blood SCFA.

bioRxiv

September 2024

Tulane Center for Biomedical Informatics and Genomics, Deming Department of Medicine, School of Medicine, Tulane University, New Orleans, LA, USA.

Article Synopsis
  • Short-chain fatty acids (SCFAs) are key metabolites created by gut bacteria from dietary fiber, influencing overall body health but often studied with incomplete data due to research limitations.
  • A new method called MAE (Multi-task Multi-View Attentive Encoders) has been developed to better predict blood SCFA levels by analyzing gut microbiome data alongside dietary and host characteristics.
  • Tests on data from 964 and 171 subjects showed that MAE significantly outperforms older methods in predicting SCFAs and highlights the important roles of gut bacteria, diet, and individual traits in SCFA production.

Cross-view discrepancy-dependency network for volumetric medical image segmentation.

Med Image Anal

January 2025

School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, Guangdong, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China.

Limited data pose a crucial challenge for deep learning-based volumetric medical image segmentation, and many methods try to alleviate this issue by representing the volume through its subvolumes (i.e., multi-view slices).
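As a generic illustration of what "multi-view slices" means (not the method proposed in that paper), a 3D volume can be decomposed into its axial, coronal, and sagittal stacks of 2D slices; the array shape below is a hypothetical example.

```python
# Generic sketch: treat the three anatomical slice stacks of a 3D volume
# as separate 2D "views". Shapes and data are hypothetical placeholders.
import numpy as np

volume = np.random.rand(64, 128, 128)  # hypothetical (depth, height, width) scan

axial    = [volume[d, :, :] for d in range(volume.shape[0])]  # 64 slices of 128 x 128
coronal  = [volume[:, h, :] for h in range(volume.shape[1])]  # 128 slices of 64 x 128
sagittal = [volume[:, :, w] for w in range(volume.shape[2])]  # 128 slices of 64 x 128

print(len(axial), axial[0].shape)  # 64 (128, 128)
```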


Deep multiview clustering provides an efficient way to analyze data consisting of multiple modalities and features. Recently, autoencoder (AE)-based deep multiview clustering algorithms have attracted intensive attention owing to their capability for extracting inherent features. Nevertheless, most existing methods still face several problems.

