It is well known that motion facilitates the visual perception of solid object shape, particularly when surface texture or other identifiable features (e.g., corners) are present. Conventional models of structure-from-motion require texture or identifiable object features to recover 3-D structure. Is the facilitation of 3-D shape perception similar in magnitude when surface texture is absent? On any given trial in the current experiments, participants were presented with a single randomly selected solid object (a bell pepper or a randomly shaped "glaven") for 12 seconds and were required to indicate which of 12 (for bell peppers) or 8 (for glavens) simultaneously visible objects possessed the same shape. The initial object's shape was defined either by boundary contours alone (i.e., presented as a silhouette), by specular highlights alone, by specular highlights combined with boundary contours, or by texture. In addition, there was a haptic condition, in which participants explored the initial object with both hands (but could not see it) for 12 seconds and then performed the same shape-matching task used in the visual conditions. For both the visual and haptic conditions, motion (rotation in depth or active object manipulation) was present in half of the trials and absent in the remaining trials. The effect of motion was quantitatively similar across all of the visual and haptic conditions: for example, performance in Experiment 1 was 93.5 percent higher in the motion or active haptic manipulation conditions than in the static conditions. The current results demonstrate that deforming specular highlights or boundary contours facilitate 3-D shape perception as much as the motion of objects that possess texture. The results also indicate that the improvement with motion for haptics is similar in magnitude to that for vision.
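The "93.5 percent higher" figure is most naturally read as a relative improvement over the static baseline rather than a difference in percentage points. With purely hypothetical accuracies of 40.0% (static) and 77.4% (motion), for example, the corresponding calculation would be

$$\frac{p_{\text{motion}} - p_{\text{static}}}{p_{\text{static}}} \times 100\% = \frac{77.4 - 40.0}{40.0} \times 100\% \approx 93.5\%.$$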
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4749382 | PMC
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0149058 | PLOS
Sensors (Basel), December 2024. School of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, China.
With the advancement of service robot technology, the demand for higher boundary precision in indoor semantic segmentation has increased. Traditional methods of extracting Euclidean features using point cloud and voxel data often neglect geodesic information, reducing boundary accuracy for adjacent objects and consuming significant computational resources. This study proposes a novel network, the Euclidean-geodesic network (EGNet), which uses point cloud-voxel-mesh data to characterize detail, contour, and geodesic features, respectively.
Entropy (Basel), December 2024. Shandong Artificial Intelligence Institute, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250014, China.
Image segmentation is a crucial task in artificial intelligence fields such as computer vision and medical imaging. While convolutional neural networks (CNNs) have achieved notable success by learning representative features from large datasets, they often lack geometric priors and global object information, limiting their accuracy in complex scenarios. Variational methods like active contours provide geometric priors and theoretical interpretability but require manual initialization and are sensitive to hyper-parameters.
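For readers unfamiliar with the variational family mentioned here, a minimal scikit-image sketch of a morphological Chan-Vese active contour (an illustration only, not code from the cited paper) makes the initialization and hyper-parameter points concrete; the sample image, iteration count, and weights below are arbitrary choices.

```python
# Illustrative only: a classical level-set active contour, the kind of
# variational method the text contrasts with CNNs. Note the two pain points
# mentioned above: an explicit initialization (init_level_set) and hand-tuned
# hyper-parameters (smoothing, lambda1, lambda2).
from skimage import data, img_as_float
from skimage.segmentation import morphological_chan_vese

image = img_as_float(data.camera())        # any grayscale image scaled to [0, 1]

mask = morphological_chan_vese(
    image,
    80,                                    # number of level-set iterations
    init_level_set="checkerboard",         # heuristic/manual initialization
    smoothing=2,                           # boundary-smoothness weight
    lambda1=1,                             # weight of the inside-region fit
    lambda2=1,                             # weight of the outside-region fit
)

print(mask.shape, mask.mean())             # binary mask; fraction labeled foreground
```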
Sci Rep, January 2025. School of Geodesy and Geomatics, Wuhan University, Wuhan, 430079, China.

Sci Rep, January 2025. The University of New South Wales, Sydney, Australia.
The detection and segmentation of teeth from X-rays aid healthcare professionals in accurately determining the shape and growth trends of teeth. However, small dataset sizes due to patient privacy, high noise, and blurred boundaries between periodontal tissue and teeth pose challenges to the models' transportability and generalizability, making them prone to overfitting. To address these issues, we propose a novel model, named the Grouped Attention and Cross-Layer Fusion Network (GCNet).
J Stat Theory Pract, September 2024. Statistics Online Computational Resource, University of Michigan, 426 North Ingalls Str, Ann Arbor, Michigan 48109-2003.
In this paper, we propose a novel deep neural network (DNN) architecture with fractal structure and attention blocks. The new method is tested to identify and segment 2D and 3D brain tumor masks in normal and pathological neuroimaging data. To circumvent the problem of limited 3D volumetric datasets with raw and ground truth tumor masks, we utilized data augmentation using affine transformations to significantly expand the training data prior to estimating the network model parameters.
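As a concrete illustration of the affine augmentation mentioned here (a hypothetical sketch, not the authors' pipeline; the volume sizes, angle range, and scale range are invented), a 3-D volume and its ground-truth mask can be warped with the same random transform, for example with SciPy:

```python
# Sketch of affine-transform data augmentation for a 3-D volume and its
# ground-truth mask: the same random rotation and isotropic scaling are
# applied to both, with nearest-neighbor interpolation for the labels.
import numpy as np
from scipy.ndimage import affine_transform

def random_affine_3d(volume, mask, rng, max_angle_deg=10.0, scale_range=(0.9, 1.1)):
    """Apply one random rotation (in the height-width plane) plus isotropic
    scaling to an image volume and its segmentation mask."""
    angle = np.deg2rad(rng.uniform(-max_angle_deg, max_angle_deg))
    scale = rng.uniform(*scale_range)
    c, s = np.cos(angle), np.sin(angle)
    # Output-to-input coordinate map: rotation about the depth axis, scaled isotropically.
    matrix = np.array([[1.0, 0.0, 0.0],
                       [0.0,   c,  -s],
                       [0.0,   s,   c]]) * scale
    center = (np.array(volume.shape) - 1) / 2.0
    offset = center - matrix @ center                       # keep the transform centered
    vol_aug = affine_transform(volume, matrix, offset=offset, order=1)
    mask_aug = affine_transform(mask, matrix, offset=offset, order=0)  # preserve label values
    return vol_aug, mask_aug

rng = np.random.default_rng(0)
vol = rng.random((32, 64, 64))              # toy volume (depth, height, width)
seg = (vol > 0.8).astype(np.uint8)          # toy binary mask
vol_aug, seg_aug = random_affine_3d(vol, seg, rng)
print(vol_aug.shape, np.unique(seg_aug))
```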