3D object reconstruction: A comprehensive view-dependent dataset.

Data Brief

Institute of Robotics and Machine Intelligence, Poznan University of Technology, Pl. Marii Sklodowskiej-Curie 5, 60-965 Poznan, Poland.

Published: August 2024

The dataset contains RGB, depth, and segmentation images of the scenes, together with the corresponding camera poses, and can be used to build full 3D models of the scenes and to develop methods that reconstruct objects from a single RGB-D camera view. The first subset was collected in a custom simulator that loads a random graspable object and a random table from the ShapeNet dataset. The graspable object is placed above the table in a random position, and the scene is then simulated with the PhysX engine to ensure that it is physically plausible. The simulator captures an image of the scene from a random camera pose and then takes a second image from a pose on the opposite side of the scene. The second subset was created with an Azure Kinect camera and a set of real objects placed on an ArUco board, which was used to estimate the camera pose.
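The opposite-side capture described above can be sketched as mirroring the first camera position through the scene center and aiming the mirrored camera back at it. This is a minimal illustration only, assuming a look-at camera convention, a z-up world, and a known scene center; `look_at` and `opposite_view` are hypothetical names, not the simulator's actual API:

```python
import numpy as np

def look_at(position, target, up=np.array([0.0, 0.0, 1.0])):
    """Build a 4x4 camera-to-world pose looking from `position` toward `target`.

    Assumes `position` is not directly above/below `target` along `up`,
    otherwise the right vector degenerates.
    """
    forward = target - position
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, forward)
    pose = np.eye(4)
    pose[:3, 0] = right      # camera x-axis
    pose[:3, 1] = true_up    # camera y-axis
    pose[:3, 2] = forward    # camera z-axis (viewing direction)
    pose[:3, 3] = position   # camera position in world frame
    return pose

def opposite_view(camera_position, scene_center):
    """Mirror the camera position through the scene center and aim it back."""
    mirrored = 2.0 * scene_center - camera_position
    return look_at(mirrored, scene_center)

# Example: first camera at (1, 0, 0.5), scene centered at (0, 0, 0.5).
first_pos = np.array([1.0, 0.0, 0.5])
center = np.array([0.0, 0.0, 0.5])
second_pose = opposite_view(first_pos, center)
```

The mirrored pose lands at `(-1, 0, 0.5)` with its viewing direction pointing back through the scene center, which is what gives the two views complementary coverage of the object.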


Source

PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11222922
DOI: http://dx.doi.org/10.1016/j.dib.2024.110569


