With the recent discoveries of water ice and lava tubes on the Moon and Mars, along with the development of in-situ resource utilization (ISRU) technology, planetary exploration has increasingly focused on rover- and lander-based surface missions aimed at base construction for long-term human exploration and habitation. However, 3D terrain maps, which are mostly derived from orbiter imagery, lack sufficient resolution for construction purposes. In this regard, this paper introduces a visual simultaneous localization and mapping (SLAM)-based robotic mapping method employing a stereo camera system on a rover. The method uses S-PTAM as its base framework and combines it with disparity maps from a self-supervised deep-learning network to enhance mapping capability in the homogeneous, unstructured environments typical of planetary terrain. The overall performance of the proposed method was evaluated on an emulated planetary terrain, and the results demonstrate its potential.
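As a rough illustration of the stereo pipeline the abstract describes (a disparity map feeding a keyframe SLAM system), the sketch below shows the standard disparity-to-depth conversion used in stereo mapping. This is not the authors' S-PTAM integration; the calibration values (focal length, baseline) are hypothetical placeholders.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, min_disp=1e-3):
    """Convert a stereo disparity map (pixels) to metric depth via Z = f * B / d.

    focal_px and baseline_m are assumed calibration values; pixels whose
    disparity falls below min_disp are marked invalid (NaN).
    """
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(disparity, np.nan)
    valid = disparity > min_disp
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Hypothetical calibration: 700 px focal length, 0.12 m stereo baseline.
disp = np.array([[35.0, 14.0],
                 [7.0,  0.0]])
depth = disparity_to_depth(disp, focal_px=700.0, baseline_m=0.12)
# f * B = 84.0, so disparities 35, 14, 7 map to 2.4 m, 6.0 m, 12.0 m;
# the zero-disparity pixel is left as NaN.
```

In a full pipeline, the depth image would be back-projected through the camera intrinsics into a point cloud and fused into the map at each keyframe.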

Source:
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8621460 (PMC)
http://dx.doi.org/10.3390/s21227715 (DOI Listing)

Similar Publications

Brain-computer interface (BCI)-based robots combine BCI and robotics technology to translate the brain's intentions into robot control, which not only opens up a new avenue for the daily care of disabled individuals but also provides an additional means of communication for able-bodied people. However, existing systems still fall short in several respects, such as the friendliness of the human-computer interaction and interaction efficiency. This study developed a humanoid robot control system by integrating an augmented reality (AR)-based BCI with a simultaneous localization and mapping (SLAM)-based scheme for autonomous indoor navigation.


Adaptive FPGA-Based Accelerators for Human-Robot Interaction in Indoor Environments.

Sensors (Basel)

October 2024

Department of Electronics and Communications Engineering, B. V. Raju Institute of Technology, Medak, Narsapur 502313, Telangana, India.

This study addresses the challenges of human-robot interaction in real-time environments using adaptive field-programmable gate array (FPGA)-based accelerators. Predicting human posture in confined indoor environments is a significant challenge for service robots. The proposed approach works on two levels: estimating the human's location, and determining the robot's intention to serve based on that location at both static and adaptive positions.


A bronchoscopic navigation method based on neural radiance fields.

Int J Comput Assist Radiol Surg

October 2024

State Key Laboratory of Digital Medical Engineering, Jiangsu Key Lab of Robot Sensing and Control, School of Instrument Science and Engineering, Southeast University, Nanjing, China.

Purpose: We introduce a novel approach for bronchoscopic navigation that leverages neural radiance fields (NeRF) to passively locate the endoscope solely from bronchoscopic images. This approach aims to overcome the limitations and challenges of current bronchoscopic navigation tools that rely on external infrastructures or require active adjustment of the bronchoscope.

Methods: To address the challenges, we leverage NeRF for bronchoscopic navigation, enabling passive endoscope localization from bronchoscopic images.


Simultaneous localization and mapping (SLAM) is one of the key technologies for the autonomous navigation of mobile robots, using environmental features to determine a robot's position and build a map of its surroundings. Visual SLAM algorithms currently yield precise and dependable results in static environments, and many algorithms opt to filter out feature points in dynamic regions. However, as the number of dynamic objects within the camera's view increases, this filtering approach can result in reduced accuracy or tracking failures.
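The filtering strategy described above, discarding feature points that fall inside detected dynamic regions, can be sketched as follows. The mask source (in practice a segmentation or detection network) and the keypoint format are assumptions for illustration, not the cited paper's implementation.

```python
import numpy as np

def filter_dynamic_features(keypoints, dynamic_mask):
    """Keep only keypoints lying outside dynamic regions.

    keypoints:    (N, 2) array of (x, y) pixel coordinates.
    dynamic_mask: boolean image, True wherever a dynamic object was
                  detected (e.g. by a hypothetical segmentation network).
    """
    keypoints = np.asarray(keypoints, dtype=int)
    xs, ys = keypoints[:, 0], keypoints[:, 1]
    static = ~dynamic_mask[ys, xs]  # look up the mask at each keypoint
    return keypoints[static]

# Toy example: a 4x4 frame whose right half is covered by a moving object.
mask = np.zeros((4, 4), dtype=bool)
mask[:, 2:] = True
pts = np.array([[0, 0], [3, 1], [1, 3], [2, 2]])
kept = filter_dynamic_features(pts, mask)
# Keypoints at x >= 2 fall on the dynamic object and are dropped,
# leaving [[0, 0], [1, 3]].
```

As the abstract notes, when dynamic objects dominate the view this masking leaves too few static keypoints for reliable pose estimation, which is exactly the failure mode the cited work targets.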


Environmental mapping and robot navigation are the basis for realizing robot automation in modern agricultural production. This study proposes a new autonomous mapping and navigation method for gardening robots. First, a new LiDAR SLAM-based semantic mapping algorithm is proposed to enable the robots to extract structural information from point clouds and generate roadmaps from it.
