Purpose: Surgical robots can markedly improve the accuracy and safety of surgical procedures. Current optically navigated oral surgical robots are typically built on binocular vision positioning systems, which are susceptible to occlusion, limited workspace, and ambient light interference. The purpose of this study was therefore to develop a lightweight, monocular-vision-based robotic platform for oral surgery that enhances the precision and efficiency of surgical procedures.
Methods: A monocular optical positioning system (MOPS) was applied to oral surgical robots, and a semi-autonomous robotic platform based on monocular vision was developed. A series of in vitro experiments simulating dental implant procedures was designed to evaluate the performance of the optical positioning system and to assess the accuracy of the robotic system. A singular-configuration detection and avoidance test, a collision detection and handling test, and a drilling test under slight movement were conducted to validate the safety of the robotic system.
Results: The position error and rotation error of the MOPS were 0.0906 ± 0.0762 mm and 0.0158 ± 0.0069 degrees, respectively. The attitude angles of the robotic arm computed from the forward and inverse kinematic solutions were accurate. The robot's surgical calibration point exhibited an average error of 0.42 mm, with a maximum error of 0.57 mm. The robotic system also effectively avoided singular configurations and remained safe in the presence of minor patient movements and collisions during the in vitro experiments.
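As an illustration of how tracking errors of this kind are typically computed (a minimal sketch, not the authors' evaluation code; all values are synthetic), the position error can be taken as the Euclidean distance between measured and reference translations, and the rotation error as the angle of the relative rotation between measured and reference orientations:

```python
# Minimal sketch (not the authors' code): pose error between a measured and a
# reference rigid transform, as typically reported for optical tracking systems.
import numpy as np

def pose_error(R_meas, t_meas, R_ref, t_ref):
    """Return (position error in mm, rotation error in degrees)."""
    # Position error: Euclidean distance between the two translations.
    pos_err = np.linalg.norm(t_meas - t_ref)
    # Rotation error: angle of the relative rotation R_ref^T @ R_meas.
    R_rel = R_ref.T @ R_meas
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    rot_err = np.degrees(np.arccos(cos_theta))
    return pos_err, rot_err

# Example with a small synthetic perturbation (illustrative values only).
R_ref, t_ref = np.eye(3), np.array([10.0, 0.0, 0.0])
angle = np.radians(0.02)
R_meas = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_meas = t_ref + np.array([0.05, -0.03, 0.02])
print(pose_error(R_meas, t_meas, R_ref, t_ref))
```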
Conclusion: The results of this in vitro study demonstrate that the accuracy of the MOPS meets clinical requirements, making it a promising alternative in the field of oral surgical robots. Further studies are planned to make the monocular-vision oral surgical robot suitable for clinical application.
DOI: http://dx.doi.org/10.1007/s11548-024-03161-8
Invest Ophthalmol Vis Sci
January 2025
Department of Surgical Sciences, Eye Clinic Section, University of Turin, Turin, Italy.
Purpose: This study aimed to comprehensively assess visual performance in eyes with idiopathic epiretinal membrane (iERM). Additionally, it sought to explore the associations between optical coherence tomography (OCT) imaging biomarkers and visual performance in patients with iERM.
Methods: In this prospective, non-interventional study, 57 participants with treatment-naïve iERM were enrolled at the University of Turin between September 2023 and March 2024.
Sensors (Basel)
December 2024
Institute of Computer and Communication Engineering, Department of Electrical Engineering, National Cheng Kung University, Tainan 701, Taiwan.
Precision depth estimation plays a key role in many applications, including 3D scene reconstruction, virtual reality, autonomous driving, and human-computer interaction. Thanks to recent advances in deep learning, monocular depth estimation, with its simplicity, has surpassed traditional stereo camera systems, opening new possibilities in 3D sensing. In this paper, using a single camera, we propose an end-to-end supervised monocular depth estimation autoencoder, consisting of an encoder that mixes a convolutional neural network with vision transformers and an effective adaptive fusion decoder, to obtain high-precision depth maps.
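The paper's architecture is not detailed in this abstract; the toy sketch below only illustrates the general idea of a hybrid CNN/transformer encoder paired with an adaptive (gated) fusion decoder. All module names, sizes, and the gating scheme are assumptions for illustration:

```python
# Toy sketch (assumptions throughout, not the paper's architecture): a monocular
# depth autoencoder whose encoder mixes a small CNN with a transformer layer and
# whose decoder fuses the two feature streams with a learned (adaptive) gate.
import torch
import torch.nn as nn

class HybridEncoder(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        # CNN branch: local features, downsampled 4x.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Transformer branch: global context over the downsampled tokens.
        self.attn = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)

    def forward(self, x):
        f = self.cnn(x)                          # (B, C, H/4, W/4)
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)    # (B, H*W/16, C)
        g = self.attn(tokens).transpose(1, 2).reshape(b, c, h, w)
        return f, g

class AdaptiveFusionDecoder(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        # Pixel-wise gate deciding how much local vs. global signal to keep.
        self.gate = nn.Sequential(nn.Conv2d(2 * dim, 1, 1), nn.Sigmoid())
        self.up = nn.Sequential(
            nn.ConvTranspose2d(dim, dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(dim, 1, 4, stride=2, padding=1),   # depth map
        )

    def forward(self, f, g):
        a = self.gate(torch.cat([f, g], dim=1))
        return self.up(a * f + (1 - a) * g)

class MonoDepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc, self.dec = HybridEncoder(), AdaptiveFusionDecoder()

    def forward(self, x):
        return self.dec(*self.enc(x))

# Supervised training would compare this prediction to a ground-truth depth map.
net = MonoDepthNet()
depth = net(torch.randn(1, 3, 128, 128))   # -> (1, 1, 128, 128)
```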
Otolaryngol Head Neck Surg
January 2025
Department of Otolaryngology-Head and Neck Surgery, University of Washington, Seattle, Washington, USA.
Objective: To validate the use of neural radiance fields (NeRF), a state-of-the-art computer vision technique, for rapid, high-fidelity 3-dimensional (3D) reconstruction in endoscopic sinus surgery (ESS).
Study Design: An experimental cadaveric pilot study.
Setting: Academic medical center.
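For background, NeRF reconstructs a scene by optimizing a volumetric radiance field and rendering it with alpha compositing along camera rays. A minimal sketch of that rendering rule is shown below (synthetic densities and colors; not the study's reconstruction pipeline):

```python
# Background sketch of the volume-rendering rule at the core of NeRF (synthetic
# densities/colors; not the study's pipeline). Each ray sample contributes a color
# weighted by its opacity and by the transmittance accumulated before it.
import numpy as np

def render_ray(sigmas, colors, deltas):
    """Composite per-sample densities (sigmas), RGB colors, and segment lengths
    (deltas) along one camera ray into a single pixel color."""
    alphas = 1.0 - np.exp(-sigmas * deltas)                          # segment opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))   # transmittance
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0), weights

# Four samples along one ray: mostly empty space, then a surface hit.
sigmas = np.array([0.0, 0.1, 5.0, 5.0])
colors = np.array([[0.0, 0.0, 0.0], [0.2, 0.2, 0.2],
                   [0.8, 0.3, 0.3], [0.8, 0.3, 0.3]])
deltas = np.full(4, 0.25)
pixel, w = render_ray(sigmas, colors, deltas)
```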
Behav Res Methods
January 2025
CIMeC, Center for Mind/Brain Sciences, The University of Trento, Trento, Italy.
Sighting dominance is an important behavioral property that has been difficult to measure quantitatively with high precision. We developed a measurement method, grounded in a two-camera model, that meets these requirements. Using a simple alignment task, this method quantifies sighting ocular dominance during binocular viewing, identifying each eye's relative contribution to binocular vision.
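The abstract does not give the model's equations; one plausible geometric reading of a two-camera (two-eye) model is sketched below: the point on the interocular axis that best explains the observer's alignment of a near marker with a far target yields a weight between 0 (left eye) and 1 (right eye). The function name, coordinate convention, and example values are assumptions for illustration:

```python
# Geometric sketch (one possible reading of a two-camera model; not the authors'
# published method): the observer places a near marker so it appears aligned with
# a far target, and the point on the interocular axis closest to that alignment
# line gives each eye's relative weight (0 = left eye, 1 = right eye).
import numpy as np

def dominance_weight(eye_left, eye_right, near_marker, far_target):
    d1 = eye_right - eye_left          # direction along the interocular axis
    d2 = far_target - near_marker      # direction of the alignment line
    r = near_marker - eye_left
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ r, d2 @ r
    denom = a * c - b * b              # ~0 only if the two lines are parallel
    w = (d * c - b * e) / denom        # parameter of the closest point on L->R
    return float(np.clip(w, 0.0, 1.0))

# Example: eyes 6 cm apart, marker placed almost directly in front of the right eye.
L, R = np.array([-0.03, 0.0, 0.0]), np.array([0.03, 0.0, 0.0])
far  = np.array([0.028, 0.0, 2.0])
near = np.array([0.029, 0.0, 0.4])
print(dominance_weight(L, R, near, far))   # close to 1.0 -> right-eye sighting
```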
Psych J
January 2025
Department of Psychology, Suzhou University of Science and Technology, Suzhou, China.
Visual attention is intrinsically rhythmic and oscillates with the discrete sampling of either single or multiple objects. Recent studies have found that the early visual cortex (V1/V2) modulates attentional rhythms. Both monocular and binocular cells are present in the early visual cortex, which acts as a relay stage where the monocular visual pathway is transformed into the binocular visual pathway.