Development of a surgical navigation system based on augmented reality using an optical see-through head-mounted display.

J Biomed Inform

Faculty of Computer Science and Biomedical Engineering, Institute for Computer Graphics and Vision, Graz University of Technology, Graz, Austria.

Published: June 2015

Surgical navigation systems have developed considerably over the past decades to minimize surgical risk and improve precision. Augmented Reality (AR)-based surgical navigation is now a promising technology for clinical applications. An AR system mixes virtual content with the real environment, offering users real-time, high-quality visualization of a wide variety of information (Moussa et al., 2012) [1]. For example, virtual anatomical structures such as soft tissues, blood vessels, and nerves can be overlaid on the real-world scene in real time. In this study, an AR-based surgical navigation system (AR-SNS) was developed using an optical see-through head-mounted display (HMD), aiming to improve the safety and reliability of surgery. After instrument calibration, patient registration, and HMD calibration, the 3D virtual models of critical anatomical structures shown in the HMD are aligned with the patient's actual anatomy during intra-operative motion tracking. An accuracy-verification experiment showed mean distance and angular errors of 0.809 ± 0.05 mm and 1.038° ± 0.05°, respectively, which is sufficient to meet clinical requirements.
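The patient-registration step described in the abstract (aligning virtual anatomy with the patient's tracked anatomy) is commonly solved as a least-squares rigid transform between paired fiducial points. A minimal sketch of that idea, assuming paired fiducials and the standard Kabsch/Umeyama SVD solution, which is a common choice for such systems but not necessarily the authors' exact method:

```python
import numpy as np

def rigid_registration(src, dst):
    """Estimate rotation R and translation t that best map src points
    onto dst points in the least-squares sense (Kabsch/Umeyama, via SVD)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Example: recover a known 30-degree rotation and translation
# from six noiseless fiducial points.
rng = np.random.default_rng(0)
fiducials = rng.random((6, 3))
angle = np.deg2rad(30.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([10.0, -5.0, 2.0])
measured = fiducials @ R_true.T + t_true

R, t = rigid_registration(fiducials, measured)
# For noiseless data the true transform is recovered exactly.
```

With real tracker data the fiducials are noisy, and the residual distances after applying (R, t) give exactly the kind of mean distance error (e.g. the reported 0.809 mm) used to verify such a system.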


Source: http://dx.doi.org/10.1016/j.jbi.2015.04.003

Publication Analysis

Top Keywords

surgical navigation (16); navigation system (12); head-mounted display (12); augmented reality (8); optical see-through (8); ar-based surgical (8); anatomical structures (8); real-world scenario (8); system (5); development surgical (4)

Similar Publications

Wearable augmented reality in neurosurgery offers significant advantages by enabling the visualization of navigation information directly on the patient, seamlessly integrating virtual data with the real surgical field. This ergonomic approach can facilitate a more intuitive understanding of spatial relationships and guidance cues, potentially reducing cognitive load and enhancing the accuracy of surgical gestures by aligning critical information with the actual anatomy in real time. This study evaluates the benefits of a novel AR platform, VOSTARS, by comparing its targeting accuracy to that of the gold-standard electromagnetic (EM) navigation system, Medtronic StealthStation S7.


Background: The appropriateness of ablation for liver cancer patients meeting the Milan criteria remains controversial.

Purpose: This study aims to evaluate the long-term outcomes of MR-guided thermal ablation for HCC patients meeting the Milan criteria and develop a nomogram for predicting survival rates.

Methods: A retrospective analysis was conducted from January 2009 to December 2021 at a single institution.


Background: High morbidity and mortality make pancreaticoduodenectomy (PD) one of the most complicated surgical procedures. This meta-analysis aimed to compare the outcomes of robotic pancreaticoduodenectomy (RPD) versus open pancreaticoduodenectomy (OPD).

Method: A comprehensive literature search of PubMed, Cochrane Central, and Google Scholar was conducted from inception to November 2024.


The impact of three-dimensional (3D) dose delivery accuracy of C-arm linacs on the planning target volume (PTV) margin was evaluated for non-coplanar intracranial stereotactic radiosurgery (SRS). A multi-institutional 3D starshot test using beams from seven directions was conducted at 22 clinics using Varian and Elekta linacs with X-ray CT-based polymer gel dosimeters. Variability in dose delivery accuracy was observed, with the distance between the imaging isocenter and each beam exceeding 1 mm at one institution for Varian and nine institutions for Elekta.


Adapting a style-based generative adversarial network to create images depicting cleft lip deformity.

Sci Rep

January 2025

Division of Plastic, Craniofacial and Hand Surgery, Sidra Medicine, and Weill Cornell Medical College, C1-121, Al Gharrafa St, Ar Rayyan, Doha, Qatar.

Training a machine learning system to evaluate any type of facial deformity is impeded by the scarcity of large datasets of high-quality, ethics board-approved patient images. We have built a deep learning-based cleft lip generator called CleftGAN designed to produce an almost unlimited number of high-fidelity facsimiles of cleft lip facial images with wide variation. A transfer learning protocol testing different versions of StyleGAN as the base model was undertaken.

