Purpose: Patient-to-image registration is a preliminary step required in surgical navigation based on preoperative images. Conventional methods rely on human intervention and fiducial markers, which are time-consuming and introduce potential errors. We aimed to develop a fully automatic 2D registration system for augmented reality in ear surgery.

Methods: CT-scans and corresponding oto-endoscopic videos were collected from 41 patients (58 ears) undergoing ear examination (vestibular schwannoma before surgery, profound hearing loss requiring cochlear implantation, suspected perilymphatic fistula, or the contralateral ear in cases of unilateral chronic otitis media). Two to four images were selected from each case. For the training phase, data from patients (75% of the dataset) and 11 cadaveric specimens were used. Tympanic membranes and malleus handles were contoured on both video images and CT-scans by expert surgeons. The algorithm used a U-Net architecture to detect the contours of the tympanic membrane and the malleus on both preoperative CT-scans and endoscopic video frames. The contours were then processed and registered with an iterative closest point (ICP) algorithm. Validation was performed on 4 cases and testing on 6 cases. Registration error was measured by overlaying the two images and computing the average and Hausdorff distances.
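The registration step described above pairs U-Net-detected contours with an iterative closest point alignment. The sketch below is a minimal, illustrative 2D rigid ICP (brute-force nearest-neighbour correspondences plus a Kabsch/SVD transform fit), not the authors' implementation; the function name and all parameters are hypothetical.

```python
import numpy as np

def icp_2d(source, target, iters=50, tol=1e-6):
    """Rigidly align a source contour (N, 2) to a target contour (M, 2).

    Illustrative sketch only: the published method's contour processing
    and ICP variant are not specified in this abstract.
    """
    src = source.copy()
    R_total, t_total = np.eye(2), np.zeros(2)
    prev_err = np.inf
    for _ in range(iters):
        # brute-force nearest-neighbour correspondences
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        nn = target[d.argmin(axis=1)]
        # best rigid transform for these correspondences (Kabsch/SVD)
        mu_s, mu_t = src.mean(axis=0), nn.mean(axis=0)
        H = (src - mu_s).T @ (nn - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:  # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        # accumulate the overall transform: x' = R x + t applied last
        R_total, t_total = R @ R_total, R @ t_total + t
        err = d.min(axis=1).mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total, src
```

In practice the two contour sets would come from the CT reconstruction and the video frame, scaled to a common millimetre frame before alignment.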

Results: The proposed registration method yielded a precision compatible with ear surgery, with a 2D mean overlay error of mm for the incus and mm for the round window. The average Hausdorff distance for these two targets was mm and mm, respectively. An outlier case with higher errors (2.3 mm and 1.5 mm average Hausdorff distance for the incus and round window, respectively) was observed, related to a high discrepancy between the projection angle of the reconstructed CT-scan and the video image. The maximum duration of the overall process was 18 s.
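The overlay errors above are distances between corresponding structures in the two registered images. A minimal sketch of the two kinds of metric mentioned, assuming the common symmetric definitions (average surface distance and maximum Hausdorff distance) over 2D point sets; the exact definitions used in the study are not given in this abstract:

```python
import numpy as np

def overlay_errors(a, b):
    """Mean and Hausdorff distances between two 2D point sets
    a (N, 2) and b (M, 2), assumed to be in the same mm-scaled frame.

    Illustrative assumption: 'mean' is the symmetric average of
    point-to-nearest-point distances; 'Hausdorff' is the classic max.
    """
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    d_ab = d.min(axis=1)  # each point of a to its closest point of b
    d_ba = d.min(axis=0)  # each point of b to its closest point of a
    mean_err = (d_ab.mean() + d_ba.mean()) / 2
    hausdorff = max(d_ab.max(), d_ba.max())
    return mean_err, hausdorff
```

The Hausdorff distance is sensitive to single outlier points, which is why reporting it alongside the mean error, as done here, gives a fuller picture of worst-case misalignment.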

Conclusions: A fully automatic 2D registration method based on a convolutional neural network and applied to ear surgery was developed. The method did not rely on external fiducial markers or human intervention for landmark recognition. The method was fast, and its precision was compatible with ear surgery.


Source: http://dx.doi.org/10.1007/s00405-023-08403-0 (DOI)


