Action segmentation is a challenging task in high-level process analysis, typically performed on video or kinematic data obtained from various sensors. This work presents two contributions related to action segmentation on kinematic data. Firstly, we introduce two versions of Multi-Stage Temporal Convolutional Recurrent Networks (MS-TCRNet), specifically designed for kinematic data.
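The abstract does not detail the MS-TCRNet architecture, but the multi-stage idea it names (each stage refines the previous stage's frame-wise predictions over time) can be illustrated with a deliberately simplified, non-learned sketch. The dilated neighbourhood averaging below is a hypothetical stand-in for the learned dilated convolutions and recurrent layers of the actual model:

```python
# Minimal pure-Python sketch of multi-stage temporal refinement for
# frame-wise action segmentation on 1D kinematic data. Hypothetical
# illustration only: the real MS-TCRNet uses learned dilated temporal
# convolutions and recurrent layers, not this fixed averaging.

def dilated_smooth(scores, dilation=2):
    """One 'stage': average each frame's score with its dilated neighbours."""
    n = len(scores)
    out = []
    for t in range(n):
        left = scores[max(0, t - dilation)]
        right = scores[min(n - 1, t + dilation)]
        out.append((left + scores[t] + right) / 3.0)
    return out

def multi_stage(scores, stages=3):
    """Stack stages so each one refines the previous stage's output."""
    for _ in range(stages):
        scores = dilated_smooth(scores)
    return scores

# A noisy frame-wise score for one action class: refinement damps the
# isolated spike at t=3, giving a temporally smoother segmentation.
raw = [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0]
refined = multi_stage(raw)
```

Stacking stages is the key design choice: later stages see increasingly smoothed inputs, which suppresses over-segmentation errors that a single frame-wise classifier tends to produce.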
Int J Comput Assist Radiol Surg
July 2023
Purpose: This research aims to facilitate the use of state-of-the-art computer vision algorithms for the automated training of surgeons and the analysis of surgical footage. By estimating 2D hand poses, we model the movement of the practitioner's hands and their interaction with surgical instruments, and study their potential benefit for surgical training.
Methods: We leverage pre-trained models on a publicly available hands dataset to create our own in-house dataset of 100 open surgery simulation videos with 2D hand poses.
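Once per-frame 2D hand poses are available, hand movement can be modelled with simple kinematic signals derived from the keypoints. The sketch below assumes a conventional keypoint layout with the wrist at index 0 and a fixed frame rate; both are illustrative assumptions, not details from the paper:

```python
import math

# Hypothetical sketch: deriving a hand-movement signal from per-frame
# 2D hand poses. The keypoint layout (wrist at index 0) and the 30 fps
# frame rate are assumptions for illustration.

WRIST = 0  # assumed index of the wrist keypoint

def wrist_speed(poses, fps=30.0):
    """Frame-to-frame wrist speed (pixels/second) from a 2D pose sequence."""
    speeds = []
    for prev, cur in zip(poses, poses[1:]):
        dx = cur[WRIST][0] - prev[WRIST][0]
        dy = cur[WRIST][1] - prev[WRIST][1]
        speeds.append(math.hypot(dx, dy) * fps)
    return speeds

# Two frames with the wrist moving 3 px right and 4 px down: 5 px/frame.
poses = [[(100.0, 200.0)], [(103.0, 204.0)]]
print(wrist_speed(poses))  # [150.0] at 30 fps
```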
Int J Comput Assist Radiol Surg
August 2022
Purpose: The goal of this work is to use multi-camera video to classify open surgery tools as well as identify which tool is held in each hand. Multi-camera systems help prevent occlusions in open surgery video data. Furthermore, combining multiple views such as a top-view camera covering the full operative field and a close-up camera focusing on hand motion and anatomy may provide a more comprehensive view of the surgical workflow.
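One common way to combine a top-view and a close-up camera for per-frame tool classification is late fusion of each view's class scores. The abstract does not state which fusion strategy the authors use, so the score-averaging sketch below is only one plausible option:

```python
# Hypothetical sketch of late fusion across camera views: average each
# view's per-class tool scores, then take the argmax. The paper's
# actual fusion strategy is not specified in the abstract.

def fuse(view_scores):
    """Average class-score lists from several views into one prediction."""
    n_views = len(view_scores)
    n_classes = len(view_scores[0])
    fused = [sum(v[c] for v in view_scores) / n_views
             for c in range(n_classes)]
    return max(range(n_classes), key=fused.__getitem__)

# The top view is uncertain; the close-up view is confident in class 2,
# e.g. because the tool is occluded from above but visible up close.
top = [0.40, 0.35, 0.25]
close = [0.10, 0.20, 0.70]
print(fuse([top, close]))  # 2
```

Averaging lets a confident, unoccluded view override an occluded one, which is exactly the benefit the multi-camera setup is meant to provide.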
Int J Comput Assist Radiol Surg
June 2022
Purpose: The use of motion sensors is emerging as a means for measuring surgical performance. Motion sensors are typically used for calculating performance metrics and assessing skill. The aim of this study was to identify surgical gestures and tools used during an open surgery suturing simulation based on motion sensor data.
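A typical pipeline for recognizing gestures from motion-sensor streams slices the signal into fixed-length windows and extracts per-window features for a classifier. The window size, step, and feature choices below are illustrative assumptions, not the study's actual design:

```python
# Hypothetical sketch of windowing + feature extraction for gesture
# recognition from a motion-sensor stream. Window size, step, and the
# feature set are assumptions for illustration only.

def windows(signal, size, step):
    """Yield overlapping fixed-length windows over a 1D sensor stream."""
    for start in range(0, len(signal) - size + 1, step):
        yield signal[start:start + size]

def features(window):
    """Mean and range: a minimal per-window feature vector."""
    return (sum(window) / len(window), max(window) - min(window))

# An 8-sample stream split into three half-overlapping windows of 4.
stream = [0, 1, 2, 3, 4, 5, 6, 7]
feats = [features(w) for w in windows(stream, size=4, step=2)]
```

In practice each feature vector would feed a gesture/tool classifier; the windowing step is what turns a continuous sensor stream into classifiable examples.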
Int J Comput Assist Radiol Surg
March 2022
Purpose: The goal of this study was to develop a new reliable open surgery suturing simulation system for training medical students in situations where resources are limited or in the domestic setup. Specifically, we developed an algorithm that localizes tools and hands and identifies the interactions between them from simple webcam video data, computing motion metrics for the assessment of surgical skill.
Methods: Twenty-five participants performed multiple suturing tasks using our simulator.
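The abstract mentions computing motion metrics from tracked hands and tools but does not list them; total path length is one metric commonly used for skill assessment (experts tend to produce shorter, more economical trajectories). The trajectory format below is an assumption:

```python
import math

# Hypothetical sketch of one motion metric often used for surgical
# skill assessment: total 2D path length of a tracked hand or tool
# trajectory. The (x, y) pixel-coordinate format is an assumption.

def path_length(points):
    """Sum of Euclidean distances between consecutive 2D positions."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        total += math.hypot(x1 - x0, y1 - y0)
    return total

# An L-shaped movement: 3 px right then 4 px up gives path length 7.
traj = [(0.0, 0.0), (3.0, 0.0), (3.0, 4.0)]
print(path_length(traj))  # 7.0
```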