A comparative analysis of video-based surgical assessment: is evaluation of the entire critical portion of the operation necessary?

Background: Previous studies of video-based operative assessment using crowdsourcing have established the efficacy of non-expert evaluations. Our group sought to establish whether abbreviated video content yields equivalent operative assessments.

Methods: A single-institution video repository of six core general surgery operations was submitted for evaluation. Each core operation included three unique surgical performances, totaling 18 unique operative videos. Each video was edited using four different protocols based on the critical portion of the operation: (1) custom-edited critical portion, (2) condensed critical portion, (3) first 20 s of every minute of the critical portion, and (4) first 10 s of every minute of the critical portion. In total, 72 individually edited operative videos were submitted to the C-SATS (Crowd-Sourced Assessment of Technical Skills) platform for evaluation. Aggregate scores for each study protocol were compared using the Kruskal-Wallis test. A multivariable, multilevel mixed-effects model was constructed to predict total skill assessment scores.
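The per-minute sampling rules in protocols (3) and (4) amount to a simple interval-selection step. The sketch below is a minimal Python illustration of how such clip boundaries could be computed from the start and end of the critical portion; the function name and the use of second offsets are assumptions for illustration only, since the authors do not describe their editing tooling.

```python
def per_minute_clips(critical_start, critical_end, keep_seconds):
    """Return (start, end) offsets in seconds, keeping the first
    `keep_seconds` of every minute of the critical portion.

    Hypothetical helper -- the study's actual editing pipeline is not described.
    """
    clips = []
    t = critical_start
    while t < critical_end:
        clips.append((t, min(t + keep_seconds, critical_end)))
        t += 60  # advance to the start of the next minute
    return clips

# Example: a critical portion running from 5:00 to 12:30 of the recording,
# sampled with the "first 20 s of every minute" protocol.
print(per_minute_clips(300, 750, 20))
# [(300, 320), (360, 380), (420, 440), (480, 500),
#  (540, 560), (600, 620), (660, 680), (720, 740)]
```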

Results: Median video lengths for each protocol were: custom, 6:20 (IQR 5:27-7:28); condensed, 10:35 (8:50-12:06); 10 s, 4:35 (2:11-6:09); and 20 s, 9:09 (4:20-12:14). There was no difference in aggregate median score among the four study protocols: custom, 15.7 (14.4-16.2); condensed, 15.8 (15.2-16.4); 10 s, 15.8 (15.3-16.1); 20 s, 16.0 (15.1-16.3); χ² = 1.661, p = 0.65. Regression modeling demonstrated a significant but minimal effect of the 10 s and 20 s editing protocols compared to the custom method on individual video score: condensed, +0.33 (−0.05 to 0.70), p = 0.09; 10 s, +0.29 (0.04 to 0.55), p = 0.03; 20 s, +0.40 (0.15 to 0.66), p = 0.002.
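For readers who want to reproduce this style of analysis, the sketch below shows how a Kruskal-Wallis comparison across the four protocols and a simple multilevel mixed-effects model could be run in Python with SciPy and statsmodels. The scores here are simulated placeholders and the model specification is a simplified stand-in; the study's actual data and covariates are not reproduced.

```python
import numpy as np
import pandas as pd
from scipy.stats import kruskal
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
protocols = ["custom", "condensed", "10s", "20s"]

# Simulated placeholder scores -- NOT the study's data.
df = pd.DataFrame({
    "protocol": np.repeat(protocols, 18),    # 18 videos per editing protocol
    "video_id": np.tile(np.arange(18), 4),   # same source video appears in each protocol
    "score": rng.normal(15.8, 0.6, size=72),
})

# Kruskal-Wallis test of aggregate scores across the four editing protocols.
groups = [df.loc[df.protocol == p, "score"] for p in protocols]
stat, p_value = kruskal(*groups)
print(f"H = {stat:.3f}, p = {p_value:.3f}")

# Mixed-effects model: editing protocol as a fixed effect (custom as reference),
# source video as a random intercept -- a simplified analogue of the paper's
# multivariable, multilevel model.
model = smf.mixedlm("score ~ C(protocol, Treatment(reference='custom'))",
                    df, groups=df["video_id"])
print(model.fit().summary())
```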

Conclusion: A standardized protocol for editing surgical performance videos into abbreviated form yields reproducible assessments of surgical aptitude when evaluated by non-experts.

Source
http://dx.doi.org/10.1007/s00464-021-08945-6

Publication Analysis

Top Keywords: critical portion (24), portion operation (8), surgical performances (8), operative videos (8), minute critical (8), score study (8), critical (6), portion (6), video (6), 20 s (5)
