Publications by authors named "Z KOSTIC"

Interacting with data visualizations without an instrument or touch surface typically relies on mid-air hand gestures. While mid-air gestures can be quite intuitive for interacting with digital content at a distance, they often lack precision and require a different way of expressing users' data-related intentions. In this work, we aim to identify new designs for mid-air hand gesture manipulations that can facilitate instrument-free, touch-free, and embedded interactions with visualizations, while utilizing the three-dimensional (3D) interaction space that mid-air gestures afford.

Background: In the United States, over 12,000 home healthcare agencies annually serve more than 6 million patients, most of them aged 65+ years with chronic conditions. One in three of these patients ends up visiting the emergency department (ED) or being hospitalized. Existing risk identification models based on electronic health record (EHR) data have suboptimal performance in detecting these high-risk patients.

Background: Automation of surgical phase recognition is a key effort toward the development of Computer Vision (CV) algorithms for workflow optimization and video-based assessment. CV is a form of Artificial Intelligence (AI) that allows interpretation of images through deep learning (DL)-based algorithms. Improvements in Graphics Processing Unit (GPU) computing devices allow researchers to apply these algorithms to recognize content in videos in real time.
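For illustration, the snippet below is a minimal sketch of the frame-level idea described above: a pretrained convolutional network classifies individual video frames into surgical phases. The backbone choice (ResNet-18 from torchvision), the phase labels, and the frame-loading step are assumptions for illustration, not the study's actual pipeline.

```python
# Minimal sketch: frame-level surgical phase classification with a pretrained CNN.
# The phase list and backbone are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms

PHASES = ["preparation", "dissection", "mesh placement", "closure"]  # hypothetical labels

# Reuse an ImageNet-pretrained backbone and replace its classifier head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(PHASES))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def classify_frame(frame_pil):
    """Return the predicted phase label for a single video frame (PIL image)."""
    with torch.no_grad():
        logits = model(preprocess(frame_pil).unsqueeze(0))
    return PHASES[int(logits.argmax(dim=1))]
```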

Objectives: Patient-clinician communication provides valuable explicit and implicit information that may indicate adverse medical conditions and outcomes. However, practical and analytical approaches for audio-recording and analyzing this data stream remain underexplored. This study aimed to 1) analyze patients' and nurses' speech in audio-recorded verbal communication, and 2) develop machine learning (ML) classifiers to effectively differentiate between patient and nurse language.
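As a rough illustration of aim 2), the sketch below trains a simple text classifier to separate patient utterances from nurse utterances in transcribed speech. The toy transcripts and the TF-IDF plus logistic-regression pipeline are assumptions made for illustration, not the classifiers evaluated in the study.

```python
# Minimal sketch: distinguishing patient vs. nurse utterances from transcribed speech.
# The example utterances and the chosen pipeline are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = [
    "My back has been hurting more since last week.",    # patient
    "I'm going to check your blood pressure now.",       # nurse
    "I ran out of my medication two days ago.",          # patient
    "Let's review how you've been taking your insulin.", # nurse
]
labels = ["patient", "nurse", "patient", "nurse"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(utterances, labels)

print(clf.predict(["The pain gets worse when I stand up."]))  # expected: ['patient']
```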

Video-recorded robotic-assisted surgeries allow automated computer vision and artificial intelligence/deep learning methods to be applied to quality assessment and workflow analysis through surgical phase recognition. We considered a dataset of 209 videos of robotic-assisted laparoscopic inguinal hernia repair (RALIHR) collected from 8 surgeons, defined rigorous ground-truth annotation rules, and then pre-processed and annotated the videos. We deployed seven deep learning models to establish baseline accuracy for surgical phase recognition and explored four advanced architectures.
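The abstract does not name the seven baseline models, so the sketch below shows one common design for this kind of task under stated assumptions: per-frame CNN features are fed to a recurrent layer that adds temporal context before predicting a phase for each frame. The phase count and hidden size are hypothetical.

```python
# Minimal sketch: CNN features + GRU for per-frame surgical phase recognition.
# The number of phases and the architecture are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_PHASES = 14  # hypothetical number of annotated RALIHR phases

class CnnGruPhaseModel(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        backbone.fc = nn.Identity()               # keep 512-d frame features
        self.backbone = backbone
        self.gru = nn.GRU(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, NUM_PHASES)

    def forward(self, frames):                    # frames: (batch, time, 3, 224, 224)
        b, t = frames.shape[:2]
        feats = self.backbone(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.gru(feats)                  # temporal context across the clip
        return self.head(out)                     # per-frame phase logits

model = CnnGruPhaseModel()
logits = model(torch.randn(1, 8, 3, 224, 224))    # one clip of 8 frames
print(logits.shape)                               # torch.Size([1, 8, 14])
```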