Publications by authors named "Drukker L"

Objective: Portosystemic shunts in growth-restricted fetuses are more common than previously thought. We aimed to describe fetuses with growth restriction and transient oligohydramnios in which a congenital intrahepatic portosystemic shunt (CIPSS) was noted during follow-up.

Methods: This was a retrospective study of all fetuses diagnosed with growth restriction and transient oligohydramnios during a 5-year period in a large tertiary referral center.

View Article and Find Full Text PDF

During a fetal ultrasound scan, a sonographer will zoom in and out as they attempt to get clearer images of the anatomical structures of interest. This paper explores how to use this zoom information, an under-utilised signal that is extractable from fetal ultrasound images. We explore associating zooming patterns with specific structures.
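
A minimal sketch of the kind of association the paper explores, assuming per-frame zoom levels and structure labels have already been extracted from the scan (both inputs, and the function itself, are hypothetical):

```python
# Minimal sketch: associating zoom-in events with the anatomical structure
# visible at the time. Assumes per-frame zoom levels and structure labels
# have already been extracted (both inputs are hypothetical here).
from collections import Counter

def zoom_in_events_per_structure(zoom_levels, structure_labels):
    """Count zoom-in events (frame-to-frame increases in zoom) per structure.

    zoom_levels:      list of floats, one per frame.
    structure_labels: list of strings, one per frame (e.g. "heart", "head").
    """
    counts = Counter()
    for prev, curr, label in zip(zoom_levels, zoom_levels[1:], structure_labels[1:]):
        if curr > prev:  # sonographer zoomed in on this frame
            counts[label] += 1
    return counts

# Example: zooming in once while the head is on screen, twice on the heart.
print(zoom_in_events_per_structure(
    [1.0, 1.0, 1.2, 1.5, 1.5, 1.8],
    ["head", "head", "head", "heart", "heart", "heart"]))
```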

View Article and Find Full Text PDF

Research Question: What is the relationship between sonographic diagnosis of isolated adenomyosis and placenta-associated obstetric outcomes?

Design: In this 12-year retrospective cohort study (2010-2022), patients presenting with adenomyosis-related symptoms were assessed via ultrasound. The study included 59 women diagnosed with adenomyosis and 62 controls, leading to 203 births (90 in the adenomyosis group and 113 in the control group). Patients with endometriosis, uterine fibroids and anomalies, and those using assisted reproductive technology were excluded.

View Article and Find Full Text PDF

Objective: To assess the capacity of fetal pancreatic size, before standard blood glucose testing for screening and diagnosis, to predict maternal gestational diabetes mellitus (GDM).

Methods: This was a retrospective cohort study of low-risk pregnant women recruited during routine second-trimester fetal anatomical screening at 20-25 weeks' gestation at two ultrasound units in Israel between 2017 and 2020. The predictive performance of fetal pancreatic circumference ≥ 80th and ≥ 90th centiles and of the glucose challenge test (GCT) was examined for the outcome of GDM.
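
A plain-Python sketch of the screening-performance evaluation described, treating a centile threshold as a positive screen and GDM as the outcome; the function and example values are illustrative, not the study's data:

```python
# Sketch of the screening-performance computation: fetal pancreatic
# circumference >= a given centile is a positive screen, evaluated against
# the eventual GDM diagnosis. All values below are hypothetical.
def screening_performance(centiles, gdm, threshold=80):
    """centiles: per-pregnancy pancreatic-circumference centile (0-100).
    gdm: per-pregnancy GDM outcome (True/False)."""
    tp = sum(c >= threshold and g for c, g in zip(centiles, gdm))
    fp = sum(c >= threshold and not g for c, g in zip(centiles, gdm))
    fn = sum(c < threshold and g for c, g in zip(centiles, gdm))
    tn = sum(c < threshold and not g for c, g in zip(centiles, gdm))
    return {
        "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
        "specificity": tn / (tn + fp) if tn + fp else float("nan"),
        "ppv": tp / (tp + fp) if tp + fp else float("nan"),
    }

print(screening_performance([95, 60, 85, 40], [True, False, True, False],
                            threshold=90))
```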

View Article and Find Full Text PDF

Auditory and visual signals are two primary perception modalities that are usually present together and correlate with each other, not only in natural environments but also in clinical settings. However, audio-visual modelling in the latter case can be more challenging, owing to the different sources of audio/video signals and the noise (both signal-level and semantic-level) in the auditory signals, which are usually speech audio. In this study, we consider audio-visual modelling in a clinical setting, providing a solution to learn medical representations that benefit various clinical tasks, without relying on dense supervisory annotations from human experts for model training.
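
The exact model is not specified here; as a sketch, the approach resembles a generic cross-modal contrastive (InfoNCE) objective between paired audio and video clip embeddings, assuming two encoders have already produced those embeddings:

```python
# Generic cross-modal contrastive (InfoNCE) loss of the kind used for
# audio-visual representation learning; a sketch, not the paper's exact
# objective. Assumes paired audio/video clip embeddings from two encoders.
import torch
import torch.nn.functional as F

def audio_visual_infonce(audio_emb, video_emb, temperature=0.07):
    """audio_emb, video_emb: (batch, dim) embeddings of paired clips."""
    a = F.normalize(audio_emb, dim=1)
    v = F.normalize(video_emb, dim=1)
    logits = a @ v.t() / temperature      # (batch, batch) similarity matrix
    targets = torch.arange(a.size(0))     # positives lie on the diagonal
    # Symmetric loss: match audio->video and video->audio.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

loss = audio_visual_infonce(torch.randn(8, 128), torch.randn(8, 128))
```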

View Article and Find Full Text PDF

Background: Deep learning (DL) is a new technology that can assist prenatal ultrasound (US) in the detection of congenital heart disease (CHD) at the prenatal stage. Hence, an economic-epidemiologic evaluation (i.e., a cost-utility analysis) is required to assist policymakers in deciding whether to adopt the new technology.

Methods: The incremental cost-utility ratio (CUR) of adding DL-assisted ultrasound (DL-US) to the current provision of US plus pulse oximetry (POX) was calculated by building a spreadsheet model that integrated demographic, economic, epidemiological, health service utilization, screening performance, survival and lifetime quality-of-life data, based on the standard formula CUR = ΔCosts / ΔQALYs, i.e. the difference in lifetime costs between the two strategies divided by the difference in quality-adjusted life years (QALYs) gained. US screening data were based on real-world operational routine reports (as opposed to research studies).
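
A toy computation of that formula (all figures are placeholders, not values from the study):

```python
# Toy computation of the standard incremental cost-utility ratio above:
# extra cost of the new strategy divided by the extra QALYs it yields.
# All numbers are placeholders, not figures from the study.
def incremental_cur(cost_new, cost_old, qaly_new, qaly_old):
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# e.g. DL-assisted US vs. US + POX for a screened cohort (hypothetical values)
print(incremental_cur(cost_new=1_200_000, cost_old=1_000_000,
                      qaly_new=5_040.0, qaly_old=5_000.0))  # -> 5000.0 per QALY gained
```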

View Article and Find Full Text PDF

Objective: Automated medical image analysis solutions should closely mimic complete human actions to be useful in clinical practice. However, an automated image analysis solution more often represents only part of a human task, which restricts its practical utility. In the case of ultrasound-based fetal biometry, an automated solution should ideally recognize key fetal structures in freehand video guidance, select a standard plane from a video stream and perform biometry.

View Article and Find Full Text PDF

In this work, we exploit multi-task learning to jointly predict the two decision-making processes of gaze movement and probe manipulation that an experienced sonographer would perform in routine obstetric scanning. A multimodal guidance framework, Multimodal-GuideNet, is proposed to detect the causal relationship between a real-world ultrasound video signal, synchronized gaze, and probe motion. The association between the multi-modality inputs is learned and shared through a modality-aware spatial graph that leverages useful cross-modal dependencies.
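
A minimal multi-task sketch in the spirit of the framework: a shared video encoder with one head predicting the next gaze point and another predicting probe motion. The architecture and losses below are illustrative assumptions, not the published Multimodal-GuideNet:

```python
# Minimal multi-task sketch: shared encoder, one head per decision process
# (gaze movement and probe manipulation). Illustrative, not the paper's model.
import torch
import torch.nn as nn

class GazeProbeMultiTask(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(            # stand-in for a video backbone
            nn.Conv2d(3, 32, 7, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU())
        self.gaze_head = nn.Linear(feat_dim, 2)   # (x, y) gaze point
        self.probe_head = nn.Linear(feat_dim, 6)  # translation + rotation

    def forward(self, frames):
        feats = self.encoder(frames)
        return self.gaze_head(feats), self.probe_head(feats)

model = GazeProbeMultiTask()
gaze_pred, probe_pred = model(torch.randn(4, 3, 224, 224))
gaze_gt, probe_gt = torch.rand(4, 2), torch.randn(4, 6)
# Joint objective: sum of the two per-task regression losses.
loss = nn.functional.mse_loss(gaze_pred, gaze_gt) + \
       nn.functional.mse_loss(probe_pred, probe_gt)
```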

View Article and Find Full Text PDF

In obstetric sonography, the quality of acquisition of ultrasound scan video is crucial for accurate (manual or automated) biometric measurement and fetal health assessment. However, fetal ultrasound involves free-hand probe manipulation, which can make it challenging to capture high-quality videos for fetal biometry, especially for less-experienced sonographers. Manually checking the quality of acquired videos is time-consuming and subjective, and requires a comprehensive understanding of fetal anatomy.

View Article and Find Full Text PDF

We present a method for classifying human skill at fetal ultrasound scanning from eye-tracking and pupillary data of sonographers. Human skill characterization for this clinical task typically creates groupings of clinician skills, such as expert and beginner, based on the number of years of professional experience; experts typically have more than 10 years and beginners between 0 and 5 years. In some cases, the groupings also include trainees who are not yet fully qualified professionals.
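
A sketch of this kind of skill classification from summary gaze/pupil features, using the expert vs. beginner grouping above; the feature set, classifier choice and data are illustrative assumptions:

```python
# Sketch: classify expert vs. beginner from per-scan gaze/pupil summary
# features. Features, classifier and data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Per-scan features: [mean fixation duration, saccade amplitude, pupil diameter]
X = rng.normal(size=(60, 3))
y = rng.integers(0, 2, size=60)   # 0 = beginner (0-5 yrs), 1 = expert (>10 yrs)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # chance-level on random data
```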

View Article and Find Full Text PDF

Self-supervised contrastive representation learning offers the advantage of learning meaningful visual representations from unlabeled medical datasets for transfer learning. However, applying current contrastive learning approaches to medical data without considering its domain-specific anatomical characteristics may lead to visual representations that are inconsistent in appearance and semantics. In this paper, we propose to improve visual representations of medical images via anatomy-aware contrastive learning (AWCL), which incorporates anatomy information to augment the positive/negative pair sampling in a contrastive learning manner.
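
A simplified stand-in for the anatomy-aware sampling idea: images sharing an anatomy label are treated as positives, in the style of a supervised contrastive loss (this is a sketch, not the published AWCL objective):

```python
# Sketch of anatomy-aware positive/negative sampling for contrastive
# learning: images with the same anatomy label form positive pairs.
# A simplified stand-in for AWCL, not the published objective.
import torch
import torch.nn.functional as F

def anatomy_aware_contrastive(embeddings, anatomy_labels, temperature=0.1):
    """embeddings: (N, dim); anatomy_labels: (N,) integer anatomy classes."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature                     # (N, N) similarities
    self_mask = torch.eye(len(z), dtype=torch.bool)
    sim = sim.masked_fill(self_mask, float("-inf"))   # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (anatomy_labels[:, None] == anatomy_labels[None, :]) & ~self_mask
    # Average log-probability over each anchor's positives (same anatomy).
    pos_log_prob = torch.where(pos_mask, log_prob, torch.zeros_like(log_prob))
    per_anchor = pos_log_prob.sum(1) / pos_mask.sum(1).clamp(min=1)
    return -per_anchor[pos_mask.any(1)].mean()        # skip anchors without positives

loss = anatomy_aware_contrastive(torch.randn(16, 64),
                                 torch.randint(0, 4, (16,)))
```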

View Article and Find Full Text PDF

Ultrasound (US) probe motion estimation is a fundamental problem in automated standard-plane localization during obstetric US diagnosis. Most existing recent works employ a deep neural network (DNN) to regress the probe motion. However, these deep regression-based methods tend to overfit to their specific training data, and so naturally lack generalization ability for clinical application.

View Article and Find Full Text PDF

Manual annotation of medical images is time-consuming for clinical experts; therefore, reliable automatic segmentation would be the ideal way to handle large medical datasets. In this paper, we are interested in the detection and segmentation of two fundamental measurements in the first-trimester ultrasound (US) scan: nuchal translucency (NT) and crown-rump length (CRL). There can be significant variation in the shape, location or size of the anatomical structures in fetal US scans.

View Article and Find Full Text PDF

Fetal standard plane (SP) acquisition is a key step in ultrasound-based assessment of fetal health. The task is to detect an ultrasound (US) image containing predefined anatomy. However, acquiring a good SP requires skill in practice, and trainees and occasional users of ultrasound devices can find this challenging.

View Article and Find Full Text PDF

Medical image captioning models generate text to describe the semantic contents of an image, aiding non-experts in understanding and interpretation. We propose a weakly-supervised approach to improve the performance of image captioning models on small image-text datasets by leveraging a large anatomically-labelled image classification dataset. Our method generates pseudo-captions (weak labels) for caption-less but anatomically-labelled (class-labelled) images using an encoder-decoder sequence-to-sequence model.
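
To make the weak-labelling idea concrete: the paper learns a seq2seq model to produce pseudo-captions, but a simple label-to-template mapping, used here as a stand-in, illustrates the pairing of class-labelled images with weak captions:

```python
# Sketch of the weak-labelling idea: turn anatomy class labels into
# pseudo-captions for caption-less images. The paper uses a learned
# seq2seq model; a simple template mapping is used here as a stand-in.
TEMPLATES = {
    "head":  "standard transverse section of the fetal head",
    "heart": "four-chamber view of the fetal heart",
    "femur": "fetal femur measured along its long axis",
}

def pseudo_caption(anatomy_label):
    return TEMPLATES.get(anatomy_label,
                         f"ultrasound image of the fetal {anatomy_label}")

# Pair class-labelled images with weak captions for captioning pre-training.
weak_dataset = [(img_id, pseudo_caption(label))
                for img_id, label in [("img_001", "heart"), ("img_002", "spine")]]
print(weak_dataset)
```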

View Article and Find Full Text PDF

We present a method for skill characterisation of sonographer gaze patterns while performing routine second trimester fetal anatomy ultrasound scans. The position and scale of fetal anatomical planes during each scan differ because of fetal position, movements and sonographer skill. A standardised reference is required to compare recorded eye-tracking data for skill characterisation.

View Article and Find Full Text PDF

Video quality assurance is an important topic in obstetric ultrasound imaging, to ensure that captured videos are suitable for biometry and fetal health assessment. Previously, one successful objective approach to automated ultrasound image quality assurance treated it as a supervised learning task of detecting anatomical structures defined by a clinical protocol. In this paper, we propose an alternative, purely data-driven approach that makes effective use of both spatial and temporal information; the model learns from high-quality videos without any anatomy-specific annotations.
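
One common data-driven formulation of this idea, shown as a sketch (not necessarily the paper's architecture): train an autoencoder only on high-quality clips and score new clips by reconstruction error:

```python
# Data-driven quality sketch: an autoencoder trained only on high-quality
# clips; clips that reconstruct poorly look unlike the training set.
# A generic formulation, not necessarily the paper's model.
import torch
import torch.nn as nn

class ClipAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose3d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(8, 1, 4, stride=2, padding=1))

    def forward(self, clip):                  # clip: (B, 1, T, H, W)
        return self.dec(self.enc(clip))

def quality_score(model, clip):
    """Higher reconstruction error => less like the high-quality training set."""
    with torch.no_grad():
        return nn.functional.mse_loss(model(clip), clip).item()

print(quality_score(ClipAutoencoder(), torch.randn(1, 1, 16, 64, 64)))
```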

View Article and Find Full Text PDF

Eye trackers can provide visual guidance to sonographers during ultrasound (US) scanning. Such guidance is potentially valuable for less-experienced operators, helping them improve their scanning skills by showing how to manipulate the probe to achieve the desired plane. In this paper, a multimodal guidance approach (Multimodal-GuideNet) is proposed to capture the stepwise dependency between a real-world US video signal, synchronized gaze, and probe motion within a unified framework.

View Article and Find Full Text PDF

Visualising patterns in clinicians' eye movements while interpreting fetal ultrasound imaging videos is challenging. Across and within videos, there are differences in the size and position of Areas-of-Interest (AOIs) due to fetal position, movement and sonographer skill. Currently, AOIs are manually labelled or identified using eye-tracker manufacturer specifications, which are not study-specific.
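
A data-driven alternative to manual AOI labelling, as motivated above: cluster raw gaze points so that each dense cluster becomes a candidate AOI. The gaze data and DBSCAN parameters here are illustrative assumptions:

```python
# Sketch: discover candidate AOIs by clustering normalized gaze points.
# The synthetic gaze data and DBSCAN parameters are illustrative assumptions.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
gaze_xy = np.vstack([
    rng.normal([0.3, 0.4], 0.02, (200, 2)),   # fixations on one structure
    rng.normal([0.7, 0.6], 0.02, (200, 2)),   # fixations on another
    rng.uniform(0, 1, (40, 2)),               # scattered transition samples
])

labels = DBSCAN(eps=0.05, min_samples=20).fit_predict(gaze_xy)
for aoi in sorted(set(labels) - {-1}):        # label -1 marks noise points
    centre = gaze_xy[labels == aoi].mean(axis=0)
    print(f"AOI {aoi}: centre {centre.round(2)}, {np.sum(labels == aoi)} samples")
```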

View Article and Find Full Text PDF

This study presents a novel approach to automatic detection and segmentation of the Crown Rump Length (CRL) and Nuchal Translucency (NT), two essential measurements in the first trimester US scan. The proposed method automatically localises a standard plane within a video clip as defined by the UK Fetal Abnormality Screening Programme. A Nested Hourglass (NHG) based network performs semantic pixel-wise segmentation to extract NT and CRL structures.

View Article and Find Full Text PDF

In this paper, we develop a multi-modal video analysis algorithm to predict where a sonographer should look next. Our approach uses video and expert knowledge, defined by gaze-tracking data, which is acquired during routine first-trimester fetal ultrasound scanning. Specifically, we propose a spatio-temporal convolutional LSTM U-Net neural network (cLSTMU-Net) for video saliency prediction with stochastic augmentation.
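
A minimal ConvLSTM cell of the kind such a convolutional LSTM U-Net builds on; this sketches only the recurrent building block, with illustrative hyperparameters:

```python
# Minimal ConvLSTM cell, the recurrent building block that a convolutional
# LSTM U-Net combines with an encoder-decoder; hyperparameters are
# illustrative, not the published cLSTMU-Net configuration.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch, kernel=3):
        super().__init__()
        # One convolution produces all four gate pre-activations at once.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel,
                               padding=kernel // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)          # update cell state
        h = o * torch.tanh(c)                  # new hidden state (feature map)
        return h, c

cell = ConvLSTMCell(in_ch=3, hid_ch=16)
h = c = torch.zeros(1, 16, 64, 64)
for frame in torch.randn(8, 1, 3, 64, 64):     # iterate over a video clip
    h, c = cell(frame, (h, c))
```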

View Article and Find Full Text PDF

Obstetric ultrasound assessment of fetal anatomy in the first trimester of pregnancy is one of the less explored fields in obstetric sonography, because of the paucity of guidelines on anatomical screening and the limited availability of data. This paper, for the first time, examines imaging proficiency and practices of first-trimester ultrasound scanning through analysis of full-length ultrasound video scans. Findings from this study provide insights to inform the development of more effective user-machine interfaces and targeted assistive technologies, as well as improvements in workflow protocols for first-trimester scanning.

View Article and Find Full Text PDF

In this work, we present a novel gaze-assisted natural language processing (NLP)-based video captioning model to describe routine second-trimester fetal ultrasound scan videos in a vocabulary of spoken sonography. The primary novelty of our multi-modal approach is that the learned video captioning model is built using a combination of ultrasound video, tracked gaze and textual transcriptions from speech recordings. The textual captions that describe the spatio-temporal scan video content are learnt from sonographer speech recordings.

View Article and Find Full Text PDF

Objective: Despite decades of obstetric scanning, the field of sonographer workflow remains largely unexplored. In the second trimester, sonographers use scan guidelines to guide their acquisition of standard planes and structures; however, the scan-acquisition order is not prescribed. Using deep-learning-based video analysis, the aim of this study was to develop a deeper understanding of the clinical workflow undertaken by sonographers during second-trimester anomaly scans.

View Article and Find Full Text PDF
