Gaze-probe joint guidance with multi-task learning in obstetric ultrasound scanning.

Med Image Anal

Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, OX3 7DQ, United Kingdom.

Published: December 2023

In this work, we exploit multi-task learning to jointly predict the two decision-making processes of gaze movement and probe manipulation that an experienced sonographer performs in routine obstetric scanning. A multimodal guidance framework, Multimodal-GuideNet, is proposed to capture the causal relationship between a real-world ultrasound video signal, synchronized gaze, and probe motion. The association between the multi-modality inputs is learned and shared through a modality-aware spatial graph that leverages useful cross-modal dependencies. By estimating the probability distribution of probe and gaze movements in real scans, the predicted guidance signals also accommodate inter- and intra-sonographer variation and avoid prescribing a fixed scanning path. We validate the new multi-modality approach on three types of obstetric scanning examination, and it consistently outperforms single-task learning under various guidance policies. To simulate a sonographer's attention on multi-structure images, we also explore multi-step estimation in gaze guidance; its visual results show that the prediction allows multiple gaze centers that are substantially aligned with underlying anatomical structures.
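The abstract's core idea, a shared representation feeding separate probabilistic heads that estimate distributions (rather than single fixed values) for gaze and probe motion, can be loosely illustrated as follows. This is not the paper's Multimodal-GuideNet: the modality-aware spatial graph is replaced by a plain shared linear encoder, and all dimensions, the Gaussian output form, and the feature inputs are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 64-dim frame feature, a 2-D gaze shift,
# and a 4-D probe-motion parameterization (all assumptions).
feat_dim, gaze_dim, probe_dim = 64, 2, 4

# Shared encoder plus one linear head per task; each head outputs a
# mean and a log-variance, so the model predicts a distribution over
# movements instead of a single fixed scanning path.
W_shared = rng.normal(scale=0.1, size=(feat_dim, 32))
W_gaze = rng.normal(scale=0.1, size=(32, 2 * gaze_dim))
W_probe = rng.normal(scale=0.1, size=(32, 2 * probe_dim))

def predict(frame_feat):
    """Return Gaussian parameters (mean, variance) for both tasks."""
    h = np.tanh(frame_feat @ W_shared)  # representation shared by both tasks
    g = h @ W_gaze
    p = h @ W_probe
    gaze_mu, gaze_var = g[:gaze_dim], np.exp(g[gaze_dim:])
    probe_mu, probe_var = p[:probe_dim], np.exp(p[probe_dim:])
    return (gaze_mu, gaze_var), (probe_mu, probe_var)

def gaussian_nll(x, mu, var):
    """Negative log-likelihood; the learned variance can absorb
    inter- and intra-sonographer variation in the recorded motions."""
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

# One training step's loss on a synthetic frame: the two task losses
# are simply summed, the basic multi-task objective.
frame = rng.normal(size=feat_dim)
(gaze_mu, gaze_var), (probe_mu, probe_var) = predict(frame)
true_gaze = rng.normal(size=gaze_dim)
true_probe = rng.normal(size=probe_dim)
loss = gaussian_nll(true_gaze, gaze_mu, gaze_var) + \
       gaussian_nll(true_probe, probe_mu, probe_var)
```

Because both heads read the same shared representation, gradients from the gaze and probe losses jointly shape it, which is the mechanism by which multi-task learning can outperform training each guidance signal alone.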


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7615231
DOI: http://dx.doi.org/10.1016/j.media.2023.102981

