Publications by authors named "Mattias P Heinrich"

Purpose: This study aims to address the challenging estimation of trajectories from freehand ultrasound examinations by means of registration of automatically generated surface points. Current approaches to inter-sweep point cloud registration can be improved by incorporating heatmap predictions, but practical challenges such as label sparsity or only partially overlapping coverage of target structures arise under realistic examination conditions.

Methods: We propose a pipeline comprising three stages: (1) Utilizing a Free Point Transformer for coarse pre-registration, (2) Introducing HeatReg for further refinement using support point clouds, and (3) Employing instance optimization to enhance predicted displacements.
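The coarse pre-registration in stage (1) estimates a global alignment between two point clouds. As a minimal numerical sketch of what that stage computes (substituting a classical Kabsch least-squares solution for the learned Free Point Transformer, on synthetic corresponding points; all names and values are illustrative, not the paper's):

```python
import numpy as np

def kabsch(source, target):
    """Least-squares rigid alignment (rotation + translation) between
    two corresponding point sets -- a classical stand-in for a learned
    coarse pre-registration stage."""
    mu_s, mu_t = source.mean(0), target.mean(0)
    H = (source - mu_s).T @ (target - mu_t)        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_t - R @ mu_s
    return R, t

# synthetic sweep: rotate a random point cloud by 30 degrees and translate it
rng = np.random.default_rng(0)
src = rng.normal(size=(200, 3))
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
tgt = src @ R_true.T + np.array([5.0, -2.0, 1.0])

R, t = kabsch(src, tgt)
err = np.abs(src @ R.T + t - tgt).max()             # residual after alignment
```

In a realistic setting the correspondences are unknown and the clouds overlap only partially, which is exactly why the learned stages (2) and (3) are needed on top of such a closed-form baseline.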

Purpose: Lung fissure segmentation on CT images often relies on 3D convolutional neural networks (CNNs). However, 3D-CNNs are inefficient for detecting thin structures like the fissures, which make up a tiny fraction of the entire image volume. We propose to make lung fissure segmentation more efficient by using geometric deep learning (GDL) on sparse point clouds.
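As a toy illustration of the input representation (not the paper's pipeline), a binary fissure-like segmentation mask can be reduced to a sparse point cloud in physical coordinates; the volume, spacing, and sample count below are illustrative:

```python
import numpy as np

def mask_to_point_cloud(mask, spacing, n_points, seed=0):
    """Sample a sparse point cloud (in millimetres) from a binary mask --
    the kind of input a geometric deep learning model consumes instead
    of the dense voxel grid."""
    coords = np.argwhere(mask)                       # (N, 3) voxel indices
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(coords), size=min(n_points, len(coords)),
                     replace=False)
    return coords[idx] * np.asarray(spacing)         # scale to physical units

# toy volume: a thin oblique "fissure" sheet inside a 64^3 grid
vol = np.zeros((64, 64, 64), dtype=bool)
for x in range(64):
    vol[x, :, np.clip(x // 2 + 10, 0, 63)] = True

pts = mask_to_point_cloud(vol, spacing=(1.0, 1.0, 1.5), n_points=1024)
```

The efficiency argument is visible in the numbers: the sheet occupies 4096 of 262,144 voxels, and the point cloud keeps only 1024 samples of it.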

Purpose: Accurate detection of central venous catheter (CVC) misplacement is crucial for patient safety and effective treatment. Existing artificial intelligence (AI) approaches often grapple with label inaccuracies and outputs that lack clinician-friendly interpretability. This study aims to introduce an approach that employs segmentation of support material and anatomy to enhance the precision and comprehensibility of CVC misplacement detection.

Registration of medical image data requires methods that can align anatomical structures precisely while applying smooth and plausible transformations. Ideally, these methods should furthermore operate quickly and apply to a wide variety of tasks. Deep learning-based image registration methods usually entail an elaborate learning procedure with the need for extensive training data.

In cardiac cine imaging, acquiring high-quality data is challenging and time-consuming due to the artifacts generated by the heart's continuous movement. Volumetric, fully isotropic data acquisition with high temporal resolution is, to date, intractable due to MR physics constraints. To assess whole-heart movement under minimal acquisition time, we propose a deep learning model that reconstructs the volumetric shape of multiple cardiac chambers from a limited number of input slices while simultaneously optimizing the slice acquisition orientation for this task.

3D human pose estimation is a key component of clinical monitoring systems. The clinical applicability of deep pose estimation models, however, is limited by their poor generalization under domain shifts along with their need for sufficient labeled training data. As a remedy, we present a novel domain adaptation method, adapting a model from a labeled source to a shifted unlabeled target domain.

Image registration for temporal ultrasound sequences can be very beneficial for image-guided diagnostics and interventions. Cooperative human-machine systems that enable seamless assistance for both inexperienced and expert users during ultrasound examinations rely on robust, real-time motion estimation. Yet rapid and irregular motion patterns, varying image contrast, and domain shifts across imaging devices pose a severe challenge to conventional real-time registration approaches.

Article Synopsis
  • Domain Adaptation (DA) is gaining traction in medical imaging, particularly for image segmentation, but most techniques have been tested on limited datasets, often focusing on single-class challenges.
  • The Cross-Modality Domain Adaptation (crossMoDA) challenge, part of the MICCAI 2021 conference, introduced the first comprehensive benchmark for unsupervised cross-modality DA aimed at segmenting brain structures critical for vestibular schwannoma treatment planning.
  • Using a dataset of 105 annotated contrast-enhanced T1 (ceT1) MR scans and non-annotated high-resolution T2 (hrT2) scans, the challenge involved 55 teams globally, achieving impressive segmentation accuracy closely resembling fully supervised methods.

Image registration is a fundamental medical image analysis task, and a wide variety of approaches have been proposed. However, only a few studies have comprehensively compared medical image registration approaches on a wide range of clinically relevant tasks. This limits the development of registration methods, the adoption of research advances into practice, and a fair benchmark across competing approaches.

Unlabelled: Deep learning-based image segmentation models rely strongly on capturing sufficient spatial context without requiring complex models that are hard to train with limited labeled data. For COVID-19 infection segmentation on CT images, training data are currently scarce. Attention models, in particular the most recent self-attention methods, have been shown to help gather contextual information within deep networks and benefit semantic segmentation tasks.

Deep learning-based medical image registration remains very difficult and often fails to improve over its classical counterparts when comprehensive supervision is not available, in particular for large transformations, including rigid alignment. The use of unsupervised, metric-based registration networks has become popular, but so far no universally applicable similarity metric is available for multimodal medical registration, requiring a trade-off between local contrast-invariant edge features or more global statistical metrics. In this work, we aim to improve over the use of handcrafted metric-based losses.
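The "global statistical metric" side of that trade-off is typically mutual information between the two images' intensity distributions. A small histogram-based sketch (bin count and toy data are illustrative, not any paper's settings):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two images -- an
    example of a global statistical multimodal similarity measure."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()                          # joint distribution
    px = p.sum(1, keepdims=True)                     # marginal of a
    py = p.sum(0, keepdims=True)                     # marginal of b
    nz = p > 0                                       # avoid log(0)
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
mi_related = mutual_information(img, 1.0 - img)      # inverted copy: high MI
mi_random = mutual_information(img, rng.random((64, 64)))  # unrelated: low MI
```

The inverted copy scores high even though its grayscale values disagree everywhere, which is exactly why such statistical metrics are attractive for multimodal registration; their weakness is that the statistics are global, making them prone to local optima.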

Background And Objective: Fast and robust alignment of pre-operative MRI planning scans to intra-operative ultrasound is an important aspect for automatically supporting image-guided interventions. Thus far, learning-based approaches have failed to tackle the intertwined objectives of fast inference computation time and robustness to unexpectedly large motion and misalignment. In this work, we propose a novel method that decouples deep feature learning and the computation of long ranging local displacement probability maps from fast and robust global transformation prediction.
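A local displacement probability map can be reduced to a single displacement vector via a soft-argmax expectation before any global transformation is fitted. A minimal sketch over a discretised 3D search range (the grid size and peak location are illustrative, not the paper's configuration):

```python
import numpy as np

def soft_argmax(prob, disp_range):
    """Expected displacement under a discretised probability map over
    candidate 3D displacements in [-disp_range, disp_range]^3."""
    offsets = np.arange(-disp_range, disp_range + 1)
    dz, dy, dx = np.meshgrid(offsets, offsets, offsets, indexing="ij")
    p = prob / prob.sum()                            # normalise to a distribution
    return np.array([(p * dz).sum(), (p * dy).sum(), (p * dx).sum()])

# toy map: nearly all probability mass at displacement (2, -1, 3)
r = 4
prob = np.full((2 * r + 1,) * 3, 1e-6)
prob[r + 2, r - 1, r + 3] = 1.0
d = soft_argmax(prob, r)
```

In the decoupled setting described above, many such per-keypoint displacement estimates (with their confidences) would then feed a robust global transformation fit, keeping the expensive feature learning separate from the fast geometric solve.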

A major goal of lung cancer screening is to identify individuals with particular phenotypes that are associated with high risk of cancer. Identifying relevant phenotypes is complicated by the variation in body position and body composition. In the brain, standardized coordinate systems (e.

Deep vein thrombosis (DVT) is a blood clot most commonly found in the leg, which can lead to fatal pulmonary embolism (PE). Compression ultrasound of the legs is the diagnostic gold standard, leading to a definitive diagnosis. However, many patients with possible symptoms are not found to have a DVT, resulting in long referral waiting times for patients and a large clinical burden for specialists.

Purpose: Body weight is a crucial parameter for patient-specific treatments, particularly in the context of proper drug dosage. Contactless weight estimation from visual sensor data constitutes a promising approach to overcome challenges arising in emergency situations. Machine learning-based methods have recently been shown to perform accurate weight estimation from point cloud data.

Deep learning-based medical image segmentation is an important step within diagnosis, and it relies strongly on capturing sufficient spatial context without requiring overly complex models that are hard to train with limited labelled data. Training data are particularly scarce for segmenting infection regions in CT images of COVID-19 patients. Attention models help gather contextual information within deep networks and benefit semantic segmentation tasks.

In the last two years, learning-based methods have started to show encouraging results in different supervised and unsupervised medical image registration tasks. Deep neural networks enable (near) real-time applications through fast inference times and have tremendous potential for increased registration accuracy through task-specific learning. However, estimation of large 3D deformations, for example present in inhale-to-exhale lung CT or interpatient abdominal MRI registration, is still a major challenge for the widely adopted U-Net-like network architectures.

Methods for deep learning-based medical image registration have only recently approached the quality of classical model-based image alignment. The dual challenge of a very large trainable parameter space and often insufficient availability of expert-supervised correspondence annotations has led to slower progress compared to other domains such as image segmentation. Yet image registration could benefit more directly from an iterative solution than segmentation does.

Purpose: Nonlinear multimodal image registration, for example, the fusion of computed tomography (CT) and magnetic resonance imaging (MRI), fundamentally depends on a definition of image similarity. Previous methods that derived modality-invariant representations focused on either global statistical grayscale relations or local structural similarity, both of which are prone to local optima. In contrast to most learning-based methods that rely on strong supervision of aligned multimodal image pairs, we aim to overcome this limitation for further practical use cases.
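The "local structural similarity" family mentioned above describes each location through relations to its neighbourhood rather than raw intensities. A much-simplified, illustrative relative of self-similarity descriptors (not the paper's method; neighbourhood, sigma, and image are toy choices) demonstrates invariance to intensity inversion:

```python
import numpy as np

def self_similarity_descriptor(img, sigma=0.5):
    """Per-pixel descriptor from squared differences to the 4 axial
    neighbours, mapped through a Gaussian kernel and normalised --
    a toy sketch of a modality-robust structural representation."""
    shifts = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    feats = []
    for dy, dx in shifts:
        diff = (np.roll(img, (dy, dx), axis=(0, 1)) - img) ** 2
        feats.append(np.exp(-diff / (2 * sigma ** 2)))
    d = np.stack(feats, axis=-1)
    return d / (d.sum(-1, keepdims=True) + 1e-8)     # normalise per pixel

img = np.random.default_rng(1).random((32, 32))
d1 = self_similarity_descriptor(img)
d2 = self_similarity_descriptor(1.0 - img)           # inverted contrast
gap = np.abs(d1 - d2).max()                          # descriptors agree
```

Because only intensity *differences* between neighbours enter the descriptor, inverting the image leaves it unchanged, which is the basic intuition behind comparing structure rather than grayscale values across modalities.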

Purpose: Despite their potential for improvement through supervision, deep learning-based registration approaches are difficult to train for large deformations in 3D scans due to excessive memory requirements. Methods: We propose a new 2.5D convolutional transformer architecture that enables us to learn a memory-efficient, weakly supervised deep learning model for multi-modal image registration.

Optical coherence tomography (OCT) enables the non-invasive acquisition of high-resolution three-dimensional cross-sectional images at micrometer scale and is mainly used in the field of ophthalmology for diagnosis as well as monitoring of eye diseases. OCT is also well established in other areas, such as dermatology. Due to its non-invasive nature, OCT is also employed for research studies involving animal models.

Knowledge of whole heart anatomy is a prerequisite for many clinical applications. Whole heart segmentation (WHS), which delineates substructures of the heart, can be very valuable for modeling and analysis of the anatomy and functions of the heart. However, automating this segmentation can be challenging due to the large variation of the heart shape, and different image qualities of the clinical data.

In brain tumor surgery, the quality and safety of the procedure can be impacted by intra-operative tissue deformation, called brain shift. Brain shift can move the surgical targets and other vital structures such as blood vessels, thus invalidating the pre-surgical plan. Intra-operative ultrasound (iUS) is a convenient and cost-effective imaging tool to track brain shift and tumor resection.

Purpose: For many years, deep convolutional neural networks have achieved state-of-the-art results on a wide variety of computer vision tasks. 3D human pose estimation is no exception, and results on public benchmarks are impressive. However, specialized domains, such as operating rooms, pose additional challenges.

Deep networks have set the state-of-the-art in most image analysis tasks by replacing handcrafted features with learned convolution filters within end-to-end trainable architectures. Still, the specifications of a convolutional network are subject to much manual design: the shape and size of the receptive field for convolutional operations is a very sensitive choice that has to be tuned for different image analysis applications. 3D fully-convolutional multi-scale architectures with skip-connections that excel at semantic segmentation and landmark localisation have huge memory requirements and rely on large annotated datasets, an important limitation for wider adoption in medical image analysis.
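The receptive field referred to above can be computed in closed form for a chain of convolution and pooling layers; a small sketch (the layer configurations are illustrative):

```python
def receptive_field(layers):
    """Closed-form receptive field of a chain of (kernel, stride)
    layers: each layer widens the field by (k - 1) times the current
    coordinate jump, and strides multiply that jump."""
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

# three 3x3 convs at stride 1: modest receptive field
rf_plain = receptive_field([(3, 1)] * 3)
# inserting a stride-2 downsampling makes later layers count double
rf_pool = receptive_field([(3, 1), (2, 2), (3, 1), (3, 1)])
```

This sensitivity (small architectural changes doubling or halving the effective field of view) is one reason the receptive field has to be re-tuned per application, as the text notes.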
