Deep convolutional neural networks for image segmentation do not learn the label structure explicitly and may produce segmentations with incorrect structure, e.g., disconnected cylindrical segments when segmenting tree-like structures such as airways or blood vessels.
Unsupervised domain adaptation is a popular method in medical image analysis, but it can be difficult to make it work: without labels to link the domains, the domains must be matched using feature distributions alone. If there is no additional information, this often leaves a choice between multiple mappings of the data that may be equally likely but not equally correct. In this paper we explore the fundamental problems that may arise in unsupervised domain adaptation, and discuss conditions under which it might still work.
Objectives: To evaluate changes in diaphragmatic function in Pompe disease using MRI over time, both during natural disease course and during treatment with enzyme replacement therapy (ERT).
Methods: In this prospective study, 30 adult Pompe patients and 10 healthy controls underwent pulmonary function tests and spirometry-controlled MRI twice, with an interval of 1 year. In the sagittal view of 3D gradient-echo breath-hold acquisitions, diaphragmatic motion (cranial-caudal ratio between end-inspiration and end-expiration) and curvature (diaphragm height and area ratio) were calculated using a machine learning algorithm based on convolutional neural networks.
The aim of this exploratory study was to evaluate diaphragmatic function across various neuromuscular diseases using spirometry-controlled MRI. We measured motion of the diaphragm relative to that of the thoracic wall (cranial-caudal ratio vs. anterior-posterior ratio; CC-AP ratio), and changes in diaphragmatic curvature (diaphragm height and area ratio) during inspiration in 12 adults with a neuromuscular disease and signs of respiratory muscle weakness, 18 healthy controls, and 35 adult Pompe patients - a group with prominent diaphragmatic weakness.
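The exact definition of the CC-AP ratio is not given in this summary; a plausible formulation, assuming it is the cranial-caudal inspiration/expiration ratio divided by the corresponding anterior-posterior ratio, could be sketched as:

```python
def motion_ratio(length_end_inspiration, length_end_expiration):
    """Ratio of a dimension at end-inspiration to end-expiration.
    Values near 1 indicate little motion in that direction."""
    return length_end_inspiration / length_end_expiration

def cc_ap_ratio(cc_insp, cc_exp, ap_insp, ap_exp):
    """Hypothetical CC-AP ratio: diaphragm (cranial-caudal) motion
    relative to thoracic-wall (anterior-posterior) motion."""
    return motion_ratio(cc_insp, cc_exp) / motion_ratio(ap_insp, ap_exp)

# A value near 1 would suggest the diaphragm and thoracic wall
# contribute similarly to the inspiratory volume change.
r = cc_ap_ratio(12.0, 10.0, 11.0, 10.0)
```

Function names and the exact ratio-of-ratios form are illustrative assumptions, not the study's published definition.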
Conditional Random Fields (CRFs) are often used to improve the output of an initial segmentation model, such as a convolutional neural network (CNN). Conventional CRF approaches in medical imaging use manually defined features, such as intensity to improve appearance similarity or location to improve spatial coherence. These features work well for some tasks, but can fail for others.
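A minimal sketch of what such a manually defined pairwise feature looks like, assuming the common Gaussian-kernel form over intensity and position (parameter names and values are illustrative):

```python
import numpy as np

def pairwise_potential(intensity_a, intensity_b, pos_a, pos_b,
                       theta_intensity=10.0, theta_pos=5.0, w=1.0):
    """Conventional CRF pairwise coupling: penalize label disagreement
    more strongly when two pixels are similar in intensity and close
    in location (Gaussian kernels on both features)."""
    intensity_term = (intensity_a - intensity_b) ** 2 / (2 * theta_intensity ** 2)
    pos_term = np.sum((np.asarray(pos_a, float) - np.asarray(pos_b, float)) ** 2) \
        / (2 * theta_pos ** 2)
    return w * np.exp(-intensity_term - pos_term)

# Nearby pixels with similar intensity couple strongly;
# dissimilar intensities yield a much weaker coupling.
strong = pairwise_potential(100.0, 102.0, (0, 0), (0, 1))
weak = pairwise_potential(100.0, 200.0, (0, 0), (0, 1))
```

Hand-tuned kernels like these are exactly what learned CRF features aim to replace when intensity or location alone does not capture the relevant similarity.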
Purpose: To develop and evaluate a fully automated deep learning-based method for assessment of intracranial internal carotid artery calcification (ICAC).
Materials And Methods: This was a secondary analysis of prospectively collected data from the Rotterdam study (2003-2006) to develop and validate a deep learning-based method for automated ICAC delineation and volume measurement. Two observers manually delineated ICAC on noncontrast CT scans of 2319 participants (mean age, 69 years ± 7 [standard deviation]; 1154 women [53.
Orphanet J Rare Dis
January 2021
Background: In Pompe disease, an inherited metabolic muscle disorder, severe diaphragmatic weakness often occurs. Enzyme replacement treatment is relatively ineffective for respiratory function, possibly because of irreversible damage to the diaphragm early in the disease course. Mildly impaired diaphragmatic function may not be recognized by spirometry, which is commonly used to study respiratory function.
Automatically finding multiple lesions in large images is a common problem in medical image analysis. Solving this problem can be challenging if, during optimization, the automated method cannot access information about the locations of the lesions, nor is it given individual examples of lesions. We propose a new weakly supervised detection method using neural networks that computes attention maps revealing the locations of brain lesions.
IEEE Trans Med Imaging
February 2019
Machine learning algorithms can have difficulties adapting to data from different sources, for example from different imaging modalities. We present and analyze three techniques for unsupervised cross-modality feature learning, using a shared autoencoder-like convolutional network that learns a common representation from multi-modal data. We investigate a form of feature normalization, a learning objective that minimizes cross-modality differences, and modality dropout, in which the network is trained with varying subsets of modalities.
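The modality dropout idea described above can be sketched as randomly zeroing entire modality channels during training while always keeping at least one, so the shared network learns representations that survive missing modalities (array layout and function name are illustrative assumptions):

```python
import numpy as np

def modality_dropout(batch, rng, keep_prob=0.5):
    """Zero out whole modality channels at random during training,
    ensuring at least one modality is always kept.

    batch: array of shape (batch, modalities, height, width).
    """
    n_modalities = batch.shape[1]
    keep = rng.random(n_modalities) < keep_prob
    if not keep.any():
        # Guarantee the network always sees at least one modality.
        keep[rng.integers(n_modalities)] = True
    mask = keep.astype(batch.dtype).reshape(1, -1, 1, 1)
    return batch * mask

rng = np.random.default_rng(0)
x = np.ones((2, 3, 4, 4))  # batch of 2, 3 modalities, 4x4 images
y = modality_dropout(x, rng)
```

Each training step then presents the shared autoencoder with a different subset of modalities, which discourages it from relying on any single one.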
The choice of features greatly influences the performance of a tissue classification system. Despite this, many systems are built with standard, predefined filter banks that are not optimized for that particular application. Representation learning methods such as restricted Boltzmann machines may outperform these standard filter banks because they learn a feature description directly from the training data.