Deep learning (DL) models for medical image classification frequently struggle to generalize to data from outside institutions. Additional clinical data are also rarely collected to comprehensively assess and understand model performance amongst subgroups. Following the development of a single-center model to identify the lung sliding artifact on lung ultrasound (LUS), we pursued a validation strategy using external LUS data.
Self-supervised pretraining leverages large amounts of unlabelled data and has been shown to improve feature representations for transfer learning. This review summarizes recent research into its usage in X-ray, computed tomography, magnetic resonance, and ultrasound imaging, concentrating on studies that compare self-supervised pretraining to fully supervised learning for diagnostic tasks such as classification and segmentation. The most pertinent finding is that self-supervised pretraining generally improves downstream task performance compared to full supervision, most prominently when unlabelled examples greatly outnumber labelled examples.
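The pretrain-then-fine-tune workflow the review examines can be sketched in miniature. In this sketch, PCA on the unlabelled pool stands in for a real self-supervised objective (contrastive or masked-prediction losses in practice); all data, dimensions, and variable names are illustrative, not from any of the reviewed studies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: two classes in 20-D; the unlabelled pool vastly outnumbers
# the labelled set -- the regime where pretraining helps most.
n_unlab, n_lab, d, k = 500, 20, 20, 4
centres = rng.normal(size=(2, d))
X_unlab = centres[rng.integers(0, 2, n_unlab)] + 0.5 * rng.normal(size=(n_unlab, d))
y_lab = rng.integers(0, 2, n_lab)
X_lab = centres[y_lab] + 0.5 * rng.normal(size=(n_lab, d))

# Stage 1 -- "pretraining" on unlabelled data only. PCA is a stand-in
# pretext here; real pipelines use self-supervised losses instead.
mu = X_unlab.mean(axis=0)
_, _, Vt = np.linalg.svd(X_unlab - mu, full_matrices=False)
encode = lambda X: (np.asarray(X) - mu) @ Vt[:k].T

# Stage 2 -- fine-tune a small logistic head on the few labelled examples.
Z = encode(X_lab)
w, b = np.zeros(k), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))   # predicted probabilities
    g = p - y_lab                            # logistic-loss gradient
    w -= 0.1 * Z.T @ g / n_lab
    b -= 0.1 * g.mean()

pred = (1.0 / (1.0 + np.exp(-(Z @ w + b)))) > 0.5
train_acc = (pred == y_lab).mean()
```

The point of the sketch is the division of labour: the encoder never sees a label, and the labelled set only has to fit a small head on top of it.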
Objectives: To evaluate the accuracy of a bedside, real-time deployment of a deep learning (DL) model capable of distinguishing between normal (A line pattern) and abnormal (B line pattern) lung parenchyma on lung ultrasound (LUS) in critically ill patients.
Design: Prospective, observational study evaluating the performance of a previously trained LUS DL model. Enrolled patients received a LUS examination with simultaneous DL model predictions using a portable device.
Background: Annotating large medical imaging datasets is an arduous and expensive task, especially when the datasets in question are not organized according to deep learning goals. Here, we propose a method that exploits the hierarchical organization of annotation tasks to optimize efficiency.
Methods: We trained a machine learning model to accurately distinguish between two classes of lung ultrasound (LUS) views using 2908 clips from a larger dataset.
Pneumothorax is a potentially life-threatening condition that can be rapidly and accurately assessed via the lung sliding artefact generated using lung ultrasound (LUS). Access to LUS is limited by user dependence and a shortage of training. Image classification using deep learning methods can automate LUS interpretation but has not been thoroughly studied for lung sliding.
Lung ultrasound (LUS) is an accurate thoracic imaging technique distinguished by its handheld size, low cost, and lack of radiation. User dependence and poor access to training have limited the impact and dissemination of LUS outside of acute care hospital environments. Automated interpretation of LUS using deep learning can overcome these barriers by increasing accuracy while allowing point-of-care use by non-experts.
Objectives: Lung ultrasound (LUS) is a portable, low-cost respiratory imaging tool but is challenged by user dependence and a lack of diagnostic specificity. It is unknown whether the advantages of LUS implementation could be paired with deep learning (DL) techniques to match or exceed human-level diagnostic specificity among similar-appearing, pathological LUS images.
Design: A convolutional neural network (CNN) was trained on LUS images with B lines of different aetiologies.
Purpose: In the context of analyzing neck vascular morphology, this work formulates and compares Mask R-CNN and U-Net-based algorithms to automatically segment the carotid artery (CA) and internal jugular vein (IJV) from transverse neck ultrasound (US).
Methods: US scans of the neck vasculature were collected to produce a dataset of 2439 images and their respective manual segmentations. Fourfold cross-validation was employed to train and evaluate the Mask R-CNN and U-Net models.
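Fourfold cross-validation simply rotates which quarter of the data is held out for evaluation while the remaining three quarters are used for training. A minimal index-splitting sketch (the dataset size matches the abstract; the splitting function and seed are illustrative assumptions):

```python
import random

def kfold_indices(n, k=4, seed=0):
    """Shuffle indices 0..n-1 and deal them into k near-equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

folds = kfold_indices(2439, k=4)

# Each round trains on three folds and evaluates on the held-out fourth,
# so every image is used for evaluation exactly once across the 4 rounds.
for held_out in range(4):
    val_idx = folds[held_out]
    train_idx = [j for f, fold in enumerate(folds) if f != held_out for j in fold]
    # model.fit(images[train_idx]); model.evaluate(images[val_idx])  # per fold
```

Averaging the per-fold metrics gives a performance estimate that does not depend on one lucky (or unlucky) train/test split.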
The authors present a deep learning algorithm for automatic centroid localisation of out-of-plane ultrasound (US) needle reflections, yielding a semi-automatic US probe calibration algorithm. A convolutional neural network was trained on a dataset of 3825 images at a 6 cm imaging depth to predict the position of the centroid of a needle reflection. Applying the automatic centroid localisation algorithm to a test set of 614 annotated images produced a root mean squared error of 0.
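The reported error metric is the root mean squared distance between predicted and annotated centroid positions. A small sketch of that computation (the function name and 2-D pixel-coordinate convention are assumptions, not taken from the paper):

```python
import numpy as np

def centroid_rmse(pred, true):
    """RMSE of Euclidean distances between predicted and annotated
    centroid coordinates, both arrays shaped (n, 2)."""
    pred = np.asarray(pred, dtype=float)
    true = np.asarray(true, dtype=float)
    sq_dist = np.sum((pred - true) ** 2, axis=1)  # per-image squared distance
    return float(np.sqrt(sq_dist.mean()))

# e.g. one perfect prediction and one off by a 3-4-5 triangle:
err = centroid_rmse([[0, 0], [3, 4]], [[0, 0], [0, 0]])  # sqrt((0 + 25) / 2)
```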