Publications by authors named "Liangqiong Qu"

Artificial intelligence (AI) shows potential to improve health care by leveraging data to build models that can inform clinical workflows. However, access to large quantities of diverse data is needed to develop robust generalizable models. Data sharing across institutions is not always feasible due to legal, security, and privacy concerns.


Global investigation of medulloblastoma has been hindered by the widespread inaccessibility of molecular subgroup testing and paucity of data. To bridge this gap, we established an international molecularly characterized database encompassing 934 medulloblastoma patients from thirteen centers across China and the United States. We demonstrate how image-based machine learning strategies have the potential to create an alternative pathway for non-invasive, presurgical, and low-cost molecular subgroup prediction in the clinical management of medulloblastoma.


Brain magnetic resonance imaging (MRI) provides detailed soft tissue contrasts that are critical for disease diagnosis and neuroscience research. Higher MRI resolution typically comes at the cost of signal-to-noise ratio (SNR) and tissue contrast, particularly for more common 3 Tesla (3T) MRI scanners. At ultra-high magnetic field strength, 7 Tesla (7T) MRI allows for higher resolution with greater tissue contrast and SNR.


Purpose: To develop a deep learning approach that enables ultra-low-dose (1% of the standard clinical dosage of 3 MBq/kg), ultrafast whole-body PET reconstruction in cancer imaging.

Materials And Methods: In this Health Insurance Portability and Accountability Act-compliant study, serial fluorine 18-labeled fluorodeoxyglucose PET/MRI scans of pediatric patients with lymphoma were retrospectively collected from two cross-continental medical centers between July 2015 and March 2020. Global similarity between baseline and follow-up scans was used to develop Masked-LMCTrans, a longitudinal multimodality coattentional convolutional neural network (CNN) transformer that provides interaction and joint reasoning between serial PET/MRI scans from the same patient.


The collection and curation of large-scale medical datasets from multiple institutions is essential for training accurate deep learning models, but privacy concerns often hinder data sharing. Federated learning (FL) is a promising solution that enables privacy-preserving collaborative learning among different institutions, but it generally suffers from performance deterioration due to heterogeneous data distributions and a lack of quality labeled data. In this paper, we present a robust and label-efficient self-supervised FL framework for medical image analysis.

Article Synopsis
  • Five AI techniques were tested, including familiar ones like U-Net and GAN as well as newer transformer-based models, using a large number of PET/MRI scans from two different universities.
  • Results showed that the SwinIR model performed best at restoring low-count images, achieving high structural similarity scores, especially at higher dose levels, indicating promise for enhancing diagnostic quality in medical imaging.
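The structural similarity scores mentioned above can be sketched directly. Below is a minimal global-SSIM computation in numpy (the window-based SSIM used in practice applies the same formula over local windows); the arrays are random stand-ins for real PET data, not results from the study:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0, k1=0.01, k2=0.03):
    """Global SSIM between two images (single window over the full image)."""
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
full_dose = rng.random((64, 64))                     # stand-in for a standard-dose image
restored = full_dose + 0.01 * rng.random((64, 64))   # stand-in for a restored low-count image

print(round(global_ssim(full_dose, restored), 3))    # close to 1 for a faithful restoration
```

A score near 1 indicates the restored image preserves the structure of the full-dose reference, which is how the models above were compared.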

Federated learning is an emerging research paradigm enabling collaborative training of machine learning models among different organizations while keeping data private at each institution. Despite recent progress, there remain fundamental challenges such as the lack of convergence and the potential for catastrophic forgetting across real-world heterogeneous devices. In this paper, we demonstrate that self-attention-based architectures (e.g., Transformers) are more robust to distribution shifts and hence improve federated learning over heterogeneous data.


Federated learning is an emerging research paradigm that enables collaborative training of deep learning models without sharing patient data. However, data distributions are usually heterogeneous across institutions, which may reduce the performance of models trained with federated learning. In this study, we propose a novel heterogeneity-aware federated learning method, SplitAVG, to overcome the performance drops caused by data heterogeneity in federated learning.
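SplitAVG's specific aggregation is defined in the paper; as a point of reference, the standard FedAvg baseline it improves upon simply averages site models weighted by local sample counts. A minimal numpy sketch with two hypothetical institutions (all values are illustrative):

```python
import numpy as np

def fed_avg(site_params, site_sizes):
    """FedAvg baseline: average per-site model parameters, weighted by data volume.

    site_params: list of parameter vectors, one per institution.
    site_sizes:  number of local training samples held at each institution.
    """
    sizes = np.asarray(site_sizes, dtype=float)
    coeffs = sizes / sizes.sum()                      # normalize weights to sum to 1
    return sum(c * w for c, w in zip(coeffs, site_params))

# Two hypothetical institutions with different amounts of local data
w_a = np.array([1.0, 2.0])
w_b = np.array([3.0, 6.0])
global_w = fed_avg([w_a, w_b], site_sizes=[100, 300])
print(global_w)  # [2.5 5. ]
```

Under heterogeneous (non-IID) data, this plain weighted average is exactly where performance degrades, which motivates heterogeneity-aware alternatives like SplitAVG.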


Light scattering by biological tissues sets a limit to the penetration depth of high-resolution optical microscopy imaging of live mammals in vivo. An effective approach to reduce light scattering and increase imaging depth is to extend the excitation and emission wavelengths to the second near-infrared window (NIR-II) at >1,000 nm, also called the short-wavelength infrared window. Here we show that biocompatible core-shell lead sulfide/cadmium sulfide quantum dots emitting at ~1,880 nm, combined with superconducting nanowire single-photon detectors for single-photon detection up to 2,000 nm, enable a one-photon excitation fluorescence imaging window in the 1,700-2,000 nm (NIR-IIc) range with 1,650 nm excitation, the longest one-photon excitation and emission wavelengths for in vivo mouse imaging so far.


Collaborative learning, which enables collaborative and decentralized training of deep neural networks at multiple institutions in a privacy-preserving manner, is rapidly emerging as a valuable technique in healthcare applications. However, its distributed nature often leads to significant heterogeneity in data distributions across institutions. In this paper, we present a novel generative replay strategy to address the challenge of data heterogeneity in collaborative learning methods.
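One way to picture generative replay: each institution augments its local training batches with synthetic samples drawn from a shared generative model, so local updates see a less site-skewed distribution. A schematic numpy sketch with a stand-in "generator" (not the paper's actual model; all names and shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def generator(n_samples, dim=4):
    """Stand-in for a trained generative model producing synthetic samples."""
    return rng.normal(loc=0.0, scale=1.0, size=(n_samples, dim))

def replay_batch(local_batch, replay_ratio=0.5):
    """Mix locally held data with generated samples to soften site-specific skew."""
    n_replay = int(len(local_batch) * replay_ratio)
    synthetic = generator(n_replay, dim=local_batch.shape[1])
    mixed = np.concatenate([local_batch, synthetic], axis=0)
    return mixed[rng.permutation(len(mixed))]         # shuffle real and synthetic together

local = rng.normal(loc=5.0, size=(8, 4))              # heavily site-biased local data
batch = replay_batch(local)
print(batch.shape)  # (12, 4): 8 real + 4 generated samples per training batch
```

The replay ratio controls how strongly the shared synthetic distribution counterbalances each site's local bias during training.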


In vivo fluorescence/luminescence imaging in the near-infrared-IIb (NIR-IIb, 1,500 to 1,700 nm) window under <1,000 nm excitation can afford subcentimeter imaging depth without any tissue autofluorescence, promising high-precision intraoperative navigation in the clinic. Here, we developed a compact imager for concurrent visible photographic and NIR-II (1,000 to 3,000 nm) fluorescence imaging for preclinical image-guided surgery. Biocompatible erbium-based rare-earth nanoparticles (ErNPs) with bright down-conversion luminescence in the NIR-IIb window were conjugated to TRC105 antibody for molecular imaging of CD105 angiogenesis markers in 4T1 murine breast tumors.


Accurate segmentation of the brain into gray matter, white matter, and cerebrospinal fluid using magnetic resonance (MR) imaging is critical for visualization and quantification of brain anatomy. Compared to 3T MR images, 7T MR images exhibit higher tissue contrast, which aids accurate tissue delineation for training segmentation models. In this paper, we propose a cascaded nested network (CaNes-Net) for segmentation of 3T brain MR images, trained with tissue labels delineated from the corresponding 7T images.


Segmenting breast tumors from dynamic contrast-enhanced magnetic resonance (DCE-MR) images is a critical step for early detection and diagnosis of breast cancer. However, variable shapes and sizes of breast tumors, as well as inhomogeneous background, make it challenging to accurately segment tumors in DCE-MR images. Therefore, in this article, we propose a novel tumor-sensitive synthesis module and demonstrate its usage after being integrated with tumor segmentation.


Noninvasive optical imaging with deep tissue penetration and high spatiotemporal resolution is important for longitudinal single-cell-level studies of biology in live mammals, but has been challenging due to light scattering. Here, we developed near-infrared II (NIR-II) (1,000 to 1,700 nm) structured-illumination light-sheet microscopy (NIR-II SIM) with ultralong excitation and emission wavelengths up to ∼1,540 and ∼1,700 nm, respectively, suppressing light scattering to afford large volumetric three-dimensional (3D) imaging of tissues with deep-axial penetration depths. Integrating structured illumination into NIR-II light-sheet microscopy further diminished background and improved spatial resolution by approximately twofold.


Accurate lesion segmentation from endoscopy images is a fundamental task for the automated diagnosis of gastrointestinal (GI) tract diseases. Previous studies usually rely on hand-crafted features to represent endoscopy images, treating feature definition and lesion segmentation as two standalone tasks. Because of the possible mismatch between features and segmentation models, these methods often yield sub-optimal performance.


Background: A generative adversarial network could be used for high-resolution (HR) medical image synthesis with reduced scan time.

Purpose: To evaluate the potential of using a deep convolutional generative adversarial network (DCGAN) for generating HR and HR images based on their corresponding low-resolution (LR and LR) images.

Study Type: This was a retrospective analysis of a prospectively acquired cohort.


Ultra-high field 7T magnetic resonance imaging (MRI) scanners produce images with exceptional anatomical details, which can facilitate diagnosis and prognosis. However, 7T MRI scanners are often cost prohibitive and hence inaccessible. In this paper, we propose a novel wavelet-based semi-supervised adversarial learning framework to synthesize 7T MR images from their 3T counterparts.


Ultra-high field 7T MRI scanners, while producing images with exceptional anatomical details, are cost prohibitive and hence highly inaccessible. In this paper, we introduce a novel deep learning network that fuses complementary information from spatial and wavelet domains to synthesize 7T T1-weighted images from their 3T counterparts. Our deep learning network leverages wavelet transformation to facilitate effective multi-scale reconstruction, taking into account both low-frequency tissue contrast and high-frequency anatomical details.
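The role of the wavelet transform can be illustrated with a single-level 2-D Haar decomposition, written here in plain numpy rather than the network's actual learned transform: the approximation subband carries the low-frequency tissue contrast, while the three detail subbands carry the high-frequency anatomical structure, and the two can be recombined exactly.

```python
import numpy as np

def haar_decompose(img):
    """One-level 2-D Haar transform: (approximation, (horizontal, vertical, diagonal))."""
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 4
    return a, (h, v, d)

def haar_reconstruct(a, details):
    """Invert the one-level Haar transform exactly."""
    h, v, d = details
    img = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    img[0::2, 0::2] = a + h + v + d
    img[0::2, 1::2] = a + h - v - d
    img[1::2, 0::2] = a - h + v - d
    img[1::2, 1::2] = a - h - v + d
    return img

slice_3t = np.random.default_rng(1).random((64, 64))  # stand-in for a 3T slice
approx, details = haar_decompose(slice_3t)
assert np.allclose(haar_reconstruct(approx, details), slice_3t)  # perfect reconstruction
```

A multi-scale network can then supervise the approximation and detail subbands separately, which is the intuition behind reconstructing low-frequency contrast and high-frequency detail as distinct targets.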


Obtaining accurate segmentation of the prostate and nearby organs at risk (e.g., bladder and rectum) in CT images is critical for radiotherapy of prostate cancer.


Sufficient data with complete annotation is essential for training deep models to automatically and accurately segment CT male pelvic organs, especially given challenges such as low contrast and large shape variation. However, manual annotation is expensive in both cost and human effort, so completely annotated data is often insufficient in real applications. To this end, we propose a novel deep framework to segment male pelvic organs in CT images using incomplete annotations delineated in a very user-friendly manner.


We propose a novel dual-domain convolutional neural network framework to improve the structural information of routine 3T images. We introduce a parameter-efficient butterfly network that involves two complementary domains: a spatial domain and a frequency domain. The butterfly network allows these two domains to interact in learning the complex mapping from 3T to 7T images.
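The complementarity of the two domains can be illustrated with a simple FFT-based frequency split; the butterfly network learns this interaction rather than applying a fixed mask, so the cutoff radius and array names below are purely illustrative:

```python
import numpy as np

def split_frequency(img, radius=8):
    """Split an image into low- and high-frequency components via FFT masking."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    rows, cols = img.shape
    y, x = np.ogrid[:rows, :cols]
    dist = np.sqrt((y - rows // 2) ** 2 + (x - cols // 2) ** 2)
    low_mask = dist <= radius                          # keep only central (low) frequencies
    low = np.fft.ifft2(np.fft.ifftshift(spectrum * low_mask)).real
    high = img - low                                   # residual = fine structural detail
    return low, high

img = np.random.default_rng(2).random((32, 32))        # stand-in for a 3T slice
low, high = split_frequency(img)
assert np.allclose(low + high, img)                    # the two branches are complementary
```

The low-frequency component corresponds to overall tissue contrast and the high-frequency residual to the fine anatomy that 7T images resolve better, which is why processing both domains jointly helps the 3T-to-7T mapping.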


Non-invasive deep-tissue three-dimensional optical imaging of live mammals with high spatiotemporal resolution is challenging owing to light scattering. We developed near-infrared II (1,000-1,700 nm) light-sheet microscopy with excitation and emission of up to approximately 1,320 nm and 1,700 nm, respectively, for optical sectioning at a penetration depth of approximately 750 μm through live tissues without invasive surgery and at a depth of approximately 2 mm in glycerol-cleared brain tissues. Near-infrared II light-sheet microscopy in normal and oblique configurations enabled in vivo imaging of live mice through intact tissue, revealing abnormal blood flow and T-cell motion in tumor microcirculation and mapping out programmed-death ligand 1 and programmed cell death protein 1 in tumors with cellular resolution.


Numerous efforts have been made to design various low-level saliency cues for RGBD saliency detection, such as color and depth contrast features as well as background and color compactness priors. However, how these low-level saliency cues interact with each other and how they can be effectively incorporated to generate a master saliency map remain challenging problems. In this paper, we design a new convolutional neural network (CNN) to automatically learn the interaction mechanism for RGBD salient object detection.


Extracting or separating intrinsic information and illumination from natural images is crucial for better solving computer vision tasks. In this paper, we present a new illumination-based color space, the IL (intrinsic information and lighting level) space. Its first two channels represent 2D intrinsic information, and the third channel is for lighting levels.


In this paper, we propose a novel, effective, and fast method to obtain a color-illumination-invariant and shadow-free image from a single outdoor image. Unlike state-of-the-art shadow-removal methods that require either shadow detection or statistical learning, we set up a linear equation set for each pixel value vector based on physically-based shadow invariants, deduce a pixel-wise orthogonal decomposition for its solutions, and then obtain an illumination invariant vector for each pixel value vector in the image. The illumination invariant vector is the unique particular solution of the linear equation set, which is orthogonal to its free solutions.
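The orthogonal decomposition has a direct numerical analogue: for an underdetermined linear system Av = b, numpy's least-squares solver returns the minimum-norm solution, which is precisely the particular solution orthogonal to all homogeneous (free) solutions in the null space of A. The system below is illustrative, not the paper's actual shadow-invariant equations:

```python
import numpy as np

# Illustrative underdetermined system: one equation, three unknowns
# (a stand-in for one pixel value vector's constraint)
A = np.array([[1.0, 2.0, 2.0]])
b = np.array([9.0])

# lstsq returns the minimum-norm solution: the unique particular solution
# orthogonal to every free solution of the homogeneous system Av = 0
v, *_ = np.linalg.lstsq(A, b, rcond=None)
print(v)  # [1. 2. 2.]

# Basis for the free solutions (null space of A), via the SVD
_, sing, vt = np.linalg.svd(A)
rank = int(np.sum(sing > 1e-12))
free_solutions = vt[rank:]

print(np.allclose(free_solutions @ v, 0.0))  # True: v is orthogonal to the free solutions
```

Uniqueness follows because any other particular solution differs from v by a null-space vector, so only v has no component along the free solutions, mirroring how the illumination invariant vector is singled out per pixel.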
