Background: Multiparametric MRI (mpMRI) is an important non-invasive imaging tool for prostate cancer (PCa) detection and localization. Combined with radiomics analysis, features extracted from mpMRI have been used to predict PCa aggressiveness. T2 mapping provides quantitative information for PCa diagnosis but is not routinely available in clinical practice.
Purpose: To develop a self-supervised learning method to retrospectively estimate T1 and T2 values from clinical weighted MRI.
Methods: A self-supervised learning approach was constructed to estimate T1, T2, and proton density maps from conventional T1- and T2-weighted images. MR physics models were employed to regenerate the weighted images from the network outputs, and the network was optimized using a loss computed between the synthesized and input weighted images, along with additional constraints based on prior information.
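The core idea can be illustrated with a minimal sketch of such a physics-based self-supervised loss, assuming simplified spin-echo-style signal equations and hypothetical sequence parameters (the actual signal models, parameters, and prior-based constraints used in the study may differ):

import torch

def synthesize_weighted(pd, t1, t2, tr, te):
    # Simplified signal model: S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2)
    return pd * (1 - torch.exp(-tr / t1)) * torch.exp(-te / t2)

def self_supervised_loss(pd, t1, t2, t1w_input, t2w_input):
    # Hypothetical acquisition parameters in seconds; real values come from the imaging protocol.
    t1w_syn = synthesize_weighted(pd, t1, t2, tr=0.5, te=0.010)  # short TR/TE -> T1 weighting
    t2w_syn = synthesize_weighted(pd, t1, t2, tr=4.0, te=0.100)  # long TR/TE -> T2 weighting
    # The network producing pd, t1, t2 is optimized so the synthesized images match the acquired inputs.
    return torch.mean((t1w_syn - t1w_input) ** 2) + torch.mean((t2w_syn - t2w_input) ** 2)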
Purpose: To develop a deep learning-based method to retrospectively quantify T2 from conventional T1- and T2-weighted images.
Methods: Twenty-five subjects were imaged using a multi-echo spin-echo sequence to estimate reference prostate T2 maps. Conventional T1- and T2-weighted images were acquired as the input images.
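As a point of reference, T2 estimation from multi-echo spin-echo data typically fits a mono-exponential decay S(TE) = S0 * exp(-TE/T2) per voxel; a minimal log-linear fit with simulated echo times and signal values (illustrative numbers only, not the study's protocol):

import numpy as np

# Hypothetical echo times (ms) and a simulated voxel with T2 = 80 ms
te = np.array([20.0, 40.0, 60.0, 80.0, 100.0])
signal = 1000.0 * np.exp(-te / 80.0)

# Log-linearize S(TE) = S0 * exp(-TE/T2) and fit a straight line
slope, intercept = np.polyfit(te, np.log(signal), 1)
t2_estimate = -1.0 / slope          # recovers ~80 ms
s0_estimate = np.exp(intercept)
print(f"T2 = {t2_estimate:.1f} ms, S0 = {s0_estimate:.0f}")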
Purpose: To develop a deep learning method to synthesize conventional contrast-weighted images in the brain from MR multitasking spatial factors.
Methods: Eighteen subjects were imaged using a whole-brain quantitative T1-T2-T1ρ MR multitasking sequence. Conventional contrast-weighted images consisting of T1 MPRAGE, T1 gradient echo, and T2 fluid-attenuated inversion recovery were acquired as target images.
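One simple way to picture such a mapping is sketched below, under the assumption of a plain convolutional network that takes the multitasking spatial-factor maps as input channels and produces the three target contrasts (the study's actual architecture and training losses are not detailed in this abstract):

import torch
import torch.nn as nn

class FactorToContrastNet(nn.Module):
    # Toy 2D network: spatial-factor channels -> three contrast-weighted images
    def __init__(self, in_ch=8, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_ch, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

factors = torch.randn(1, 8, 128, 128)   # hypothetical spatial-factor maps (8 channels assumed)
targets = torch.randn(1, 3, 128, 128)   # stand-ins for the MPRAGE / GRE / FLAIR targets
loss = nn.functional.l1_loss(FactorToContrastNet()(factors), targets)  # example training loss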
Automatic segmentation of skin lesions is crucial for diagnosing and treating skin diseases. Although current medical image segmentation methods have substantially improved skin lesion segmentation, two major challenges still limit performance: (i) segmentation targets have irregular shapes and diverse sizes, and (ii) lesions often show low contrast or blurred boundaries against the background. To address these issues, this study proposes a Gated Fusion Attention Network (GFANet), which uses two progressive relation decoders to accurately segment skin lesion images.
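A minimal, hypothetical sketch of a gated fusion step in the spirit of such a design (the published GFANet, with its progressive relation decoders, is considerably more involved):

import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    # Fuse two feature maps with a learned per-pixel sigmoid gate (toy illustration)
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(2 * channels, channels, 1), nn.Sigmoid())
    def forward(self, low_level, high_level):
        g = self.gate(torch.cat([low_level, high_level], dim=1))  # gate values in [0, 1]
        return g * low_level + (1 - g) * high_level               # gated blend of the two streams

fused = GatedFusion(32)(torch.randn(1, 32, 64, 64), torch.randn(1, 32, 64, 64))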
Purpose: To develop a deep-learning-based method to quantify multiple parameters in the brain from conventional contrast-weighted images.
Methods: Eighteen subjects were imaged using an MR Multitasking sequence to generate reference T1 and T2 maps in the brain. Conventional contrast-weighted images consisting of T1 MPRAGE, T1 GRE, and T2 FLAIR were acquired as input images.
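Schematically, this corresponds to supervised regression from the stacked contrast-weighted intensities to the reference parameter maps; as a hedged illustration only, a per-voxel network mapping the three weighted intensities to (T1, T2) could be trained as follows (the model and loss here are placeholders, not the paper's design):

import torch
import torch.nn as nn

# Per-voxel regressor: three weighted intensities -> (T1, T2)
model = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

voxels = torch.rand(1024, 3)     # hypothetical normalized MPRAGE / GRE / FLAIR intensities
ref_maps = torch.rand(1024, 2)   # matching reference T1 and T2 values from MR Multitasking

for _ in range(100):             # toy training loop
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(voxels), ref_maps)
    loss.backward()
    optimizer.step()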