Diffusion-based generative models are at the forefront of generative artificial intelligence (AI) research. Recent studies in physics have suggested that the renormalization group (RG) can be conceptualized as a diffusion process. This insight motivates us to develop a diffusion-based generative model by reversing the momentum-space RG flow.
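For intuition only, here is a minimal sketch of the kind of forward process this idea suggests: an "RG-like" coarse-graining that progressively suppresses high-momentum (high-frequency) Fourier modes and injects noise; a generative model would be trained to invert such steps. The cutoff schedule, noise scale, and function names are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch, NOT the authors' method: forward coarse-graining
# in momentum (Fourier) space. Each step drops modes above a UV cutoff
# and adds noise; the reverse (denoising) direction would be learned.
import numpy as np

def rg_forward_step(x, cutoff, noise_std, rng):
    """Damp Fourier modes above `cutoff` (cycles/pixel) and add noise."""
    k = np.fft.fftfreq(x.shape[0])
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k_mag = np.sqrt(kx**2 + ky**2)
    x_hat = np.fft.fft2(x)
    x_hat[k_mag > cutoff] = 0.0              # coarse-grain: drop UV modes
    x_coarse = np.real(np.fft.ifft2(x_hat))
    return x_coarse + noise_std * rng.standard_normal(x.shape)

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 64))            # stand-in for a data sample
for cutoff in [0.4, 0.3, 0.2, 0.1]:          # progressively lower UV cutoff
    x = rg_forward_step(x, cutoff, noise_std=0.05, rng=rng)
```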
Curation of large, diverse MRI datasets via multi-institutional collaborations can help improve learning of generalizable synthesis models that reliably translate source- onto target-contrast images. To facilitate collaborations, federated learning (FL) adopts decentralized model training while mitigating privacy concerns by avoiding sharing of imaging data. However, conventional FL methods can be impaired by the inherent heterogeneity in the data distribution, with domain shifts evident within and across imaging sites.
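As a reference point for what "conventional FL" does, the sketch below shows FedAvg-style aggregation, where a server averages site-local parameters weighted by local dataset size; this is background context, not the paper's proposed method, and all names and shapes are illustrative.

```python
# Minimal sketch of conventional FL aggregation (FedAvg-style), given
# here as background; not the paper's method. Each site trains locally,
# then the server averages parameters weighted by local sample count.
import numpy as np

def fedavg(site_params, site_sizes):
    """Weighted average of per-site parameter vectors."""
    weights = np.asarray(site_sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, site_params))

params = [np.random.randn(10) for _ in range(3)]  # local model weights
sizes = [120, 300, 80]                            # local sample counts
global_params = fedavg(params, sizes)
```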
We consider synchronous data-parallel neural network training with a fixed large batch size. While the large batch size provides a high degree of parallelism, it degrades generalization performance due to the low gradient noise scale. We propose a general learning rate adjustment framework and three critical heuristics that address this generalization gap.
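The abstract does not name its three heuristics, so as one widely used example of a large-batch learning rate adjustment (Goyal et al., 2017, not necessarily the paper's scheme), the sketch below combines the linear scaling rule with a gradual warmup; all parameter values are illustrative.

```python
# Illustrative only: a common large-batch adjustment, not necessarily
# one of the paper's three heuristics. The learning rate is scaled
# linearly with batch size and ramped up over a warmup period.
def lr_schedule(step, base_lr=0.1, base_batch=256, batch=8192,
                warmup_steps=500):
    """Linearly scale LR with batch size, ramping up over warmup."""
    target_lr = base_lr * batch / base_batch   # linear scaling rule
    if step < warmup_steps:
        return target_lr * (step + 1) / warmup_steps
    return target_lr

print(lr_schedule(0))      # small LR at the start of warmup
print(lr_schedule(1000))   # full scaled LR after warmup
```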
IEEE Trans Med Imaging, July 2023
Multi-institutional efforts can facilitate training of deep MRI reconstruction models, though privacy risks arise during cross-site sharing of imaging data. Federated learning (FL) has recently been introduced to address privacy concerns by enabling distributed training without transfer of imaging data. Existing FL methods employ conditional reconstruction models to map from undersampled to fully-sampled acquisitions via explicit knowledge of the accelerated imaging operator.
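For concreteness, the accelerated imaging operator referred to here is typically y = M F(x): Fourier encoding F of the image followed by a k-space sampling mask M. The sketch below shows this standard forward model; the mask pattern and acceleration factor are illustrative assumptions.

```python
# Minimal sketch of the standard accelerated-MRI forward operator
# y = M * F(x): Fourier transform followed by k-space undersampling.
# Mask pattern and acceleration factor are illustrative.
import numpy as np

def forward_operator(image, mask):
    """Apply Fourier encoding followed by k-space undersampling."""
    kspace = np.fft.fft2(image)
    return mask * kspace                      # keep only sampled locations

rng = np.random.default_rng(0)
image = rng.standard_normal((128, 128))       # stand-in for a full image
mask = rng.random((128, 128)) < 0.25          # ~4x random undersampling
undersampled = forward_operator(image, mask)
zero_filled = np.real(np.fft.ifft2(undersampled))  # naive reconstruction
```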
We study the problem of communicating a distributed correlated memoryless source over a memoryless network, from source nodes to destination nodes, under quadratic distortion constraints. We establish the following two complementary results: 1) for an arbitrary memoryless network, among all distributed memoryless sources of a given correlation, Gaussian sources are least compressible, that is, they admit the smallest set of achievable distortion tuples; and 2) for any memoryless source to be communicated over a memoryless additive-noise network, among all noise processes of a given correlation, Gaussian noise admits the smallest set of achievable distortion tuples. We establish these results constructively by showing how schemes for the corresponding Gaussian problems can be applied to achieve similar performance for (source or noise) distributions that are not necessarily Gaussian but have the same covariance.
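As background intuition for the "least compressible" claim (a classical point-to-point fact stated here for context, not a result of this paper): among sources of a fixed variance, the Gaussian has the largest rate-distortion function under squared-error distortion.

```latex
% Point-to-point background: Gaussian extremality under quadratic
% distortion. For any source X with variance \sigma^2 and a Gaussian
% source X_G of the same variance,
R_X(D) \;\le\; R_{X_G}(D) \;=\; \frac{1}{2}\log\frac{\sigma^2}{D},
\qquad 0 < D \le \sigma^2,
% so the Gaussian demands the largest rate at every distortion level,
% i.e., it is the least compressible source of a given second moment.
```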
Determining the susceptibility distribution from the magnetic field measured in a magnetic resonance (MR) scanner is an ill-posed inverse problem, because the convolution kernel in the forward problem contains zeroes. An algorithm called morphology enabled dipole inversion (MEDI), which incorporates spatial prior information, has been proposed to generate a quantitative susceptibility map (QSM). The accuracy of QSM can be validated experimentally.
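The zeroes in question come from the well-known k-space dipole kernel D(k) = 1/3 - k_z^2/|k|^2, which vanishes on the magic-angle cone k_z^2 = |k|^2/3, so those field components carry no susceptibility information. The sketch below visualizes this; the grid size and threshold are illustrative.

```python
# Minimal sketch of why dipole inversion is ill-posed: the k-space
# dipole kernel D(k) = 1/3 - kz^2/|k|^2 vanishes on the magic-angle
# cone, erasing susceptibility information along it.
import numpy as np

n = 64
k = np.fft.fftfreq(n)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = np.inf                      # avoid division by zero at k = 0
dipole = 1.0 / 3.0 - kz**2 / k2           # unit-dipole kernel in k-space

near_zero = np.abs(dipole) < 1e-2         # modes that are (nearly) erased
print(f"{near_zero.mean():.1%} of k-space lies near the zero cone")
```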