Publications by authors named "G V Giannakis"

Parallel transmission (pTx) is an important technique for reducing transmit field inhomogeneities in ultrahigh-field (UHF) MRI. pTx typically involves solving an optimization problem for radiofrequency pulse design, subject to hard constraints on specific absorption rate (SAR) and/or power, which can be time-consuming. In this work, we propose a novel approach for incorporating hard constraints into physics-driven neural networks.
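
As a rough illustration of the hard-constraint idea (not the paper's physics-driven network), the numpy sketch below runs projected gradient descent on a small-tip pulse-design least-squares problem and projects the pulse back onto a total-power ball after every update; the system matrix, target profile, and power budget are made-up placeholders.

```python
# Minimal sketch: enforcing a hard total-power constraint in an iterative
# RF-pulse design loop via projection. A, m, and P_max are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 16)) + 1j * rng.standard_normal((64, 16))  # small-tip system matrix
m = np.ones(64, dtype=complex)           # desired excitation profile
P_max = 1.0                              # total-power budget (hard constraint)

b = np.zeros(16, dtype=complex)          # RF pulse samples
step = 1e-3
for _ in range(500):
    grad = A.conj().T @ (A @ b - m)      # gradient of the least-squares fidelity term
    b = b - step * grad
    power = np.vdot(b, b).real
    if power > P_max:                    # project back onto the power ball
        b *= np.sqrt(P_max / power)

print("residual:", np.linalg.norm(A @ b - m), "power:", np.vdot(b, b).real)
```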

Bayesian optimization (BO) has well-documented merits for optimizing black-box functions with an expensive evaluation cost. Such functions emerge in applications as diverse as hyperparameter tuning, drug discovery, and robotics. BO hinges on a Bayesian surrogate model to sequentially select query points so as to balance exploration with exploitation of the search space.
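
The BO loop can be summarized in a few lines; the numpy sketch below uses a GP surrogate with a fixed RBF kernel and an upper-confidence-bound acquisition rule on a 1-D grid. The objective and all hyperparameters are illustrative stand-ins, not from the paper.

```python
# Minimal sketch of a BO loop: GP surrogate + UCB acquisition on a grid.
import numpy as np

def f(x):                                   # expensive black-box (stand-in)
    return -np.sin(3 * x) - x**2 + 0.7 * x

def rbf(a, b, ls=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ls**2)

grid = np.linspace(-1.0, 2.0, 200)          # candidate query points
X = np.array([0.0, 1.5])                    # initial evaluations
y = f(X)

for _ in range(10):
    K = rbf(X, X) + 1e-6 * np.eye(len(X))
    Ks = rbf(grid, X)
    mu = Ks @ np.linalg.solve(K, y)                         # posterior mean on the grid
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    ucb = mu + 2.0 * np.sqrt(np.maximum(var, 0.0))          # explore/exploit trade-off
    x_next = grid[np.argmax(ucb)]                           # next query point
    X = np.append(X, x_next)
    y = np.append(y, f(x_next))

print("best found:", X[np.argmax(y)], y.max())
```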

Belonging to the family of Bayesian nonparametrics, Gaussian process (GP)-based approaches have well-documented merits, not only in learning over a rich class of nonlinear functions but also in quantifying the associated uncertainty. However, most GP methods rely on a single preselected kernel function, which may fall short in characterizing data samples that arrive sequentially in time-critical applications. To enable online kernel adaptation, the present work advocates an incremental ensemble GP (IE-GP) framework, where an ensemble GP (EGP) assembler employs an ensemble of GP learners, each having a unique kernel drawn from a prescribed kernel dictionary.
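
A stripped-down view of such an ensemble, under simplifying assumptions (scalar inputs, two fixed RBF kernels, exact GP posteriors recomputed from scratch rather than incrementally), is sketched below: each learner issues a predictive density for the incoming sample, and the ensemble weights are reweighted by those likelihoods. This is not the paper's algorithm, only an illustration of the weighting idea.

```python
# Minimal sketch: an ensemble of GP learners, one kernel each from a small
# dictionary, processed online with likelihood-based ensemble weights.
import numpy as np

kernels = {                                            # prescribed kernel dictionary
    "rbf_short": lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :])**2 / 0.1**2),
    "rbf_long":  lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :])**2 / 1.0**2),
}
noise = 0.05
w = {name: 1.0 / len(kernels) for name in kernels}     # ensemble weights

X, y = np.empty(0), np.empty(0)
stream = [(x, np.sin(4 * x) + 0.05 * np.random.randn()) for x in np.linspace(0, 2, 30)]

for x_t, y_t in stream:
    xt = np.array([x_t])
    for name, k in kernels.items():
        if len(X) == 0:
            mu, var = 0.0, 1.0 + noise**2
        else:
            K = k(X, X) + noise**2 * np.eye(len(X))
            ks = k(xt, X)
            mu = (ks @ np.linalg.solve(K, y)).item()           # predictive mean
            var = (1.0 - ks @ np.linalg.solve(K, ks.T)).item() + noise**2
        lik = np.exp(-0.5 * (y_t - mu)**2 / var) / np.sqrt(2 * np.pi * var)
        w[name] *= lik                                   # reweight by predictive likelihood
    total = sum(w.values())
    w = {n: v / total for n, v in w.items()}             # normalize
    X, y = np.append(X, x_t), np.append(y, y_t)

print(w)                                                 # weight concentrates on the better kernel
```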

This paper focuses on the communication-efficient federated learning problem and develops a novel distributed quantized gradient approach characterized by adaptive communication of the quantized gradients. Specifically, the federated learning setup builds on a server-worker infrastructure, where the workers compute local gradients and upload them to the server; the server then obtains the global gradient by aggregating all the local gradients and uses it to update the model parameters. The key idea for saving worker-to-server communication is to quantize the gradients and to skip less informative quantized-gradient uploads by reusing previously communicated gradients.
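
A toy numpy simulation of this idea is sketched below (the quantizer, skipping threshold, and quadratic local losses are arbitrary choices, not the paper's scheme): each worker uploads its quantized gradient only when it differs noticeably from the last uploaded one, and the server otherwise reuses the stale copy.

```python
# Minimal sketch: quantized gradients with lazy (skipped) uploads.
import numpy as np

def quantize(g, levels=16):                       # crude uniform quantizer
    scale = np.max(np.abs(g)) + 1e-12
    return np.round(g / scale * levels) / levels * scale

rng = np.random.default_rng(1)
dim, n_workers = 5, 4
A = [rng.standard_normal((20, dim)) for _ in range(n_workers)]   # local data
b = [a @ rng.standard_normal(dim) for a in A]                    # local targets

theta = np.zeros(dim)
last_sent = [np.zeros(dim) for _ in range(n_workers)]            # server's stale copies
lr, threshold = 0.01, 0.05

for it in range(200):
    agg = np.zeros(dim)
    for k in range(n_workers):
        g = A[k].T @ (A[k] @ theta - b[k]) / len(b[k])           # local gradient
        q = quantize(g)
        if np.linalg.norm(q - last_sent[k]) > threshold:         # informative enough to send?
            last_sent[k] = q                                     # upload new quantized gradient
        agg += last_sent[k]                                      # otherwise server reuses stale copy
    theta -= lr * agg / n_workers                                # global update

print("final loss:", sum(np.linalg.norm(A[k] @ theta - b[k])**2 for k in range(n_workers)))
```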

Graph convolutional networks (GCNs) have well-documented performance in various graph learning tasks, but their analysis is still in its infancy. Graph scattering transforms (GSTs) offer training-free deep GCN models that extract features from graph data and are amenable to generalization and stability analyses. The price paid by GSTs is space and time complexity that grows exponentially with the number of layers.
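
For intuition, the sketch below implements a small diffusion-style scattering cascade: graph wavelet filters followed by a modulus nonlinearity and averaging, with the number of branches (and hence the cost) growing exponentially in the number of layers. The operator and wavelet choices are illustrative, not necessarily those analyzed in the paper.

```python
# Minimal sketch of a graph scattering transform on a random graph.
import numpy as np

rng = np.random.default_rng(2)
N = 12
A = (rng.random((N, N)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T                       # random undirected graph
d = np.maximum(A.sum(1), 1.0)
T = 0.5 * (np.eye(N) + A / d[:, None])               # lazy diffusion operator

def wavelets(J):
    """Dyadic diffusion wavelets T^(2^(j-1)) - T^(2^j), j = 1..J."""
    return [np.linalg.matrix_power(T, 2**(j - 1)) - np.linalg.matrix_power(T, 2**j)
            for j in range(1, J + 1)]

def scatter(x, J=3, layers=2):
    Ws = wavelets(J)
    feats, signals = [x.mean()], [x]                 # layer-0 feature: low-pass average
    for _ in range(layers):
        nxt = []
        for s in signals:
            for W in Ws:
                u = np.abs(W @ s)                    # filter, then modulus nonlinearity
                feats.append(u.mean())               # averaged scattering coefficient
                nxt.append(u)
        signals = nxt                                # tree width grows as J per layer
    return np.array(feats)

x = rng.standard_normal(N)                           # graph signal
print(scatter(x).shape)                              # 1 + J + J^2 = 13 coefficients
```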
