While some materials can be discovered and engineered using standalone self-driving workflows, coordinating multiple stakeholders and workflows toward a common goal could advance autonomous experimentation (AE) for accelerated materials discovery (AMD). Here, we describe a scalable AMD paradigm based on AE and "collaborative learning". Collaborative learning using a novel consensus Bayesian optimization (BO) model enabled the rapid discovery of mechanically optimized composite polysaccharide hydrogels.
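The consensus BO model itself is not specified in this abstract; the sketch below shows one plausible reading of "collaborative learning" under stated assumptions: each collaborating lab fits its own Gaussian-process surrogate to its own measurements, and the next experiment is the maximizer of the *average* (consensus) of the labs' upper-confidence-bound acquisitions. The function names (`gp_posterior`, `consensus_propose`), the RBF kernel, and the UCB rule are all illustrative choices, not the paper's method.

```python
import numpy as np

def rbf(A, B, ls=0.25):
    """Squared-exponential kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_posterior(Xtr, ytr, Xq, noise=1e-6):
    """Posterior mean and variance of a zero-mean GP at query points Xq."""
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = rbf(Xq, Xtr)
    mu = Ks @ np.linalg.solve(K, ytr)
    # k(x, x) = 1 for the RBF kernel, so prior variance is 1
    var = 1.0 - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    return mu, np.maximum(var, 1e-12)

def consensus_propose(datasets, Xq, kappa=2.0):
    """Average each lab's UCB acquisition and propose its maximizer."""
    acq = np.zeros(len(Xq))
    for Xtr, ytr in datasets:
        mu, var = gp_posterior(Xtr, ytr, Xq)
        acq += mu + kappa * np.sqrt(var)
    return Xq[np.argmax(acq / len(datasets))]
```

In a closed autonomous-experimentation loop, the proposed candidate would be synthesized and tested, the result appended to one lab's dataset, and the consensus re-queried.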
This study aims to address optimization and operational challenges in multi-energy coupled microgrids to enhance system stability and reliability. After analyzing the requirements of such systems within comprehensive energy systems, an improved fireworks algorithm (IFWA) is proposed. This algorithm combines an adaptive resource allocation strategy with a community genetic strategy, automatically adjusting explosion range and spark quantity based on individual optimization status to meet actual needs.
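The abstract does not detail the IFWA's adaptive resource allocation or community genetic strategy, but the core fireworks mechanism it builds on is standard: better fireworks are allotted more sparks and smaller explosion amplitudes, so good regions are refined while poor ones explore. The sketch below is a minimal classical fireworks loop under that assumption, with a simple elitist selection standing in for the usual distance-based selection.

```python
import numpy as np

def fireworks_minimize(f, lo, hi, n_fireworks=5, total_sparks=30, iters=40, seed=0):
    """Minimize f over the box [lo, hi] with a basic fireworks explosion loop."""
    rng = np.random.default_rng(seed)
    dim = len(lo)
    X = rng.uniform(lo, hi, size=(n_fireworks, dim))
    for _ in range(iters):
        fit = np.array([f(x) for x in X])
        worst, best, eps = fit.max(), fit.min(), 1e-12
        # Better (lower-fitness) fireworks are allotted more sparks...
        s = total_sparks * (worst - fit + eps) / ((worst - fit).sum() + eps)
        # ...and smaller explosion amplitudes, refining locally
        A = (hi - lo).max() * (fit - best + eps) / ((fit - best).sum() + eps)
        pool = [X]
        for i, x in enumerate(X):
            k = max(1, int(round(s[i])))
            pool.append(np.clip(x + rng.uniform(-A[i], A[i], (k, dim)), lo, hi))
        cand = np.vstack(pool)
        # Elitist selection (a simplification of standard FWA selection)
        X = cand[np.argsort([f(x) for x in cand])[:n_fireworks]]
    return X[0], f(X[0])
```

The IFWA described above would replace the fixed `total_sparks` budget and amplitude rule with its adaptive resource-allocation and community genetic strategies.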
IEEE Trans Pattern Anal Mach Intell
January 2024
In this paper, we propose FGPR: a Federated Gaussian process (GP) regression framework that uses an averaging strategy for model aggregation and stochastic gradient descent for local computations. Notably, the resulting global model excels in personalization as FGPR jointly learns a shared prior across all devices. The predictive posterior is then obtained by exploiting this shared prior and conditioning on local data, which encodes personalized features from a specific dataset.
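The federated averaging pattern the abstract describes can be sketched as follows, under stated assumptions: each device runs a few gradient steps on the negative log marginal likelihood of a local RBF GP, and the server averages the resulting kernel hyperparameters into the shared prior. Finite-difference gradients stand in for the paper's stochastic gradients, and the function names and two-parameter (lengthscale, noise) model are illustrative, not FGPR's actual implementation.

```python
import numpy as np

def nll(theta, X, y):
    """Negative log marginal likelihood (up to a constant) of a zero-mean RBF GP."""
    ls, noise = np.exp(theta)  # theta = [log lengthscale, log noise variance]
    d2 = (X[:, None] - X[None, :]) ** 2
    K = np.exp(-0.5 * d2 / ls ** 2) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    a = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ a + np.log(np.diag(L)).sum()

def local_update(theta, X, y, lr=0.05, steps=20, h=1e-4):
    """Local gradient descent on the device's NLL (finite-difference gradients)."""
    for _ in range(steps):
        g = np.array([(nll(theta + h * e, X, y) - nll(theta - h * e, X, y)) / (2 * h)
                      for e in np.eye(2)])
        theta = theta - lr * g
    return theta

def fgpr_round(theta, devices):
    """Server step: average the locally updated hyperparameters (FedAvg-style)."""
    return np.mean([local_update(theta.copy(), X, y) for X, y in devices], axis=0)
```

Personalization then comes for free: the averaged `theta` defines the shared prior, and each device's predictive posterior conditions that prior on its own local data.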
IEEE Trans Pattern Anal Mach Intell
December 2023
Differential equations are fundamental in modeling numerous physical systems, including thermal, manufacturing, and meteorological systems. Traditionally, the solutions of complex systems modeled by differential equations are approximated with numerical methods. With the advent of modern deep learning, Physics-informed Neural Networks (PINNs) are emerging as a new paradigm for solving differential equations with a pseudo-closed-form solution.
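A minimal PINN illustrating the idea, under stated assumptions: the toy ODE u' = -u with u(0) = 1 is solved by training a tiny network inside the trial form u(t) = 1 + t·N(t), which satisfies the initial condition by construction and yields the "pseudo-closed form" once trained. Finite differences replace automatic differentiation and plain gradient descent replaces Adam, so this is a sketch of the technique, not of any particular PINN library.

```python
import numpy as np

def u(t, p):
    """Trial solution u(t) = 1 + t * N(t); u(0) = 1 holds by construction."""
    a, b, w = p[:8], p[8:16], p[16:24]  # one tanh hidden layer of width 8
    return 1.0 + t * (np.tanh(np.outer(t, a) + b) @ w)

def pinn_loss(p, ts, h=1e-4):
    """Physics loss: mean squared residual of u' + u = 0 at collocation points."""
    du = (u(ts + h, p) - u(ts - h, p)) / (2 * h)
    return np.mean((du + u(ts, p)) ** 2)

def train_pinn(steps=400, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    p = 0.5 * rng.standard_normal(24)
    ts = np.linspace(0.0, 2.0, 25)             # collocation points
    for _ in range(steps):
        g = np.array([(pinn_loss(p + 1e-5 * e, ts) - pinn_loss(p - 1e-5 * e, ts)) / 2e-5
                      for e in np.eye(24)])
        q = p - lr * g
        if pinn_loss(q, ts) < pinn_loss(p, ts):
            p = q                              # accept the step
        else:
            lr *= 0.5                          # crude backtracking keeps descent stable
    return p, ts
```

No solution data is ever supplied; the network is fit purely to the differential-equation residual, which is what distinguishes a PINN from ordinary regression.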
IEEE Trans Neural Netw Learn Syst
September 2024
In an effort to improve generalization in deep learning and automate learning rate scheduling, we propose SALR: a sharpness-aware learning rate update technique designed to recover flat minimizers. Our method dynamically updates the learning rate of gradient-based optimizers based on the local sharpness of the loss function, allowing optimizers to automatically raise the learning rate at sharp valleys and improve the chance of escaping them.
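The abstract does not give SALR's sharpness estimator, so the sketch below uses the squared gradient norm as an assumed stand-in, normalized by its running average: where the loss is locally sharper than usual, the step size is scaled up, mimicking the "larger steps in sharp valleys" behavior described above. The function name and proxy are illustrative, not the paper's definition.

```python
import numpy as np

def salr_sgd(grad_fn, w, base_lr=0.05, steps=200, beta=0.9):
    """Gradient descent whose step size is scaled by local 'sharpness'.

    Sharpness proxy: squared gradient norm (an assumption standing in for
    SALR's sharpness measure); its exponential running average sets the
    scale, so sharper-than-usual regions get proportionally larger steps.
    """
    avg = None
    for _ in range(steps):
        g = grad_fn(w)
        sharp = float(g @ g)
        avg = sharp if avg is None else beta * avg + (1 - beta) * sharp
        w = w - base_lr * (sharp / (avg + 1e-12)) * g
    return w
```

Because the scaling is relative to recent history, the schedule anneals itself: as the iterate settles into a flat region, the sharpness ratio falls below one and the effective learning rate shrinks without any hand-tuned decay.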