Publications by authors named "Ioannis G. Kevrekidis"

Before we attempt to (approximately) learn a function between two sets of observables of a physical process, we must first decide what the input and the output of the desired function are going to be. Here we demonstrate two distinct, data-driven ways of first deciding "the right quantities" to relate through such a function, and then proceeding to learn it. This is accomplished by first processing simultaneous heterogeneous data streams (ensembles of time series) from observations of a physical system: records of multiple observables of the system.

Background: E. coli chemotactic motion in the presence of a chemonutrient field can be studied using wet laboratory experiments or macroscale-level partial differential equations (PDEs), among other approaches. Bridging experimental measurements and chemotactic PDEs requires knowledge of the evolution of all underlying fields, as well as of initial and boundary conditions, and it often necessitates strong assumptions.

Deriving closed-form analytical expressions for reduced-order models, and judiciously choosing the closures leading to them, has long been the strategy of choice for studying phase- and noise-induced transitions for agent-based models (ABMs). In this paper, we propose a data-driven framework that pinpoints phase transitions for an ABM (the Desai-Zwanzig model) in its mean-field limit, using a smaller number of variables than traditional closed-form models. To this end, we use the manifold learning algorithm Diffusion Maps to identify a parsimonious set of data-driven latent variables, and we show that they are in one-to-one correspondence with the expected theoretical order parameter of the ABM.
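
As a point of reference for the Diffusion Maps step, here is a minimal numpy sketch of the standard construction (Gaussian kernel, density normalization, eigendecomposition); the kernel scale eps and the toy "agent snapshot" data are illustrative placeholders, not values from the paper:

```python
import numpy as np
from scipy.spatial.distance import cdist

def diffusion_maps(X, eps, n_coords=2):
    """Standard Diffusion Maps on data X of shape (n_samples, n_features)."""
    d2 = cdist(X, X, metric="sqeuclidean")
    K = np.exp(-d2 / eps)                        # Gaussian kernel matrix
    q = K.sum(axis=1)                            # kernel density estimate
    K_tilde = K / np.outer(q, q)                 # alpha = 1 density normalization
    P = K_tilde / K_tilde.sum(axis=1)[:, None]   # row-stochastic Markov matrix
    evals, evecs = np.linalg.eig(P)
    order = np.argsort(-evals.real)
    evals, evecs = evals.real[order], evecs.real[:, order]
    return evecs[:, 1:n_coords + 1] * evals[1:n_coords + 1]  # drop trivial mode

# illustrative usage on synthetic "agent snapshot" data
X = np.random.randn(500, 10)
eps = np.median(cdist(X, X, "sqeuclidean"))
latent = diffusion_maps(X, eps)
```

In the setting of the abstract, the leading nontrivial coordinates returned here would serve as the data-driven latent variables compared against the theoretical order parameter.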

Transformations are a key tool in the qualitative study of dynamical systems: transformations to a normal form, for example, underpin the study of instabilities and bifurcations. In this work, we test, and when possible establish, an equivalence between two different artificial neural networks by attempting to construct a data-driven transformation between them, using diffusion maps with a Mahalanobis-like metric. If the construction succeeds, the two networks can be thought of as belonging to the same equivalence class.

We study the tipping point collective dynamics of an adaptive susceptible-infected-susceptible (SIS) epidemiological network in a data-driven, machine learning-assisted manner. We identify a parameter-dependent effective stochastic differential equation (eSDE) in terms of physically meaningful coarse mean-field variables through a deep-learning ResNet architecture inspired by numerical stochastic integrators. We construct an approximate effective bifurcation diagram based on the identified drift term of the eSDE and contrast it with the mean-field SIS model bifurcation diagram.

We present a machine learning framework bridging manifold learning, neural networks, Gaussian processes, and the Equation-Free multiscale approach, for the construction of different types of effective reduced-order models from detailed agent-based simulators and for the systematic multiscale numerical analysis of their emergent dynamics. The specific tasks of interest here include the detection of tipping points and the uncertainty quantification of rare events near them. Our illustrative examples are an event-driven, stochastic financial market model describing the mimetic behavior of traders, and a compartmental stochastic epidemic model on an Erdős-Rényi network.

We propose a machine learning framework for the data-driven discovery of macroscopic chemotactic partial differential equations (PDEs), and of the closures that lead to them, from high-fidelity, individual-based stochastic simulations of Escherichia coli bacterial motility. The fine-scale, chemomechanical, hybrid (continuum-Monte Carlo) simulation model embodies the underlying biophysics, and its parameters are informed by experimental observations of individual cells. Using a parsimonious set of collective observables, we learn effective, coarse-grained "Keller-Segel class" chemotactic PDEs using machine learning regressors: (a) (shallow) feedforward neural networks and (b) Gaussian processes.
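
For orientation, a representative member of the "Keller-Segel class" of chemotactic PDEs mentioned above (textbook form, not necessarily the exact closed form identified in the paper) is:

```latex
% b(x,t): bacterial density, c(x,t): chemonutrient concentration
% D_b: bacterial diffusivity, \chi(c): chemotactic sensitivity
\partial_t b \;=\; \nabla \cdot \big( D_b \, \nabla b \big)
            \;-\; \nabla \cdot \big( b \, \chi(c) \, \nabla c \big).
```

In the data-driven setting, the diffusive and chemotactic flux terms (or the full right-hand side) are what the neural-network and Gaussian-process regressors approximate from the coarse-grained simulation data.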

Finding saddle points of dynamical systems is an important problem in practical applications, such as the study of rare events in molecular systems. Gentlest ascent dynamics (GAD) (doi: 10.1088/0951-7715/24/6/008) is one of several algorithms that attempt to find saddle points.
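
For a gradient system with potential V, the GAD flow of the cited reference evolves a position x together with a direction vector n; written with a normalized n, the standard form (reproduced here as a sketch, not quoted from the paper) is:

```latex
\dot{x} = -\nabla V(x) + 2\,\big(n^{\top} \nabla V(x)\big)\, n, \qquad
\dot{n} = -\nabla^{2} V(x)\, n + \big(n^{\top} \nabla^{2} V(x)\, n\big)\, n, \qquad \|n\| = 1.
```

Its equilibria in x, with n aligned to the corresponding unstable eigendirection of the Hessian, include the index-1 saddle points of V.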

It is shown that machine learning (ML) algorithms can usefully capture the effect of crystallization composition and conditions (inputs) on key microstructural characteristics (outputs) of faujasite-type zeolites (structure types FAU, EMT, and their intergrowths), which are widely used zeolite catalysts and adsorbents. The utility of ML (in particular, Geometric Harmonics) toward learning input-output relationships of interest is demonstrated, and a comparison with neural networks and Gaussian process regression, as alternative approaches, is provided. Through ML, synthesis conditions were identified that enhance the Si/Al ratio of high-purity FAU zeolite to the hitherto highest level.
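
In its simplest form, Geometric Harmonics extends a function known on training inputs to new inputs through a Nyström-type expansion in the eigenvectors of a kernel matrix. A minimal numpy sketch under that reading (illustrative variable names and toy data, not the paper's pipeline) is:

```python
import numpy as np
from scipy.spatial.distance import cdist

def geometric_harmonics_fit(X, y, eps, n_modes=10):
    """Eigendecompose a Gaussian kernel on training inputs X and project y."""
    K = np.exp(-cdist(X, X, "sqeuclidean") / eps)
    evals, evecs = np.linalg.eigh(K)
    idx = np.argsort(-evals)[:n_modes]           # keep the largest eigenvalues
    evals, evecs = evals[idx], evecs[:, idx]
    coeffs = evecs.T @ y                          # projections <phi_j, y>
    return evals, evecs, coeffs

def geometric_harmonics_predict(X_new, X, eps, evals, evecs, coeffs):
    """Nystrom extension of the eigenvectors to X_new, then re-synthesize y."""
    k = np.exp(-cdist(X_new, X, "sqeuclidean") / eps)
    phi_new = k @ evecs / evals                   # extended eigenfunctions
    return phi_new @ coeffs

# illustrative usage on toy input-output data
X = np.random.rand(200, 4); y = np.sin(X @ np.ones(4))
evals, evecs, coeffs = geometric_harmonics_fit(X, y, eps=0.5)
y_hat = geometric_harmonics_predict(np.random.rand(5, 4), X, 0.5, evals, evecs, coeffs)
```

In the application above, X would hold the synthesis compositions and conditions, and y a microstructural characteristic of interest such as the Si/Al ratio.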

Depolymerization is a promising strategy for recycling waste plastic into constituent monomers for subsequent repolymerization. However, many commodity plastics cannot be selectively depolymerized using conventional thermochemical approaches, as it is difficult to control the reaction progress and pathway. Although catalysts can improve the selectivity, they are susceptible to performance degradation.

We present a data-driven approach to learning surrogate models for amplitude equations and illustrate its application to interfacial dynamics of phase field systems. In particular, we demonstrate learning effective partial differential equations describing the evolution of phase field interfaces from full phase field data. We illustrate this on a model phase field system, where analytical approximate equations for the dynamics of the phase field interface (a higher-order eikonal equation and its approximation, the Kardar-Parisi-Zhang equation) are known.
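
For reference, the (deterministic) Kardar-Parisi-Zhang-type equation referred to above, for an interface height h(x,t), has the generic form (coefficients are system-dependent; this is the standard form, not the specific equation learned in the paper):

```latex
\partial_t h \;=\; \nu\, \partial_x^{2} h \;+\; \frac{\lambda}{2}\,\big(\partial_x h\big)^{2} \;+\; c ,
```

with a surface tension coefficient ν, a nonlinearity coefficient λ, and a constant normal-velocity term c. The data-driven surrogate replaces such closed-form right-hand sides with one learned directly from the phase field interface data.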

We identify effective stochastic differential equations (SDEs) for coarse observables of fine-grained particle- or agent-based simulations; these SDEs then provide useful coarse surrogate models of the fine scale dynamics. We approximate the drift and diffusivity functions in these effective SDEs through neural networks, which can be thought of as effective stochastic ResNets. The loss function is inspired by, and embodies, the structure of established stochastic numerical integrators (here, Euler-Maruyama and Milstein); our approximations can thus benefit from backward error analysis of these underlying numerical schemes.
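
A minimal sketch of the kind of loss such training implies, for a scalar coarse observable x with time step h and small networks for the drift and diffusivity (a hedged illustration in PyTorch, not the authors' code), is the Euler-Maruyama transition negative log-likelihood:

```python
import torch
import torch.nn as nn

# small MLPs for the drift f(x) and diffusivity g(x) of the effective SDE
# dx = f(x) dt + g(x) dW   (scalar observable for simplicity)
drift = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
diff  = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1), nn.Softplus())

def euler_maruyama_nll(x_now, x_next, h):
    """Negative log-likelihood of (x_now, x_next) pairs under the
    Euler-Maruyama transition density N(x + h f(x), h g(x)^2)."""
    mu = x_now + h * drift(x_now)
    var = h * diff(x_now) ** 2 + 1e-8
    return 0.5 * (((x_next - mu) ** 2) / var + torch.log(2 * torch.pi * var)).mean()

# illustrative training step on toy snapshot pairs (placeholders, not real data)
x_now, x_next, h = torch.randn(256, 1), torch.randn(256, 1), 0.01
opt = torch.optim.Adam(list(drift.parameters()) + list(diff.parameters()), lr=1e-3)
loss = euler_maruyama_nll(x_now, x_next, h)
opt.zero_grad(); loss.backward(); opt.step()
```

A Milstein-based loss additionally involves the derivative of the diffusivity; the overall structure of the training scheme is otherwise the same.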

Circadian rhythmicity lies at the center of various important physiological and behavioral processes in mammals, such as sleep, metabolism, homeostasis, mood changes, and more. Misalignment of intrinsic neuronal oscillations with the external day-night cycle can disrupt such processes and lead to numerous disorders. In this work, we computationally determine the limits of circadian synchronization to external light signals of different frequency, duty cycle, and simulated amplitude.

We present a data-driven approach to characterizing nonidentifiability of a model's parameters and illustrate it through dynamic as well as steady-state kinetic models. By employing Diffusion Maps and their extensions, we discover the minimal combinations of parameters required to characterize the output behavior of a chemical system: a set of effective parameters for the model. Furthermore, we introduce and use a Conformal Autoencoder Neural Network technique, as well as a kernel-based Jointly Smooth Function technique, to disentangle the parameter combinations that do not affect the output behavior from the ones that do.
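
One way to read the "conformal" constraint is as a penalty that pushes the gradients of different latent coordinates (with respect to the input parameters) toward mutual orthogonality, so that output-relevant and output-irrelevant parameter combinations separate into distinct latent directions. The following PyTorch fragment is an illustrative sketch of such a penalty under that reading, not the authors' implementation:

```python
import torch
import torch.nn as nn

# toy encoder/decoder over a 3-parameter input (placeholder architecture)
enc = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 3))
dec = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 3))

def conformal_penalty(p):
    """Penalize inner products between the gradients of different latent
    coordinates with respect to the input parameters p."""
    p = p.requires_grad_(True)
    z = enc(p)
    grads = [torch.autograd.grad(z[:, i].sum(), p, create_graph=True)[0]
             for i in range(z.shape[1])]
    penalty = p.new_zeros(())
    for i in range(len(grads)):
        for j in range(i + 1, len(grads)):
            penalty = penalty + ((grads[i] * grads[j]).sum(dim=1) ** 2).mean()
    return penalty

# combined loss: reconstruction + orthogonality ("conformality") penalty
p = torch.rand(128, 3)
loss = nn.functional.mse_loss(dec(enc(p)), p) + conformal_penalty(p)
```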

Finding the dynamical law of observable quantities lies at the core of physics. Within the particular field of statistical mechanics, the generalized Langevin equation (GLE) comprises a general model for the evolution of observables, covering a wide range of physical systems with many degrees of freedom and an inherently stochastic nature. Although formally exact, the GLE brings its own great challenges.
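
For orientation, a common form of the GLE for a velocity-like observable v(t) (the textbook form, reproduced here for context) reads:

```latex
m\,\dot{v}(t) \;=\; -\!\int_{0}^{t} K(t-s)\, v(s)\, \mathrm{d}s \;+\; F\big(x(t)\big) \;+\; \eta(t),
\qquad
\langle \eta(t)\, \eta(t') \rangle \;=\; k_{B} T\, K(t-t'),
```

with memory kernel K, mean force F, and a fluctuation-dissipation relation tying the random force η to the kernel.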

Radiation exposure of healthy cells can halt the cell cycle temporarily or permanently. In this work, we analyze the time evolution of p21 and p53 from two single-cell datasets of retinal pigment epithelial cells exposed to several levels of radiation, and in particular the effect of radiation on cell cycle arrest. Employing various quantification methods from signal processing, we show how p21 levels, and to a lesser extent p53 levels, dictate whether the cells are arrested in their cell cycle and how frequently mitosis events are likely to occur.

Article Synopsis
  • This study presents a three-tier numerical framework that uses nonlinear manifold learning to improve the forecasting of high-dimensional time series by addressing the challenges of high dimensionality during model training.
  • The process includes three steps: embedding the time series into a lower-dimensional space, constructing surrogate models for forecasting within that space, and lifting the forecasts back to the original high-dimensional space (a bare-bones sketch of this pipeline follows the list).
  • The approach is tested on various problems, including synthetic time series related to EEG signals, solutions of linear and nonlinear PDEs, and a real-world dataset of foreign exchange rates from 2001 to 2020.
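
A bare-bones illustration of that three-step structure, with deliberately simple placeholder choices (PCA as the embedding, ridge regression as the surrogate, the PCA inverse as the lift; the paper's nonlinear manifold-learning and lifting operators are more elaborate), could look like this:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

# X: high-dimensional time series, shape (n_timesteps, n_features) -- toy data
X = np.cumsum(np.random.randn(1000, 50), axis=0)

# 1) embed the time series into a lower-dimensional space
pca = PCA(n_components=3).fit(X)
Z = pca.transform(X)

# 2) surrogate one-step-ahead forecaster in the embedding space
model = Ridge(alpha=1e-3).fit(Z[:-1], Z[1:])

# 3) iterate the surrogate forward and lift the forecasts back
z, preds = Z[-1], []
for _ in range(20):
    z = model.predict(z[None, :])[0]
    preds.append(z)
X_forecast = pca.inverse_transform(np.array(preds))
```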

We propose an approach to learn effective evolution equations for large systems of interacting agents. This is demonstrated on two examples: a well-studied system of coupled normal form oscillators and a biologically motivated example of coupled Hodgkin-Huxley-like neurons. For such systems, there is no obvious spatial coordinate in which to learn effective evolution laws in the form of partial differential equations.

Conventional thermochemical syntheses by continuous heating under near-equilibrium conditions face critical challenges in improving the synthesis rate, selectivity, catalyst stability, and energy efficiency, owing to the lack of temporal control over the reaction temperature and time, and thus over the reaction pathways. As an alternative, we present a non-equilibrium, continuous synthesis technique that uses pulsed heating and quenching (for example, 0.02 s heating pulses).

High-entropy nanoparticles have become a rapidly growing area of research in recent years, owing to their multielemental compositions and unique high-entropy mixing states.

In this work, we propose a method to learn multivariate probability distributions using sample path data from stochastic differential equations. Specifically, we consider temporally evolving probability distributions.

We propose Poisson neural networks (PNNs) to learn Poisson systems and trajectories of autonomous systems from data. Based on the Darboux-Lie theorem, the phase flow of a Poisson system can be written as the composition of: 1) a coordinate transformation; 2) an extended symplectic map; and 3) the inverse of the transformation. In this work, we extend this result to the unknotted trajectories of autonomous systems.
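
In symbols, the decomposition invoked above writes the phase flow φ_t of a Poisson system as

```latex
\varphi_{t} \;=\; \psi^{-1} \circ \sigma_{t} \circ \psi ,
```

where ψ is a coordinate transformation and σ_t an extended symplectic map; these are the ingredients a PNN represents and learns from data.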

Landscapes play an important role in many areas of biology, as the lives of organisms are deeply entangled with them. Here we discuss a form of landscape in evolutionary biology which takes into account (1) initial growth rates, (2) mutation rates, (3) resource consumption by organisms, and (4) cyclic changes in the resources with time. The long-term equilibrium number of surviving organisms, as a function of these four parameters, forms what we call a success landscape; this landscape is, we would claim, qualitatively different from fitness landscapes, which commonly do not include mutations or resource consumption/changes when mapping genomes to the final number of survivors.

We present an approach, based on learning an intrinsic data manifold, for the initialization of the internal state values of long short-term memory (LSTM) recurrent neural networks, ensuring consistency with the initial observed input data. Exploiting the generalized synchronization concept, we argue that the converged, "mature" internal states constitute a function on this learned manifold. The dimension of this manifold then dictates the length of observed input time series data required for consistent initialization.

Large collections of coupled, heterogeneous agents can manifest complex dynamical behavior presenting difficulties for simulation and analysis. However, if the collective dynamics lie on a low-dimensional manifold, then the original agent-based model may be approximated with a simplified surrogate model on and near the low-dimensional space where the dynamics live. Analytically identifying such simplified models can be challenging or impossible, but here we present a data-driven coarse-graining methodology for discovering such reduced models.
