Generative Adversarial Networks (GANs) have shown immense potential in fields such as text and image generation. Only very recently have attempts to apply GANs to statistical-mechanics models been reported. Here we quantitatively test this approach by applying it to a prototypical stochastic process on a lattice. By suitably adding noise to the original data we succeed in bringing both the Generator and the Discriminator loss functions close to their ideal values. Importantly, the discreteness of the model is retained despite the noise. As is typical for adversarial approaches, oscillations around the convergence limit persist even at large epochs. This undermines model selection and the quality of the generated trajectories. We demonstrate that a simple multi-model procedure, in which stochastic trajectories are advanced at each step by randomly selecting a Generator, leads to a remarkable increase in accuracy. This is illustrated by quantitative analysis of both the predicted equilibrium probability distribution and the escape-time distribution. Based on the reported findings, we believe that GANs are a promising tool for tackling complex statistical dynamics with machine learning techniques.
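The multi-model procedure described in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes each trained Generator can be wrapped as a callable that maps the current lattice state to the next one, and the names (`advance_trajectory`, `gen_a`, `gen_b`) are hypothetical. A toy biased random walk stands in for the actual GAN Generators.

```python
import random

def advance_trajectory(generators, x0, n_steps, rng=None):
    """Advance a lattice trajectory, picking a random Generator at each step.

    `generators` is a list of callables; each maps (state, rng) -> next state.
    This mirrors the multi-model idea: averaging over Generators suppresses
    the bias of any single model stuck in an oscillation around convergence.
    """
    rng = rng or random.Random()
    x = x0
    traj = [x]
    for _ in range(n_steps):
        gen = rng.choice(generators)  # multi-model step: random Generator
        x = gen(x, rng)
        traj.append(x)
    return traj

# Toy stand-ins for two trained Generators: biased walks on a 1D lattice.
gen_a = lambda x, rng: x + rng.choice([-1, 1])
gen_b = lambda x, rng: x + rng.choice([-1, 0, 1])

traj = advance_trajectory([gen_a, gen_b], x0=0, n_steps=100,
                          rng=random.Random(0))
```

Because each step draws a fresh Generator, the ensemble samples the average of the models' transition statistics, which is why the equilibrium and escape-time distributions improve relative to any single snapshot.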


Source
DOI: http://dx.doi.org/10.1063/5.0170307

