Publications by authors named "Stephen Whitelam"

In the limit of small trial moves the Metropolis Monte Carlo algorithm is equivalent to gradient descent on the energy function in the presence of Gaussian white noise. This observation was originally used to demonstrate a correspondence between Metropolis Monte Carlo moves of model molecules and overdamped Langevin dynamics, but it also applies in the context of training a neural network: making small random changes to the weights of a neural network, accepted with the Metropolis probability, with the loss function playing the role of energy, has the same effect as training by explicit gradient descent in the presence of Gaussian white noise. We explore this correspondence in the context of a simple recurrent neural network.
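The correspondence can be illustrated with a minimal sketch, assuming a hypothetical one-parameter "network" with quadratic loss (not the recurrent network studied here): weights receive small Gaussian trial moves, accepted with the Metropolis probability, with the loss playing the role of energy.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w):
    # toy loss standing in for a network's loss function; minimum at w = 2
    return (w - 2.0) ** 2

def metropolis_train(w, steps=20000, sigma=0.01, temperature=1e-4):
    """Propose small Gaussian weight changes; accept with the Metropolis rule."""
    for _ in range(steps):
        trial = w + rng.normal(0.0, sigma)
        dE = loss(trial) - loss(w)
        if dE <= 0 or rng.random() < np.exp(-dE / temperature):
            w = trial
    return w

w_final = metropolis_train(w=-1.0)
print(w_final)  # settles near the loss minimum at w = 2
```

In the small-`sigma` limit this chain samples the same stationary distribution as overdamped Langevin dynamics on the loss at the chosen temperature.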

View Article and Find Full Text PDF

Exact analytic calculation shows that optimal control protocols for passive molecular systems often involve rapid variations and discontinuities. However, similar analytic baselines are not generally available for active-matter systems, because it is more difficult to treat active systems exactly. Here we use machine learning to derive efficient control protocols for active-matter systems, and find that they are characterized by sharp features similar to those seen in passive systems.

The Jarzynski equality allows the calculation of free-energy differences using values of work measured from nonequilibrium trajectories. The number of trajectories required to accurately estimate free-energy differences in this way grows sharply with the size of work fluctuations, motivating the search for protocols that perform desired transformations with minimum work. However, protocols of this nature can involve varying temperature, to which the Jarzynski equality does not apply.
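The estimator itself is a single exponential average, ΔF = -kT ln⟨exp(-W/kT)⟩ over measured work values. A minimal sketch with synthetic Gaussian work samples (illustrative numbers, not data from the paper); the log-sum-exp form guards against overflow when work fluctuations are large:

```python
import numpy as np

rng = np.random.default_rng(1)
kT = 1.0

# Synthetic work values: Gaussian with mean dF + sigma^2/(2 kT), for which
# the Jarzynski average recovers dF exactly in the large-sample limit.
delta_F_true = 2.0
sigma = 0.5
work = rng.normal(delta_F_true + sigma**2 / (2 * kT), sigma, size=200000)

def jarzynski_free_energy(work, kT=1.0):
    """dF = -kT ln<exp(-W/kT)>, computed via log-sum-exp for stability."""
    shifted = -work / kT
    m = shifted.max()
    return -kT * (m + np.log(np.mean(np.exp(shifted - m))))

delta_F_est = jarzynski_free_energy(work, kT)
print(delta_F_est)  # close to delta_F_true = 2.0
```

The growth in required sample size with the width of the work distribution is visible here: increasing `sigma` degrades the estimate unless `size` grows sharply.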

In this work we investigate the behaviour of molecules at the nanoscale using scanning tunnelling microscopy, in order to explore the origin of cooperativity in the formation of self-assembled molecular networks (SAMNs) at the liquid/solid interface. By studying the concentration dependence of an alkoxylated dimethylbenzene, a molecular analogue of 5-alkoxylated isophthalic acid derivatives that lacks hydrogen-bonding moieties, we show that the cooperativity effect can be evaluated experimentally even for weakly interacting systems, and that cooperativity is a fundamental trait of SAMN formation. We conclude that cooperativity must be a local effect, and use the nearest-neighbor Ising model to reproduce the coverage-concentration curves.
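As an illustration of the last step, a nearest-neighbor lattice-gas (Ising) model of surface coverage can be solved exactly in one dimension by the transfer-matrix method; the parameters below are illustrative, not fitted to the STM data of the paper. A larger nearest-neighbor attraction `epsilon` produces a steeper (more cooperative) coverage-versus-chemical-potential curve.

```python
import numpy as np

def coverage(mu, epsilon, beta=1.0):
    """Mean site occupancy of a 1D lattice gas with nearest-neighbor
    attraction epsilon and chemical potential mu, from the largest
    transfer-matrix eigenvalue: n = d(ln lambda_max)/d(beta mu)."""
    def log_lambda(m):
        # Boltzmann weights exp(beta*(epsilon*n*n' + mu*(n+n')/2))
        T = np.array([[1.0, np.exp(beta * m / 2)],
                      [np.exp(beta * m / 2), np.exp(beta * (epsilon + m))]])
        return np.log(np.linalg.eigvalsh(T).max())
    h = 1e-5  # central finite difference for the derivative
    return (log_lambda(mu + h) - log_lambda(mu - h)) / (2 * beta * h)

# By particle-hole symmetry, coverage is exactly 1/2 at mu = -epsilon:
weak = [coverage(mu, epsilon=1.0) for mu in (-2.0, -1.0, 0.0)]
strong = [coverage(mu, epsilon=4.0) for mu in (-5.0, -4.0, -3.0)]
print(weak, strong)
```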

We show that a neural network originally designed for language processing can learn the dynamical rules of a stochastic system by observation of a single dynamical trajectory of the system, and can accurately predict its emergent behavior under conditions not observed during training. We consider a lattice model of active matter undergoing continuous-time Monte Carlo dynamics, simulated at a density at which its steady state comprises small, dispersed clusters. We train a neural network called a transformer on a single trajectory of the model.

Time-dependent protocols that perform irreversible logical operations, such as memory erasure, cost work and produce heat, placing bounds on the efficiency of computers. Here we use a prototypical computer model of a physical memory to show that it is possible to learn feedback-control protocols to do fast memory erasure without input of work or production of heat. These protocols, which are enacted by a neural-network "demon," do not violate the second law of thermodynamics because the demon generates more heat than the memory absorbs.

We show that cellular automata can classify data by inducing a form of dynamical phase coexistence. We use Monte Carlo methods to search for general two-dimensional deterministic automata that classify images on the basis of activity, the number of state changes that occur in a trajectory initiated from the image. When the number of time steps of the automaton is a trainable parameter, the search scheme identifies automata that generate a population of dynamical trajectories displaying high or low activity, depending on initial conditions.
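A minimal sketch of activity-based classification, using a fixed deterministic majority rule rather than the trained automata of the paper: evolve a binary image for a set number of steps and count the cell state changes along the trajectory.

```python
import numpy as np

def step(grid):
    """One synchronous update: a cell becomes 1 (0) if at least 3 (at most 1)
    of its four nearest neighbors are 1 (periodic boundaries); ties keep state."""
    nbrs = (np.roll(grid, 1, 0) + np.roll(grid, -1, 0)
            + np.roll(grid, 1, 1) + np.roll(grid, -1, 1))
    return np.where(nbrs >= 3, 1, np.where(nbrs <= 1, 0, grid))

def activity(image, n_steps=20):
    """Total number of state changes over an n_steps trajectory."""
    grid, changes = image.copy(), 0
    for _ in range(n_steps):
        new = step(grid)
        changes += int(np.sum(new != grid))
        grid = new
    return changes

rng = np.random.default_rng(2)
noisy = rng.integers(0, 2, size=(32, 32))   # disordered image: high activity
ordered = np.zeros((32, 32), dtype=int)     # uniform image: zero activity
label = int(activity(noisy) > activity(ordered))
print(label)  # 1
```

In the paper's scheme both the rule table and the number of steps are trainable; here only the activity readout is shown.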

We use neuroevolutionary learning to identify time-dependent protocols for low-dissipation self-assembly in a model of generic active particles with interactions. When the time allotted for assembly is sufficiently long, low-dissipation protocols use only interparticle attractions, producing an amount of entropy that scales as the number of particles. When time is too short to allow assembly to proceed via diffusive motion, low-dissipation assembly protocols instead require particle self-propulsion, producing an amount of entropy that scales with the number of particles and the swim length required to cause assembly.

Phillip L. Geissler made important contributions to the statistical mechanics of biological polymers, heterogeneous materials, and chemical dynamics in aqueous environments. He devised analytical and computational methods that revealed the underlying organization of complex systems at the frontiers of biology, chemistry, and materials science.

Nanocapsules are hollow nanoscale shells that have applications in drug delivery, batteries, self-healing materials, and as model systems for naturally occurring shell geometries. In many applications, nanocapsules are designed to release their cargo as they buckle and collapse, but the details of this transient buckling process have not been directly observed. Here, we use liquid-phase transmission electron microscopy to record the electron-irradiation-induced buckling in spherical 60-187 nm polymer capsules with ∼3.

Using a model heat engine, we show that neural-network-based reinforcement learning can identify thermodynamic trajectories of maximal efficiency. We consider both gradient and gradient-free reinforcement learning. We use an evolutionary learning algorithm to evolve a population of neural networks, subject to a directive to maximize the efficiency of a trajectory composed of a set of elementary thermodynamic processes; the resulting networks learn to carry out the maximally efficient Carnot, Stirling, or Otto cycles.

We show analytically that training a neural network by conditioned stochastic mutation or neuroevolution of its weights is equivalent, in the limit of small mutations, to gradient descent on the loss function in the presence of Gaussian white noise. Averaged over independent realizations of the learning process, neuroevolution is equivalent to gradient descent on the loss function. We use numerical simulation to show that this correspondence can be observed for finite mutations, for shallow and deep neural networks.
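A sketch of conditioned stochastic mutation on a toy linear-fit problem (a hypothetical example, not the networks used in the paper): mutate all weights with small Gaussian noise and keep the mutation only if the loss does not increase.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(-1, 1, 50)
y = 3.0 * x + 1.0                      # target linear function

def loss(w):
    # fit y = w[0]*x + w[1]; quadratic loss plays the role of energy
    return np.mean((w[0] * x + w[1] - y) ** 2)

def neuroevolve(w, steps=5000, sigma=0.02):
    """Greedy conditioned mutation: accept a weight mutation iff loss does not rise."""
    for _ in range(steps):
        trial = w + rng.normal(0.0, sigma, size=w.shape)
        if loss(trial) <= loss(w):
            w = trial
    return w

w_evo = neuroevolve(np.zeros(2))
print(w_evo)  # approaches (3, 1), as gradient descent would
```

In the small-`sigma` limit the accepted mutations average to a step along the negative loss gradient, which is the correspondence shown analytically in the paper.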

We introduce a minimal model of solid-forming anisotropic molecules that displays, in thermal equilibrium, surface orientational order without bulk orientational order. The model reproduces the nonequilibrium behavior of recent experiments in which a bulk nonequilibrium structure grown by deposition contains regions of orientational order characteristic of the equilibrium surface. This order is deposited, in general, in a nonuniform way, because of the emergence of a growth-poisoning mechanism that causes equilibrated surfaces to grow more slowly than nonequilibrated surfaces.

We use a neural-network ansatz originally designed for the variational optimization of quantum systems to study dynamical large deviations in classical ones. We use recurrent neural networks to describe the large deviations of the dynamical activity of model glasses, kinetically constrained models in two dimensions. We present the first finite-size-scaling analysis of the large-deviation functions of the two-dimensional Fredrickson-Andersen model, and explore the spatial structure of the high-activity sector of the South-or-East model.

Within simulations of molecules deposited on a surface we show that neuroevolutionary learning can design particles and time-dependent protocols to promote self-assembly, without input from physical concepts such as thermal equilibrium or mechanical stability and without prior knowledge of candidate or competing structures. The learning algorithm is capable of both directed and exploratory design: it can assemble a material with a user-defined property, or search for novelty in the space of specified order parameters. In the latter mode it explores the space of what can be made, rather than the space of structures that are low in energy but not necessarily kinetically accessible.

Diamine-appended metal-organic frameworks (MOFs) of the form Mg₂(dobpdc)(diamine)₂ adsorb CO₂ in a cooperative fashion, exhibiting an abrupt change in CO₂ occupancy with pressure or temperature. This change is accompanied by hysteresis. While hysteresis is suggestive of a first-order phase transition, we show that the hysteretic temperature-occupancy curves associated with this material are qualitatively unlike the curves seen in the presence of a phase transition; they are instead consistent with CO₂ chain polymerization, within one-dimensional channels in the MOF, in the absence of a phase transition.

Natural and artificial proteins with designer properties and functionalities offer unparalleled opportunities for functional nanoarchitectures formed through self-assembly. To exploit this potential, however, we need to design the system so that assembly yields the desired architectures while avoiding denaturation, thereby retaining protein functionality. Here we address this challenge with a model system of fluorescent proteins.

Singularities of dynamical large-deviation functions are often interpreted as the signal of a dynamical phase transition and the coexistence of distinct dynamical phases, by analogy with the correspondence between singularities of free energies and equilibrium phase behavior. Here we study models of driven random walkers on a lattice. These models display large-deviation singularities in the limit of large lattice size, but the extent to which each model's phenomenology resembles a phase transition depends on the details of the driving.

A conceptually simple way to classify images is to directly compare test-set data and training-set data. The accuracy of this approach is limited by the method of comparison used, and by the extent to which the training-set data cover configuration space. Here we show that this coverage can be substantially increased using coarse-graining (replacing groups of images by their centroids) and stochastic sampling (using distinct sets of centroids in combination).
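A sketch of the scheme on synthetic data (Gaussian blobs standing in for image classes; not the datasets used in the paper): training points are coarse-grained into centroids, and a test point takes the label of the nearest centroid.

```python
import numpy as np

rng = np.random.default_rng(3)

def make_class(center, n=100, d=16):
    # synthetic "images": d-dimensional points scattered around a class center
    return center + 0.3 * rng.normal(size=(n, d))

def coarse_grain(images, k=5):
    """Replace groups of training images by their centroids (k groups per class)."""
    groups = np.array_split(images, k)
    return np.stack([g.mean(axis=0) for g in groups])

train = {0: make_class(np.zeros(16)), 1: make_class(np.ones(16))}
centroids = {label: coarse_grain(x) for label, x in train.items()}

def classify(test_image):
    """Label of the centroid closest to the test image (Euclidean distance)."""
    return min(centroids, key=lambda lab: np.min(
        np.linalg.norm(centroids[lab] - test_image, axis=1)))

print(classify(np.full(16, 0.9)))  # -> 1 (closer to the ones-class centroids)
```

Stochastic sampling, in the paper's sense, would repeat the grouping in `coarse_grain` with distinct random partitions and combine the resulting centroid sets.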

Since the pioneering work of Ned Seeman in the early 1980s, the use of the DNA molecule as a construction material has experienced rapid growth, leading to the establishment of a new field of science now called structural DNA nanotechnology. Here, the self-recognition properties of DNA are employed to build micrometer-sized molecular objects with nanometer-sized features, thus bridging the nano- to the microscopic world in a programmable fashion. Distinct design strategies and experimental procedures have been developed over the years, enabling the realization of extremely sophisticated structures with a level of control that approaches that of natural macromolecular assemblies.

We show how to bound and calculate the likelihood of dynamical large deviations using evolutionary reinforcement learning. An agent, a stochastic model, propagates a continuous-time Monte Carlo trajectory and receives a reward conditioned upon the values of certain path-extensive quantities. Evolution produces progressively fitter agents, potentially allowing the calculation of a piece of a large-deviation rate function for a particular model and path-extensive quantity.

We show that neural networks trained by evolutionary reinforcement learning can enact efficient molecular self-assembly protocols. Presented with molecular simulation trajectories, networks learn to change temperature and chemical potential in order to promote the assembly of desired structures or choose between competing polymorphs. In the first case, networks reproduce in a qualitative sense the results of previously known protocols, but faster and with higher fidelity; in the second case they identify strategies previously unknown, from which we can extract physical insight.

Förster resonance energy transfer (FRET)-mediated exciton diffusion through artificial nanoscale building-block assemblies could be used as an optoelectronic design element to transport energy. However, so far, nanocrystal (NC) systems have supported diffusion lengths of only 30 nm, too small to be useful in devices. Here, we demonstrate a FRET-mediated exciton diffusion length of 200 nm with 0.
