Publications by authors named "Kevin Ryczko"

We introduce a deep neural network (DNN) framework called the real-space atomic decomposition network (radnet), which is capable of making accurate predictions of polarization and of electronic dielectric permittivity tensors in solids, and aims to address limitations of previously available machine learning models for Raman predictions in periodic systems. This framework builds on previous, atom-centered approaches while utilizing deep convolutional neural networks. We report excellent accuracies on direct predictions for two prototypical examples: GaAs and BN.

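As a rough illustration of the kind of model this describes, the sketch below is a hypothetical PyTorch architecture (not the published radnet; all layer sizes and the voxel-grid input are assumptions) that maps a voxelized crystal representation to a 3-component polarization vector and a 3×3 dielectric tensor.

```python
# Hypothetical sketch: a 3D CNN mapping a voxelized crystal representation
# to a polarization vector (3 values) and a dielectric tensor (3x3).
# This is NOT the published radnet architecture; shapes are illustrative.
import torch
import torch.nn as nn

class TensorPredictionCNN(nn.Module):
    def __init__(self, in_channels=1, grid=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                      # grid -> grid/2
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                      # grid/2 -> grid/4
        )
        flat = 32 * (grid // 4) ** 3
        self.polarization = nn.Linear(flat, 3)    # P_x, P_y, P_z
        self.dielectric = nn.Linear(flat, 9)      # flattened 3x3 epsilon

    def forward(self, x):
        h = self.features(x).flatten(start_dim=1)
        return self.polarization(h), self.dielectric(h).view(-1, 3, 3)

# Example: one "crystal" voxelized on a 32^3 grid (batch of 1, 1 channel).
model = TensorPredictionCNN()
pol, eps = model(torch.randn(1, 1, 32, 32, 32))
print(pol.shape, eps.shape)  # torch.Size([1, 3]) torch.Size([1, 3, 3])
```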

We present a high-throughput, end-to-end pipeline for organic crystal structure prediction (CSP): the problem of identifying the stable crystal structures that will form from a given molecule based only on its molecular composition. Our tool uses neural network potentials to allow for efficient screening and structural relaxation of generated crystal candidates. Our pipeline consists of two distinct stages: random search, in which crystal candidates are randomly generated and screened, and optimization, in which a genetic algorithm (GA) optimizes this screened population.

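The two-stage structure can be sketched as follows; `random_candidate`, `nn_potential_energy`, and `mutate` are hypothetical toy stand-ins for a real structure generator, a neural network potential, and GA operators.

```python
# Minimal sketch of the two-stage CSP search described above: random
# generation plus screening, then a genetic algorithm over the survivors.
import random

def random_candidate(n_params=6):
    # Toy stand-in for a random crystal candidate (e.g. lattice parameters).
    return [random.uniform(0.0, 1.0) for _ in range(n_params)]

def nn_potential_energy(candidate):
    # Placeholder for an NN-potential energy evaluation (lower is better).
    return sum((x - 0.5) ** 2 for x in candidate)

def mutate(candidate, scale=0.05):
    return [x + random.gauss(0.0, scale) for x in candidate]

# Stage 1: random search, keep the lowest-energy fraction of candidates.
pool = sorted((random_candidate() for _ in range(500)),
              key=nn_potential_energy)[:50]

# Stage 2: GA-style optimization of the screened population.
for generation in range(100):
    children = [mutate(random.choice(pool)) for _ in range(50)]
    pool = sorted(pool + children, key=nn_potential_energy)[:50]

print("best energy:", nn_potential_energy(pool[0]))
```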

We present two machine learning methodologies that are capable of predicting diffusion Monte Carlo (DMC) energies with small data sets (≈60 DMC calculations in total). The first uses voxel deep neural networks (VDNNs) to predict DMC energy densities using Kohn-Sham density functional theory (DFT) electron densities as input. The second uses kernel ridge regression (KRR) to predict atomic contributions to the DMC total energy using atomic environment vectors as input (we used atom-centered symmetry functions, atomic environment vectors from the ANI models, and smooth overlap of atomic positions (SOAP) descriptors).

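A minimal sketch of the second (KRR) approach, using scikit-learn with random synthetic descriptors and energies standing in for real ACSF/ANI/SOAP vectors and DMC data:

```python
# Hedged sketch: kernel ridge regression mapping per-atom environment
# descriptors to atomic energy contributions, summed for a total energy.
# Descriptors and targets here are synthetic stand-ins, not real DMC data.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
n_atoms, n_features = 200, 32
X = rng.normal(size=(n_atoms, n_features))        # atomic environment vectors
y = X @ rng.normal(size=n_features) * 0.01 - 5.0  # synthetic atomic energies

model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.1)
model.fit(X, y)

# Total energy of a new 10-atom configuration = sum of atomic contributions.
X_new = rng.normal(size=(10, n_features))
print("predicted total energy:", model.predict(X_new).sum())
```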

We use voxel deep neural networks to predict energy densities and functional derivatives of electron kinetic energies for the Thomas-Fermi model and Kohn-Sham density functional theory calculations. We show that the ground-state electron density of a graphene lattice can be found via direct minimization, without any projection scheme, using a voxel deep neural network trained with the Thomas-Fermi model. Additionally, we predict the kinetic energy of a graphene lattice within chemical accuracy after training on only two Kohn-Sham density functional theory (DFT) calculations.

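To make the idea concrete, the sketch below (an illustrative untrained model, not the paper's network) shows how a voxel network can yield both a kinetic-energy density and, via automatic differentiation, the gradient with respect to the density on the grid that direct minimization would step along.

```python
# Illustrative sketch: a voxel network predicts a kinetic-energy density
# t[n](r) on a grid; the total kinetic energy is the grid sum times the
# voxel volume, and autograd supplies dT/dn on the grid for minimization.
import torch
import torch.nn as nn

voxel_net = nn.Sequential(      # density voxels -> energy-density voxels
    nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(8, 1, kernel_size=3, padding=1),
)

density = torch.rand(1, 1, 16, 16, 16, requires_grad=True)
voxel_volume = 0.1  # assumed grid spacing^3, arbitrary units

kinetic_energy = voxel_net(density).sum() * voxel_volume
kinetic_energy.backward()

# density.grad now holds dT/dn on the grid; a direct-minimization loop
# would step along -grad, projected to conserve total electron number.
print(kinetic_energy.item(), density.grad.shape)
```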

We propose a neural evolution structure (NES) generation methodology combining artificial neural networks and evolutionary algorithms to generate high-entropy alloy structures. Our inverse-design approach is based on pair distribution functions and atomic properties, and allows one to train a model on smaller unit cells and then generate a larger cell. With a speed-up factor of ∼1000 with respect to special quasi-random structures (SQSs), the NESs dramatically reduce computational costs and time, making it possible to generate very large structures (over 40 000 atoms) in a few hours.

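The interplay of the neural network and the evolutionary loop might look like the toy sketch below; the concentration-based features and the (untrained, randomly initialized) scoring network are assumptions for illustration only, not the paper's descriptors.

```python
# Hedged sketch of an NES-style loop: an evolutionary algorithm assigns
# species to lattice sites, and a neural network scores candidates from
# simple features. Feature extraction and the network are illustrative.
import random
import torch
import torch.nn as nn

SPECIES = [0, 1, 2, 3, 4]   # a 5-component high entropy alloy
N_SITES = 64

scorer = nn.Sequential(nn.Linear(len(SPECIES), 16), nn.ReLU(),
                       nn.Linear(16, 1))

def features(decoration):
    # Toy stand-in for PDF/atomic-property features: species concentrations.
    counts = torch.bincount(torch.tensor(decoration), minlength=len(SPECIES))
    return counts.float() / N_SITES

def fitness(decoration):
    with torch.no_grad():
        return scorer(features(decoration)).item()

def mutate(decoration):
    child = decoration[:]
    child[random.randrange(N_SITES)] = random.choice(SPECIES)
    return child

population = [[random.choice(SPECIES) for _ in range(N_SITES)]
              for _ in range(20)]
for generation in range(50):
    population += [mutate(random.choice(population)) for _ in range(20)]
    population = sorted(population, key=fitness, reverse=True)[:20]

print("best fitness:", fitness(population[0]))
```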

We present a physically motivated topology of a deep neural network that can efficiently infer extensive parameters (such as energy, entropy, or number of particles) of arbitrarily large systems, doing so with O(N) scaling. We use a form of domain decomposition for training and inference, where each sub-domain (tile) comprises a non-overlapping focus region surrounded by an overlapping context region. The size of these regions is motivated by the physical interaction length scales of the problem.

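The tiling scheme can be sketched in one dimension as follows; `tile_predict` is a hypothetical placeholder for the trained network, which would map a full tile (focus plus context) to the focus region's contribution to the extensive total.

```python
# Minimal 1D sketch of the focus/context tiling described above. Focus
# regions are disjoint; context halos overlap neighbours. Contributions
# from each focus region add up to the extensive quantity.
import numpy as np

def tile_predict(focus_region, tile_with_context):
    # Toy stand-in for the trained network: a real model would map the
    # whole tile (focus + context) to the focus region's contribution.
    return focus_region.sum()

def extensive_inference(field, focus=8, context=4):
    n, total = len(field), 0.0
    for start in range(0, n, focus):
        focus_region = field[start:start + focus]              # disjoint
        tile = field[max(0, start - context):
                     min(n, start + focus + context)]          # with halo
        total += tile_predict(focus_region, tile)
        # With periodic boundaries one would wrap indices instead of clip.
    return total

field = np.random.rand(64)           # e.g. a density sampled on a 1D grid
print(extensive_inference(field))    # equals field.sum() for the toy model
```

Because each tile has fixed size, the evaluation cost grows linearly with the number of grid points, which is what gives the inference its O(N) scaling.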