The emergence of artificial intelligence is profoundly impacting computational chemistry, particularly through machine-learning interatomic potentials (MLIPs). Unlike traditional potential energy surface representations, MLIPs overcome conventional computational scaling limitations by offering an effective combination of accuracy and efficiency for calculating the atomic energies and forces used in molecular simulations. These MLIPs have significantly enhanced molecular simulations across various applications, including large-scale simulations of materials, interfaces, chemical reactions, and beyond. Despite these advances, the construction of training datasets, a critical component for the accuracy of MLIPs, has not received proportional attention, especially in the context of chemical reactivity, which depends on rare barrier-crossing events that are not easily included in the datasets. Here we address this gap by introducing ArcaNN, a comprehensive framework designed for generating training datasets for reactive MLIPs. ArcaNN employs a concurrent learning approach combined with advanced sampling techniques to ensure an accurate representation of high-energy geometries. The framework integrates automated processes for iterative training, exploration, new configuration selection, and energy and force labeling, all while ensuring reproducibility and documentation. We demonstrate ArcaNN's capabilities through two paradigm reactions: a nucleophilic substitution and a Diels-Alder reaction. These examples showcase its effectiveness, the uniformly low error of the resulting MLIP everywhere along the chemical reaction coordinate, and its potential for broad applications in reactive molecular dynamics. Finally, we provide guidelines for assessing the quality of MLIPs in reactive systems.
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11563209
DOI: http://dx.doi.org/10.1039/d4dd00209a
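As a concrete illustration of the selection step in a concurrent-learning cycle of this kind, the sketch below computes the committee force deviation commonly used to flag configurations for ab initio labeling. It is a minimal, generic Python example; the function name, toy data, and threshold are illustrative assumptions, not the ArcaNN API.

```python
import numpy as np

def max_force_deviation(committee_forces):
    """Largest per-atom standard deviation of the committee force predictions.

    committee_forces: array of shape (n_models, n_atoms, 3) holding the forces
    predicted by each MLIP of the committee for a single configuration.
    """
    mean_f = committee_forces.mean(axis=0)                   # (n_atoms, 3)
    sq_dev = ((committee_forces - mean_f) ** 2).sum(axis=2)  # (n_models, n_atoms)
    per_atom_std = np.sqrt(sq_dev.mean(axis=0))              # (n_atoms,)
    return float(per_atom_std.max())

# A configuration visited during exploration is selected for reference
# (ab initio) labeling when the committee disagreement exceeds a threshold;
# configurations below it are considered well covered by the current dataset.
rng = np.random.default_rng(0)
forces = rng.normal(scale=0.05, size=(4, 12, 3))   # 4 models, 12 atoms (toy data)
if max_force_deviation(forces) > 0.05:             # threshold in eV/A (illustrative)
    print("select configuration for labeling")
```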
J Chem Phys
December 2024
Pritzker School of Molecular Engineering, The University of Chicago, Chicago, Illinois 60637, USA.
Machine learning interatomic potentials (MLIPs) are rapidly gaining interest for molecular modeling, as they provide a balance between quantum-mechanical level descriptions of atomic interactions and reasonable computational efficiency. However, questions remain regarding the stability of simulations using these potentials, as well as the extent to which the learned potential energy function can be extrapolated safely. Past studies have encountered challenges when MLIPs are applied to classical benchmark systems.
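One simple diagnostic for the stability question raised here is the total-energy drift of a microcanonical (NVE) trajectory driven by the potential. The sketch below is a generic ASE example, not taken from this study; it assumes an ASE-compatible MLIP calculator is available and uses the toy EMT calculator only as a stand-in so the script runs as written.

```python
from ase import units
from ase.build import bulk
from ase.calculators.emt import EMT          # placeholder for an MLIP calculator
from ase.md.velocitydistribution import MaxwellBoltzmannDistribution
from ase.md.verlet import VelocityVerlet

atoms = bulk("Cu", cubic=True).repeat((3, 3, 3))
atoms.calc = EMT()
MaxwellBoltzmannDistribution(atoms, temperature_K=300)

# Integrate NVE dynamics and record the total (kinetic + potential) energy.
dyn = VelocityVerlet(atoms, timestep=1.0 * units.fs)
energies = []
dyn.attach(lambda: energies.append(atoms.get_total_energy()), interval=10)
dyn.run(1000)

# A large drift per atom signals an unstable or poorly extrapolating potential.
drift = (energies[-1] - energies[0]) / len(atoms)
print(f"Total-energy drift per atom over 1 ps: {drift:.6f} eV")
```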
J Chem Phys
December 2024
Computational Science Research Center, Korea Institute of Science and Technology (KIST), Seoul 02792, Republic of Korea.
Graph neural network interatomic potentials (GNN-IPs) are gaining significant attention due to their capability of learning from large datasets. In particular, universal interatomic potentials based on GNNs, usually trained on crystalline geometries, often exhibit remarkable extrapolative behavior toward untrained domains, such as surfaces and amorphous configurations. However, the origin of this extrapolation capability is not well understood.
Nat Comput Sci
December 2024
Google DeepMind, Mountain View, CA, USA.
Crystallization of amorphous precursors into metastable crystals plays a fundamental role in the formation of new matter, from geological to biological processes in nature to the synthesis and development of new materials in the laboratory. Reliably predicting the outcome of such a process would enable new research directions in these areas, but has remained beyond the reach of molecular modeling or ab initio methods. Here we show that candidates for the crystallization products of amorphous precursors can be predicted in many inorganic systems by sampling the local structural motifs at the atomistic level using universal deep learning interatomic potentials.
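The motif-sampling idea can be illustrated with a much simpler analysis than the one used in the study: histogramming coordination numbers from a neighbor list of an atomistic configuration. The snippet below is a generic ASE-based sketch, not the authors' workflow; the cutoff and the ideal silicon cell are illustrative assumptions.

```python
import numpy as np
from ase.build import bulk
from ase.neighborlist import neighbor_list

# Build a stand-in configuration (in practice this would be a quenched
# amorphous or partially crystallized structure from an MLIP simulation).
atoms = bulk("Si", "diamond", a=5.43).repeat((4, 4, 4))

# Count neighbors within a 2.6 A cutoff and histogram the coordination numbers,
# a crude fingerprint of the local structural motifs present.
i, j = neighbor_list("ij", atoms, 2.6)
coordination = np.bincount(i, minlength=len(atoms))
values, counts = np.unique(coordination, return_counts=True)
print(dict(zip(values.tolist(), counts.tolist())))  # e.g. {4: 512} for ideal diamond Si
```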
J Phys Condens Matter
December 2024
Département de physique et Institut Courtois, Université de Montréal, C.P. 6128, succursale Centre-ville, Montreal, Quebec, H3C 3J7, Canada.
We introduce a machine learning prediction workflow to study the impact of defects on the Raman response of 2D materials. By combining machine-learned interatomic potentials, the Raman-active Γ-weighted density-of-states method, and the splitting of configurations into independent patches, we are able to reach simulation sizes in the tens of thousands of atoms, with diagonalization now being the main bottleneck of the simulation. We apply the method to two systems, isotopic graphene and defective hexagonal boron nitride, and compare our predicted Raman response to experimental results, with good agreement.
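Schematically, and as an assumption about the usual form of the Γ-weighted density of states rather than the exact expression used by the authors, the Raman response of the defective supercell is approximated by projecting each supercell eigenmode onto the Raman-active Γ modes of the pristine crystal:

$$I(\omega) \;\propto\; \sum_{\nu} \Big( \sum_{i \in \Gamma\text{-active}} A_i \, \big|\langle \mathbf{e}_i^{\Gamma} \,\big|\, \mathbf{e}_{\nu} \rangle\big|^{2} \Big)\, \delta(\omega - \omega_{\nu}),$$

where $\mathbf{e}_{\nu}$ and $\omega_{\nu}$ are the eigenvectors and frequencies of the large defective supercell, $\mathbf{e}_i^{\Gamma}$ are the Γ-point eigenvectors of the pristine cell unfolded to the supercell, and $A_i$ are their Raman activities; obtaining the $\omega_{\nu}$ is the diagonalization step identified above as the main bottleneck.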
J Phys Chem Lett
December 2024
Department of Mechanical Engineering & Materials Science, University of Pittsburgh, Pittsburgh, Pennsylvania 15261, United States.
Machine learning interatomic potentials, particularly those based on deep neural networks, have taken significant strides in accelerating first-principles simulations, expanding the accessible length and time scales while retaining near-first-principles accuracy. Notwithstanding their success in accurately describing the physical properties of pristine ionic systems with multiple oxidation states, herein we show that an implementation of deep neural network potentials (DNPs) yields vacancy formation energies in MgO with a significant ∼3 eV error. In contrast, we show that moment tensor potentials can accurately describe all properties of the oxide, including vacancy formation energies.
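For reference, the quantity the ∼3 eV discrepancy refers to is typically defined as follows for a neutral oxygen vacancy (a standard working definition, not necessarily the exact convention used in this study):

$$E_{\mathrm{f}}[V_{\mathrm{O}}] \;=\; E_{\mathrm{tot}}[\mathrm{MgO{:}}V_{\mathrm{O}}] \;-\; E_{\mathrm{tot}}[\mathrm{MgO}] \;+\; \mu_{\mathrm{O}},$$

where the first two terms are the total energies of the defective and pristine supercells and $\mu_{\mathrm{O}}$ is the oxygen chemical potential (for example, half the energy of an O$_2$ molecule in the O-rich limit).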