The on-the-fly generation of machine-learning force fields by active-learning schemes attracts a great deal of attention in the atomistic-simulation community. These algorithms allow the machine to self-learn an interatomic potential and construct the machine-learned model on the fly during the simulation. State-of-the-art query strategies let the machine judge whether a new structure lies outside the training data set. Only when the machine deems it necessary to update the data set with the new structure is a first-principles calculation carried out; otherwise, the currently available machine-learned model is used to update the atomic positions. In this manner, most first-principles calculations are bypassed during training, and the simulations are accelerated overall by several orders of magnitude while retaining near first-principles accuracy. In this Perspective, after describing the essential components of the active-learning algorithms, we demonstrate the power of these schemes by presenting recent applications.
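The loop described above can be illustrated with a minimal, self-contained sketch. Everything in it is an assumption chosen for illustration, not the scheme used in the paper: the "first-principles" calculation is replaced by a toy 1D double-well potential, the surrogate force field by a Gaussian-process regressor from scikit-learn, the query strategy by a simple threshold on the predictive standard deviation, and the names `reference_energy`, `surrogate_energy_force`, and `uncertainty_threshold` are hypothetical placeholders.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy stand-in for an expensive first-principles calculation:
# a 1D double-well potential (illustrative assumption).
def reference_energy(x):
    return (x**2 - 1.0)**2

def reference_force(x):
    return -4.0 * x * (x**2 - 1.0)

# Surrogate "force field": a Gaussian process on the energy,
# with fixed kernel hyperparameters for simplicity.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                              alpha=1e-6, optimizer=None)
train_x, train_e = [], []

def retrain():
    gp.fit(np.array(train_x).reshape(-1, 1), np.array(train_e))

def surrogate_energy_force(x, dx=1e-4):
    # Energy and its predictive uncertainty from the GP;
    # force from a finite difference on the GP mean.
    e, std = gp.predict(np.array([[x]]), return_std=True)
    e_plus = gp.predict(np.array([[x + dx]]))
    e_minus = gp.predict(np.array([[x - dx]]))
    return e[0], -(e_plus[0] - e_minus[0]) / (2.0 * dx), std[0]

# Seed the training set with the initial structure.
x, v, dt, mass = 0.3, 0.0, 0.02, 1.0
train_x.append(x); train_e.append(reference_energy(x))
retrain()

uncertainty_threshold = 0.05  # hypothetical query criterion
n_reference_calls = 0

for step in range(500):
    e, f, std = surrogate_energy_force(x)
    if std > uncertainty_threshold:
        # Structure judged outside the training set: fall back to the
        # reference calculation, extend the data set, and refit the model.
        f = reference_force(x)
        train_x.append(x); train_e.append(reference_energy(x))
        retrain()
        n_reference_calls += 1
    # Otherwise the surrogate force propagates the dynamics
    # (symplectic Euler update of velocity and position).
    v += dt * f / mass
    x += dt * v

print(f"reference calls: {n_reference_calls} out of 500 steps")
```

Running this sketch, the reference potential is queried frequently at the start and only rarely once the surrogate covers the visited configurations, which is the qualitative behavior the Perspective describes; a production scheme would of course use a many-atom descriptor, learned forces, and a principled Bayesian error estimate rather than these toy choices.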
DOI: 10.1021/acs.jpclett.0c01061 (http://dx.doi.org/10.1021/acs.jpclett.0c01061)