Empirical parametrization underpins many scientific methodologies, including certain quantum-chemistry protocols [e.g., density functional theory (DFT) and machine-learning (ML) models]. In some cases, the fitting requires a large amount of data, necessitating the use of data obtained by low-cost, and thus low-quality, means. Here we examine the effect of using low-quality data on the resulting method in the context of DFT methods. We use multiple G2/97 data sets of different qualities to fit the DFT-type methods. Encouragingly, this fitting can tolerate a relatively large proportion of low-quality fitting data, which may be attributed to the physical foundations of the DFT models and the use of a modest number of parameters. Further examination using "ML-quality" data shows that adding a large amount of low-quality data to a small number of high-quality data points may not offer tangible benefits. On the other hand, when the high-quality data is limited in scope, diversification with a modest amount of low-quality data improves the performance. Quantitatively, for parametrizing DFT (and perhaps also quantum-chemistry ML models), caution should be taken when more than 50% of the fitting set contains questionable data and when the average error of the full set is more than 20 kJ mol⁻¹. One may also follow the recently proposed transferability principles to ensure diversity in the fitting set.
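The core experiment described above can be illustrated with a toy fit. The sketch below is not the paper's actual protocol (the real work fits DFT parameters against G2/97 thermochemistry); instead, it uses a hypothetical linear model with a few parameters and shows how the fitted values degrade as an increasing fraction of the reference data carries large ("ML-quality") noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: the "true" model is y = a*x + b; the
# reference data mix a high-quality subset (small noise) with a
# low-quality subset (large noise). All names and noise levels here
# are assumptions for demonstration, not values from the paper.
TRUE_A, TRUE_B = 2.0, -1.0

def fit_with_mixed_data(n_total=100, low_frac=0.5,
                        hi_noise=1.0, lo_noise=20.0):
    """Least-squares fit of a 2-parameter model where a fraction
    `low_frac` of the reference data has much larger noise."""
    n_low = int(n_total * low_frac)
    x = rng.uniform(0.0, 10.0, n_total)
    noise = np.concatenate([
        rng.normal(0.0, hi_noise, n_total - n_low),  # high-quality points
        rng.normal(0.0, lo_noise, n_low),            # low-quality points
    ])
    y = TRUE_A * x + TRUE_B + noise
    a, b = np.polyfit(x, y, 1)  # fit the two parameters
    return a, b

# Parameter recovery vs. fraction of low-quality data in the fitting set
for frac in (0.0, 0.5, 0.9):
    a, b = fit_with_mixed_data(low_frac=frac)
    print(f"low-quality fraction {frac:.0%}: a = {a:6.2f}, b = {b:6.2f}")
```

With few parameters, the fit tolerates a sizable low-quality fraction before the recovered parameters drift noticeably, loosely mirroring the abstract's observation that physically grounded models with modest parameter counts are relatively robust to contaminated fitting sets.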
DOI: http://dx.doi.org/10.1021/acs.jctc.4c01063