Background: Comprehensive two-dimensional chromatography generates complex data sets, and numerous baseline-correction and noise-removal algorithms have been proposed in the past decade to address this challenge. However, their performance cannot currently be evaluated objectively because reference data with a known ground truth are lacking.
Result: To tackle this issue, we introduce a versatile platform that models and reconstructs single-trace two-dimensional chromatography data while preserving peak parameters. The approach combines the realism of experimental data with the known ground truth of synthetic data, enabling precise comparisons. Each peak is represented by a Skewed Lorentz-Normal model, and probability distributions are constructed for sampling the relevant parameters. The model's performance is demonstrated on two-dimensional gas chromatography data, where it produced a data set of 458 peaks with an RMSE of 0.0048 or lower and minimal residuals relative to the original data. The same procedure is also demonstrated on liquid chromatography data.
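The workflow described above can be sketched in outline: represent each peak with a skewed, Lorentzian-and-normal-blended shape, sample peak parameters from probability distributions, and score reconstructions with RMSE. The parameterisation below (`frac` mixing weight, skew-normal component, uniform/normal sampling ranges) is a hedged illustration, not the paper's actual model or fitted distributions.

```python
import numpy as np
from math import erf, sqrt

def skewed_lorentz_normal(t, height, loc, scale, skew, frac):
    """Illustrative peak shape: a weighted blend of a Lorentzian and a
    skew-normal profile. The exact parameterisation used in the paper
    may differ; this is a sketch of the general idea."""
    z = (t - loc) / scale
    lorentz = 1.0 / (1.0 + z ** 2)                      # Lorentzian component
    phi = np.exp(-0.5 * z ** 2)                         # normal kernel
    # skewing factor: standard-normal CDF evaluated at skew * z
    Phi = np.array([0.5 * (1.0 + erf(skew * zi / sqrt(2.0))) for zi in z])
    skew_norm = 2.0 * phi * Phi                         # skew-normal shape
    return height * (frac * lorentz + (1.0 - frac) * skew_norm)

def rmse(y_true, y_fit):
    """Root-mean-square error between a reference trace and a reconstruction."""
    return float(np.sqrt(np.mean((y_true - y_fit) ** 2)))

# Sample peak parameters from simple, hypothetical probability distributions.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 501)
trace = sum(
    skewed_lorentz_normal(
        t,
        height=rng.uniform(0.1, 1.0),
        loc=rng.uniform(1.0, 9.0),
        scale=rng.uniform(0.1, 0.5),
        skew=rng.normal(0.0, 2.0),
        frac=rng.uniform(0.0, 1.0),
    )
    for _ in range(5)
)
```

Because the synthetic trace is built from known parameters, any baseline-correction or noise-removal algorithm applied to a noisy copy of `trace` can be scored against the clean original with `rmse`, which is the kind of quantitative comparison the platform enables.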
Significance: Data analysis is an integral component of any analytical method. The development of new data-processing strategies is of paramount importance for tackling the complex signals generated by state-of-the-art separation technology. Through the use of probability distributions, quantitative assessment of the performance of new algorithms is now possible, creating new opportunities for faster, more accurate, and simpler development of data-analysis methods.
DOI: http://dx.doi.org/10.1016/j.aca.2024.342724