In this work, we examine some of the problems that arise when developing machine learning models intended to generalize robustly in common-task, multiple-database scenarios. Focusing on a specific medical domain (sleep staging in sleep medicine), we show that translating a model's estimated local generalization capabilities to independent external databases is non-trivial; we refer to this as the "database variability problem". We analyze the scalability problems that appear when data from multiple databases are used to train a single learning model. We then introduce a novel approach based on an ensemble of local models and show its advantages in terms of inter-database generalization performance and data scalability. In addition, we analyze different model configurations and data pre-processing techniques to determine their effects on overall generalization performance. For this purpose, we carry out experiments involving several sleep databases and evaluate different machine learning models based on convolutional neural networks.
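The ensemble-of-local-models idea mentioned above can be sketched minimally as follows. This is an illustrative assumption-based toy, not the authors' implementation: the paper uses convolutional neural networks on sleep recordings, whereas this sketch substitutes logistic regression on synthetic data purely to show the pattern of fitting one model per source database and averaging their predicted class probabilities on an external database.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical stand-ins for per-database data: each "database" is a
# (features, labels) pair with its own mild distribution shift.
databases = []
for shift in (0.0, 0.5, 1.0):
    X = rng.normal(loc=shift, scale=1.0, size=(200, 8))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)  # toy binary "stage" label
    databases.append((X, y))

# Ensemble of local models: fit one model per source database,
# instead of pooling all databases into a single training set.
local_models = [LogisticRegression().fit(X, y) for X, y in databases]

def ensemble_predict(X):
    """Average the per-database class probabilities, then take the argmax."""
    probs = np.mean([m.predict_proba(X) for m in local_models], axis=0)
    return probs.argmax(axis=1)

# "External" database unseen by any local model during training.
X_ext = rng.normal(loc=0.25, scale=1.0, size=(100, 8))
y_ext = (X_ext[:, 0] + X_ext[:, 1] > 0.5).astype(int)
accuracy = (ensemble_predict(X_ext) == y_ext).mean()
```

A practical appeal of this scheme, consistent with the scalability argument in the abstract, is that each local model only ever sees one database, so adding a new source database means training one more member rather than retraining a single model on the entire pooled dataset.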
DOI: http://dx.doi.org/10.1016/j.compbiomed.2020.103697