In recent years, deep learning has gained popularity for its ability to solve complex classification tasks. Its results keep improving thanks to the development of more accurate models, the availability of huge volumes of data, and the greater computational power of modern hardware. However, these gains in performance also bring efficiency problems: storing ever-larger datasets and models, and the energy and time spent in both training and inference. In this context, data reduction can lower the energy consumed when training a deep learning model. In this paper, we present eight different methods to reduce the size of a tabular training dataset, and we develop a Python package to apply them. We also introduce a topology-based representativeness metric to measure the similarity between the reduced datasets and the full training dataset. Additionally, we develop a methodology for applying these data reduction methods to image datasets for object detection tasks. Finally, we experimentally compare how these data reduction methods affect the representativeness of the reduced dataset, the energy consumption of training, and the predictive performance of the model.
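To make the idea concrete, here is a minimal sketch of one common data reduction strategy for tabular data, stratified random sampling, which keeps a fraction of the rows while preserving class proportions. This is an illustrative example only, not the paper's package; the function name and parameters are hypothetical.

```python
import numpy as np

def stratified_reduce(X, y, fraction, seed=0):
    """Keep roughly `fraction` of the rows, preserving class proportions.

    Illustrative sketch of stratified random sampling as a data
    reduction method; not the authors' implementation.
    """
    rng = np.random.default_rng(seed)
    kept = []
    for cls in np.unique(y):
        idx = np.flatnonzero(y == cls)          # rows of this class
        n_keep = max(1, int(round(fraction * idx.size)))
        kept.append(rng.choice(idx, size=n_keep, replace=False))
    kept = np.concatenate(kept)
    return X[kept], y[kept]

# Example: reduce a balanced 1000-row dataset to ~10% of its size.
X = np.random.rand(1000, 5)
y = np.repeat([0, 1], 500)
X_red, y_red = stratified_reduce(X, y, fraction=0.1)
print(X_red.shape)  # (100, 5)
```

A reduced set like this can then be compared against the full training set, e.g. with a representativeness metric, before being used to train a model at lower energy cost.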


Source

PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11413558
DOI: http://dx.doi.org/10.12688/openreseurope.17554.2

