Feature selection is a critical component of machine learning and data mining that addresses challenges such as irrelevance, noise, and redundancy in large-scale data, which often lead to the curse of dimensionality. This study employs a K-nearest neighbour wrapper to implement feature selection with six nature-inspired algorithms drawn from human-behaviour-inspired and mammal-inspired techniques. Evaluated on six real-world datasets, the study compares the algorithms' performance in terms of accuracy, feature count, fitness, convergence, and computational cost. The findings underscore the efficacy of the Human Learning Optimization, Poor and Rich Optimization, and Grey Wolf Optimizer algorithms across multiple performance metrics. For mean fitness, for instance, Human Learning Optimization outperforms the others, followed by Poor and Rich Optimization and Harmony Search. The study suggests the potential of human-inspired algorithms, particularly Poor and Rich Optimization, for robust feature selection without compromising classification accuracy.
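The abstract does not give the exact fitness formulation, but wrapper-based feature selection of this kind typically scores a candidate feature subset by combining the classifier's cross-validated error with the fraction of features retained. The sketch below illustrates that idea with a K-nearest-neighbour wrapper in Python; the weighting `alpha`, the example dataset, and the function name `wrapper_fitness` are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def wrapper_fitness(mask, X, y, alpha=0.99, k=5):
    """Score a binary feature mask: weighted sum of KNN error and feature ratio.

    Lower is better. `alpha` trades off classification error against subset
    size; the 0.99 default is an assumed, commonly used weighting.
    """
    selected = np.flatnonzero(mask)
    if selected.size == 0:  # an empty subset gets the worst possible fitness
        return 1.0
    knn = KNeighborsClassifier(n_neighbors=k)
    acc = cross_val_score(knn, X[:, selected], y, cv=5).mean()
    error = 1.0 - acc
    return alpha * error + (1 - alpha) * (selected.size / X.shape[1])

# Example: score one random candidate subset, as a metaheuristic would do
# for each individual at each iteration.
X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
mask = rng.integers(0, 2, size=X.shape[1])
print("fitness:", wrapper_fitness(mask, X, y))
```

In a full study, each nature-inspired optimizer would repeatedly propose binary masks like this and keep the ones with the lowest fitness, balancing accuracy against the number of features retained.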
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10770462
DOI: http://dx.doi.org/10.1016/j.heliyon.2023.e23571