Sparse ℓ1- and ℓ∞-Center Classifiers.

IEEE Trans Neural Netw Learn Syst

Published: March 2022

In this article, we discuss two novel sparse versions of the classical nearest-centroid classifier. The proposed sparse classifiers are based on the ℓ1 and ℓ∞ distance criteria, respectively, and perform simultaneous feature selection and classification by detecting the features that are most relevant for the classification task. We formally prove that the training of the proposed sparse models, with both distance criteria, can be performed exactly (i.e., the globally optimal set of features is selected) at a linear computational cost. More precisely, the proposed sparse classifiers are trained in O(mn) + O(m log k) operations, where n is the number of samples, m is the total number of features, and k ≤ m is the number of features to be retained in the classifier. Furthermore, the complexity of testing and classifying a new sample is simply O(k) for both methods. The proposed models can be employed either as stand-alone sparse classifiers or as fast feature-selection techniques for prefiltering the features to be fed to other types of classifiers (e.g., SVMs). The experimental results show that the proposed methods are competitive in accuracy with state-of-the-art feature-selection and classification techniques while having a substantially lower computational cost.
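As a rough illustration of the kind of classifier described above, the Python sketch below builds per-class centroids, retains the k features with the largest centroid gap, and classifies a new sample by its ℓ1 or ℓ∞ distance to the reduced centroids, so that prediction only touches k features. The feature-scoring rule, the function names, and the binary-label setup are illustrative assumptions for this sketch, not the exact optimality criterion proved in the article.

import numpy as np

def train_sparse_center_classifier(X, y, k, norm="l1"):
    # X: (n, m) data matrix, y: (n,) labels in {0, 1}, k: features to keep.
    # NOTE: the score below (absolute gap between class centroids) is an
    # illustrative choice, not necessarily the criterion derived in the paper.
    mu0 = X[y == 0].mean(axis=0)           # centroid of class 0
    mu1 = X[y == 1].mean(axis=0)           # centroid of class 1
    score = np.abs(mu1 - mu0)              # per-feature relevance score
    keep = np.argpartition(score, -k)[-k:] # top-k features without a full sort
    return {"features": keep, "mu0": mu0[keep], "mu1": mu1[keep], "norm": norm}

def predict(model, x):
    # Classify one sample using only the k retained features.
    xs = x[model["features"]]
    ord_ = 1 if model["norm"] == "l1" else np.inf
    d0 = np.linalg.norm(xs - model["mu0"], ord=ord_)
    d1 = np.linalg.norm(xs - model["mu1"], ord=ord_)
    return 0 if d0 <= d1 else 1

The retained feature indices can also be reused as a prefilter: discard all other columns of the data matrix and train another classifier (e.g., an SVM) on the reduced data, as suggested in the abstract.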

DOI: http://dx.doi.org/10.1109/TNNLS.2020.3036838
