In this brief, kernel principal component analysis (KPCA) is reinterpreted as the solution to a convex optimization problem. More precisely, there is a constrained convex problem for each principal component, whose constraints guarantee that the principal component is indeed a solution and not merely a saddle point. Although these insights do not imply any algorithmic improvement, they can be used to further understand the method, to formulate possible extensions, and to address those extensions rigorously. As an example, a new convex optimization problem for semisupervised classification is proposed, which seems particularly well suited when the number of known labels is small. Our formulation resembles a least squares support vector machine problem whose regularization parameter is multiplied by a negative sign, combined with a variational principle for KPCA. Our primal optimization principle for semisupervised learning is solved in terms of the Lagrange multipliers. Numerical experiments on several classification tasks illustrate the performance of the proposed model in problems with only a few labeled data points.
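For background, the classical variational principle underlying KPCA (the standard kernel eigenproblem of Schölkopf et al., stated here in the usual notation as an assumed reference point, not the brief's own convex program) can be sketched as

\[
\max_{\alpha \in \mathbb{R}^{N}} \; \alpha^{\top} K \alpha
\quad \text{subject to} \quad \alpha^{\top}\alpha = 1, \qquad
\alpha^{\top}\alpha_{j} = 0 \;\; (j = 1, \dots, k-1),
\]

where $K$ is the centered kernel matrix and $\alpha_1, \dots, \alpha_{k-1}$ are the coefficient vectors of the previously extracted components; the stationary points of this problem are exactly the eigenvectors of $K$. Because it maximizes a convex quadratic over a sphere, the problem is not itself convex and its stationary points include saddle points, which is precisely the difficulty the constrained convex reformulation described above is designed to remove.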
DOI: http://dx.doi.org/10.1109/TNNLS.2017.2709838