Probabilistic linear discriminant analysis (PLDA) is an effective feature extraction approach that has been applied extensively and successfully in supervised learning tasks. It measures model errors with the squared L2-norm, which implicitly assumes a Gaussian noise distribution. However, the noise in real-life applications may not follow a Gaussian distribution; in particular, the squared L2-norm can greatly exaggerate the influence of data outliers. To address this issue, this article proposes a robust PLDA model under the assumption of a Laplacian noise distribution, called L1-PLDA. The learning process expresses the Laplacian density function as a superposition of an infinite number of Gaussian distributions by introducing a new latent variable, and then adopts the variational expectation-maximization (EM) algorithm to learn the model parameters. The most significant advantage of the new model is that the introduced latent variable can be used to detect data outliers. Experiments on several public databases show the superiority of the proposed L1-PLDA model in terms of classification and outlier detection.
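For context, the construction summarized above rests on the standard Gaussian scale-mixture representation of the Laplacian density. A minimal sketch of that identity is given below; the notation (scale b, latent variance τ) is an assumed choice for illustration, not the article's own symbols or derivation.

```latex
% Gaussian scale-mixture representation of the Laplacian density
% (a standard identity; the symbols \mu, b, \tau are assumptions, not the paper's notation).
\[
  \mathrm{Lap}(x \mid \mu, b)
  = \frac{1}{2b}\exp\!\left(-\frac{|x-\mu|}{b}\right)
  = \int_{0}^{\infty} \mathcal{N}\!\left(x \mid \mu, \tau\right)\,
    \underbrace{\frac{1}{2b^{2}}\exp\!\left(-\frac{\tau}{2b^{2}}\right)}_{\text{exponential prior on the latent variance } \tau}\, d\tau .
\]
% Conditioned on the latent variance \tau, the model is Gaussian, so (variational) EM
% updates remain tractable; a large inferred \tau marks an observation as a likely outlier.
```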
DOI: http://dx.doi.org/10.1109/TCYB.2020.2985997