In machine learning, multi-objective model selection (MOMS) refers to the problem of identifying the set of Pareto-optimal models that trade off more than one predefined objective simultaneously. This paper introduces SPRINT-Race, the first multi-objective racing algorithm in a fixed-confidence setting, which is based on the sequential probability ratio test with an indifference zone. SPRINT-Race addresses the MOMS problem with multiple stochastic optimization objectives in the sense of proper Pareto-optimality.
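The sequential probability ratio test at the core of SPRINT-Race can be illustrated in its classical Wald form. The sketch below is generic, not the paper's algorithm: the hypotheses, thresholds, and parameter values (p0, p1, alpha, beta) are illustrative assumptions only.

```python
import math

def sprt(observations, p0=0.4, p1=0.6, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for Bernoulli data:
    H0: success rate p0 vs. H1: success rate p1 (p1 > p0).
    Stops as soon as the log-likelihood ratio crosses a threshold."""
    upper = math.log((1 - beta) / alpha)   # cross above: accept H1
    lower = math.log(beta / (1 - alpha))   # cross below: accept H0
    llr = 0.0
    for x in observations:
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return 'H1'
        if llr <= lower:
            return 'H0'
    return 'undecided'   # sample budget exhausted without a decision
```

An indifference zone, as used by SPRINT-Race, additionally treats the two hypotheses as practically equivalent when their difference falls below a tolerance; that refinement is omitted from this sketch.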
Model selection is a core aspect of machine learning and is, occasionally, multi-objective in nature. For instance, hyper-parameter selection in a multi-task learning context is multi-objective, since all the tasks' objectives must be optimized simultaneously. In this paper, a novel multi-objective racing algorithm (RA), namely S-Race, is put forward.
IEEE Trans Neural Netw Learn Syst
April 2016
The support vector machine (SVM) remains a popular classifier for its excellent generalization performance and its compatibility with kernel methods; however, it still requires tuning of a regularization parameter, C, to achieve optimal performance. Regularization path-following algorithms efficiently compute the solution at all possible values of the regularization parameter, relying on the fact that the SVM solution is piecewise linear in C. The SVMPath originally introduced by Hastie et al.
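The motivation for path following (solving over a range of C values rather than one) can be roughly conveyed by a grid-plus-warm-start sketch. This is not the SVMPath algorithm; the 1-D subgradient trainer below is a crude illustrative stand-in for a real SVM solver.

```python
def train_linear_svm(xs, ys, C, epochs=200, lr=0.01, w0=0.0, b0=0.0):
    """Subgradient descent on the 1-D soft-margin SVM objective
    0.5*w**2 + C * sum(max(0, 1 - y*(w*x + b))), warm-started at (w0, b0)."""
    w, b = w0, b0
    for _ in range(epochs):
        gw, gb = w, 0.0                  # gradient of the regularizer term
        for x, y in zip(xs, ys):
            if y * (w * x + b) < 1:      # margin violation: hinge is active
                gw -= C * y * x
                gb -= C * y
        w -= lr * gw
        b -= lr * gb
    return w, b

def svm_grid_path(xs, ys, cs):
    """Approximate the regularization path on a grid of C values,
    warm-starting each solve from the previous solution."""
    w, b, path = 0.0, 0.0, []
    for C in cs:
        w, b = train_linear_svm(xs, ys, C, w0=w, b0=b)
        path.append((C, w, b))
    return path
```

A true path-following method replaces the grid with exact breakpoints, exploiting the piecewise linearity of the solution in C to move between them in closed form.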
A traditional and intuitively appealing Multitask Multiple Kernel Learning (MT-MKL) method is to optimize the sum (thus, the average) of the objective functions with a (partially) shared kernel function, which allows information sharing among the tasks. We point out that the obtained solution corresponds to a single point on the Pareto Front (PF) of a multi-objective optimization problem that considers the concurrent optimization of all task objectives involved in the Multitask Learning (MTL) problem. Motivated by this observation, and arguing that the former approach is heuristic, we propose a novel support vector machine MT-MKL framework that considers an implicitly defined set of conic combinations of task objectives.
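Pareto-optimality, which this abstract and SPRINT-Race both rely on, reduces to a simple dominance check. The following is a generic minimization sketch, not code from either paper:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    no worse in every objective and strictly better in at least one."""
    return (all(ai <= bi for ai, bi in zip(a, b))
            and any(ai < bi for ai, bi in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For example, `pareto_front([(1, 3), (2, 2), (3, 1), (2, 3)])` keeps the first three points, since (2, 3) is dominated by (2, 2).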
IEEE Trans Neural Netw Learn Syst
July 2015
This paper presents a pair of hypothesis spaces (HSs) of vector-valued functions intended to be used in the context of multitask classification. While both are parameterized on the elements of reproducing kernel Hilbert spaces and impose a feature mapping that is common to all tasks, one of them assumes this mapping is fixed, while the more general one learns the mapping via multiple kernel learning. For these new HSs, empirical Rademacher complexity-based generalization bounds are derived and shown to be tighter than the bound of a particular HS that appeared recently in the literature, leading to improved performance.
Existing active set methods reported in the literature for support vector machine (SVM) training must contend with singularities when solving for the search direction. When a singularity is encountered, an infinite descent direction can be carefully chosen that avoids cycling and allows the algorithm to converge. However, the algorithm implementation is likely to be more complex and less computationally efficient than would otherwise be required for an algorithm that does not have to contend with the singularities.
This paper focuses on the evolution of Fuzzy ARTMAP neural network classifiers, using genetic algorithms, with the objective of improving generalization performance (classification accuracy of the ART network on unseen test data) and alleviating the ART category proliferation problem (the problem of creating more than necessary ART network categories to solve a classification problem). We refer to the resulting architecture as GFAM. We demonstrate through extensive experimentation that GFAM exhibits good generalization and is of small size (creates few ART categories), while consuming reasonable computational effort.
Probabilistic neural networks (PNNs) and general regression neural networks (GRNNs) represent knowledge by simple but interpretable models that approximate the optimal classifier or predictor in the sense of expected accuracy. These models require the specification of an important smoothing parameter, which is usually chosen by cross-validation or clustering. In this article, we demonstrate the problems with the cross-validation and clustering approaches to specifying the smoothing parameter, discuss the relationship between this parameter and some of the data statistics, and attempt to develop a fast approach to determining its optimal value.
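The role of the smoothing parameter can be seen in a minimal GRNN (Nadaraya-Watson) regression sketch; the 1-D formulation and the name `sigma` for the bandwidth are illustrative assumptions, not taken from the article:

```python
import math

def grnn_predict(x, train_x, train_y, sigma):
    """GRNN (Nadaraya-Watson) regression estimate at x using a Gaussian
    kernel; sigma is the smoothing (bandwidth) parameter."""
    weights = [math.exp(-(x - xi) ** 2 / (2.0 * sigma ** 2)) for xi in train_x]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)
```

A small sigma effectively interpolates the nearest training points (low bias, high variance), while a large sigma averages over all of them, which is why a principled way of choosing it matters.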
Fuzzy ARTMAP (FAM) is currently considered to be one of the premier neural network architectures for solving classification problems. One of the limitations of Fuzzy ARTMAP that has been extensively reported in the literature is the category proliferation problem. That is, Fuzzy ARTMAP tends to increase its network size as it is confronted with more and more data, especially if the data are noisy and/or overlapping.
Fuzzy ARTMAP neural networks have proven to be good classifiers on a variety of classification problems. However, the time Fuzzy ARTMAP takes to converge to a solution increases rapidly as the number of training patterns increases. In this paper, we examine the time Fuzzy ARTMAP takes to converge to a solution and propose a coarse-grain parallelization technique, based on a pipeline approach, to speed up the training process.
IEEE Trans Syst Man Cybern B Cybern
February 2006
It is widely accepted that the difficulty and expense involved in acquiring the knowledge behind tactical behaviors has been one limiting factor in the development of simulated agents representing adversaries and teammates in military and game simulations. Several researchers have addressed this problem with varying degrees of success. The problem mostly lies in the fact that tactical knowledge is difficult to elicit and represent through interactive sessions between the model developer and the subject matter expert.
In this paper, several modifications to the Fuzzy ARTMAP neural network architecture are proposed for conducting classification in complex, possibly noisy, environments. The goal of these modifications is to improve upon the generalization performance of Fuzzy ART-based neural networks, such as Fuzzy ARTMAP, in these situations. One of the major difficulties of employing Fuzzy ARTMAP on such learning problems involves over-fitting of the training data.
The Fuzzy ARTMAP algorithm has proven to be one of the premier neural network architectures for classification problems. One of the properties of Fuzzy ARTMAP, which can be both an asset and a liability, is its capacity to produce new nodes (templates) on demand to represent classification categories. This property allows Fuzzy ARTMAP to automatically adapt to the database without having to specify its network size a priori.
In this paper, we introduce novel geometric concepts, namely category regions, in the original framework of Fuzzy-ART (FA) and Fuzzy-ARTMAP (FAM). The definitions of these regions are based on geometric interpretations of the vigilance test and of the F2-layer competition of committed nodes with uncommitted ones, which we call the commitment test. It turns out that not only do these regions have the same geometrical shape (polytope structure), but they also share many interesting properties that are demonstrated in this paper.
This paper focuses on two ART architectures: Fuzzy ART and Fuzzy ARTMAP. Fuzzy ART is a pattern clustering machine, while Fuzzy ARTMAP is a pattern classification machine. Our study concentrates on the order in which categories are chosen in Fuzzy ART, or in the ART(a) module of Fuzzy ARTMAP.