IEEE Trans Neural Netw Learn Syst
July 2024
Although large margin classifiers are originally the outcome of an optimization framework, support vectors (SVs) can also be obtained from geometric approaches. This article presents advances in the use of Gabriel graphs (GGs) in binary and multiclass classification problems. For Chipclass, a hyperparameter-less and optimization-less GG-based binary classifier, we discuss how activation functions and support edge (SE)-centered neurons affect classification, proposing smoother functions and structural SV (SSV)-centered neurons to achieve margins with low probabilities and smoother classification contours. We extend the neural network architecture, which can be trained with backpropagation using a softmax function and a cross-entropy loss, or by solving a system of linear equations.
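As a concrete reference point, here is a minimal sketch of the geometric construction the abstract relies on: a brute-force Gabriel graph, whose edges joining points of opposite classes are the support edges on which GG-based classifiers such as Chipclass place neurons. The O(n^3) loop and the function names are illustrative choices, not the paper's implementation.

```python
import numpy as np

def gabriel_edges(X):
    """Brute-force Gabriel graph: (i, j) is an edge iff no third point lies
    inside the ball whose diameter is the segment X[i]-X[j], i.e.
    d(i,j)^2 <= d(i,k)^2 + d(j,k)^2 for every other point k."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    return [(i, j)
            for i in range(n) for j in range(i + 1, n)
            if all(d2[i, j] <= d2[i, k] + d2[j, k]
                   for k in range(n) if k not in (i, j))]

def support_edges(X, y):
    """Gabriel edges joining points of opposite classes; their endpoints act
    as geometric support vectors."""
    return [(i, j) for i, j in gabriel_edges(X) if y[i] != y[j]]

X = np.array([[0., 0.], [1., 0.], [0., 1.], [2., 2.]])
y = np.array([0, 0, 1, 1])
print(support_edges(X, y))  # the opposite-class edges of the toy set
```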
The number of connected embedded edge-computing Internet of Things (IoT) devices has been increasing over the years, contributing to significant growth in the data available in different scenarios. Machine learning algorithms have therefore arisen to enable task automation and process optimization based on those data. However, owing to the computational complexity of the learning methods behind geometric classifiers, mapping them onto embedded systems or devices with limited size, processing, memory, and power while meeting the desired requirements is a challenge.
IEEE Trans Neural Netw Learn Syst
March 2021
This brief presents a geometric approach for obtaining large margin classifiers. The method explores the geometric properties of the data set through the structure of a Gabriel graph, which represents pattern relations according to a given distance metric, such as the Euclidean distance. Once the graph is generated, geometric support vectors (SVs), analogous to the SVs of support vector machines (SVMs), are obtained to yield the final large margin solution from a Gaussian mixture model.
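Purely as an illustration of that last step, the sketch below places one isotropic Gaussian on each geometric SV and labels a query by the sign of the difference between the two class-conditional sums. The bandwidth sigma and the equal component weights are placeholder assumptions; the paper fits an actual Gaussian mixture model rather than this fixed-kernel shortcut.

```python
import numpy as np

def gaussian_sv_classifier(sv_points, sv_labels, sigma=1.0):
    """Decision function built from geometric SVs: one Gaussian bump per SV,
    classes compared by their summed densities. sigma is a hypothetical
    smoothing bandwidth, not the paper's fitted parameter."""
    sv_points = np.asarray(sv_points, dtype=float)
    sv_labels = np.asarray(sv_labels)

    def decide(x):
        d2 = ((sv_points - np.asarray(x, dtype=float)) ** 2).sum(axis=1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        score = w[sv_labels == 1].sum() - w[sv_labels == -1].sum()
        return 1 if score >= 0.0 else -1

    return decide

decide = gaussian_sv_classifier([[0., 0.], [1., 1.]], [-1, 1], sigma=0.5)
print(decide([0.9, 0.8]))  # closer to the +1 SV, so labeled 1
```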
IEEE Trans Neural Netw Learn Syst
November 2020
This article presents a novel representation of artificial neural networks (ANNs) based on a projection of the weights into a new spherical space defined by a radius r and a vector of angles Θ. This spherical representation of ANNs simplifies the multiobjective learning problem, which is usually treated as a constrained optimization problem that requires great computational effort to maintain the constraints. With the proposed spherical representation, the constrained optimization problem becomes unconstrained, which simplifies both the formulation and the computational effort required.
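For intuition, here is a minimal sketch using one standard hyperspherical parameterization: any weight vector (of dimension at least 2) maps losslessly to a radius and a vector of angles, so a norm constraint such as ||w|| <= r_max becomes a simple bound on a single coordinate. The paper's exact projection may differ from this convention.

```python
import numpy as np

def to_spherical(w):
    """Map a weight vector (dim >= 2) to (r, theta) in standard
    hyperspherical coordinates."""
    w = np.asarray(w, dtype=float)
    r = np.linalg.norm(w)
    theta = np.empty(len(w) - 1)
    for k in range(len(w) - 2):
        theta[k] = np.arctan2(np.linalg.norm(w[k + 1:]), w[k])
    theta[-1] = np.arctan2(w[-1], w[-2])  # last angle keeps the sign info
    return r, theta

def to_cartesian(r, theta):
    """Inverse map: rebuild the weight vector from (r, theta)."""
    w = np.empty(len(theta) + 1)
    s = r  # running product r * sin(theta_0) * ... * sin(theta_{k-1})
    for k in range(len(theta)):
        w[k] = s * np.cos(theta[k])
        s *= np.sin(theta[k])
    w[-1] = s
    return w

w = np.array([0.3, -1.2, 0.8, 2.0])
r, theta = to_spherical(w)
print(np.allclose(to_cartesian(r, theta), w))  # True: lossless round trip
```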
Acute leukemia classification into its myeloid and lymphoblastic subtypes is usually accomplished according to the morphology of the tumor. Nevertheless, the subtypes may have similar histopathological appearance, making screening procedures difficult. In addition, approximately one-third of acute myeloid leukemias are characterized by aberrant cytoplasmic localization of nucleophosmin (NPMc(+)), where the majority has a normal karyotype.
Background: Filter feature selection methods compute molecular signatures by selecting subsets of genes from the ranking of a valuation function. The motivations for the choice of valuation function are almost always clearly stated, but those for selecting the genes according to their ranking are hardly ever made explicit.
Method: We addressed the computation of molecular signatures by searching for the optima of a bi-objective function whose solution space was the set of all possible molecular signatures, i.e., the set of subsets of genes.
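To make the bi-objective view concrete, a toy sketch follows: each candidate signature is scored on two objectives to be minimized, its size and the negative of a filter score, and only the non-dominated signatures are kept. The gene names and scores are fabricated placeholders for illustration only.

```python
def pareto_front(points):
    """Indices of non-dominated points when minimizing both coordinates."""
    front = []
    for i, (a1, a2) in enumerate(points):
        dominated = any(b1 <= a1 and b2 <= a2 and (b1, b2) != (a1, a2)
                        for j, (b1, b2) in enumerate(points) if j != i)
        if not dominated:
            front.append(i)
    return front

# Toy signatures: size is one objective, -score the other (both minimized).
signatures = {("g1",): 0.70, ("g1", "g5"): 0.82, ("g2", "g5"): 0.60}
points = [(len(s), -v) for s, v in signatures.items()]
keys = list(signatures)
print([keys[i] for i in pareto_front(points)])  # [('g1',), ('g1', 'g5')]
```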
IEEE Trans Neural Netw Learn Syst
June 2013
Traditional learning algorithms applied to complex and highly imbalanced training sets may not give satisfactory results when distinguishing between examples of the classes. The tendency is to yield classification models that are biased towards the overrepresented (majority) class. This paper investigates this class imbalance problem in the context of multilayer perceptron (MLP) neural networks.
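As a minimal sketch of one standard countermeasure in this setting, inverse-frequency ("balanced") class weights make errors on the minority class proportionally more expensive; this is an illustrative remedy, not necessarily the scheme the paper evaluates.

```python
import numpy as np

def class_weights(y):
    """Inverse-frequency weights: n_samples / (n_classes * count_per_class),
    so the minority class receives a proportionally larger loss weight."""
    classes, counts = np.unique(y, return_counts=True)
    return dict(zip(classes, len(y) / (len(classes) * counts)))

y = np.array([0] * 95 + [1] * 5)   # a 95:5 imbalanced training set
print(class_weights(y))            # {0: ~0.53, 1: 10.0}
```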
The Pareto-optimality concept is used in this paper to represent a constrained set of solutions that trade off the two main objective functions involved in supervised learning of neural networks: data-set error and network complexity. The neural network is described as a dynamic system having error and complexity as its state variables, and learning is presented as the process of controlling a learning trajectory in the resulting state space. To control the trajectories, sliding-mode dynamics is imposed on the network.
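The error-versus-complexity trade-off can be made tangible with a toy scalarization on a linear model, sketched below: sweeping the weight lambda traces points along the trade-off curve between training error and ||w||^2. This is only a stand-in for intuition; the paper steers the (error, complexity) trajectory directly with sliding-mode control rather than solving weighted-sum problems.

```python
import numpy as np

# Synthetic regression problem standing in for the training set.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

for lam in (0.0, 0.1, 1.0, 10.0):
    # Minimize error + lam * complexity (ridge closed form).
    w = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
    err = np.mean((X @ w - y) ** 2)
    print(f"lambda={lam:5.1f}  error={err:.4f}  complexity={w @ w:.4f}")
```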
IEEE/ACM Trans Comput Biol Bioinform
April 2012
A large number of unclassified sequences is still found in public databases, suggesting that new investigations in the area are still needed. In this contribution, we present a methodology based on artificial neural networks for protein functional classification. A new protein coding scheme, called here Extended-Sequence Coding by Sliding Windows, is presented with the goal of overcoming some of the difficulties of the well-known Sequence Coding by Sliding Window method.
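For orientation, a generic sliding-window encoder is sketched below: each amino acid is mapped to a number and consecutive windows become fixed-length ANN inputs. The per-residue code is a hypothetical placeholder; the abstract does not specify the actual Extended-Sequence Coding scheme.

```python
# Hypothetical per-residue code: the 20 amino acids scaled into (0, 1].
AMINO = "ACDEFGHIKLMNPQRSTVWY"
CODE = {a: (i + 1) / len(AMINO) for i, a in enumerate(AMINO)}

def sliding_windows(seq, width=5, step=1):
    """Fixed-length numeric windows over a protein sequence."""
    return [[CODE[a] for a in seq[i:i + width]]
            for i in range(0, len(seq) - width + 1, step)]

print(sliding_windows("MKTAYIAKQR", width=4))  # 7 windows of 4 values each
```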
Annu Int Conf IEEE Eng Med Biol Soc
April 2011
In this paper, we propose an application of local statistical models to the problem of identifying patients with pathologic complete response (PCR) to neoadjuvant chemotherapy. The idea of using local models is to split the input space (with data from PCR and NoPCR patients) and build a model for each partition. After constructing the models, we used Bayesian classifiers and logistic regression to assign patients to the two classes.
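A minimal sketch of that recipe on synthetic stand-in data (not the patient cohort): split the input space, here by a simple median split on one feature, and fit one logistic regression per partition; the paper's actual partitioning and its Bayesian-classifier variant follow the same pattern.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))                    # synthetic patient features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # stand-in PCR / NoPCR labels

split = np.median(X[:, 2])
part = (X[:, 2] > split).astype(int)             # two regions of input space
local = {k: LogisticRegression().fit(X[part == k], y[part == k])
         for k in (0, 1)}

x_new = rng.normal(size=(1, 4))
k = int(x_new[0, 2] > split)                     # route to the local model
print(local[k].predict_proba(x_new))             # local class probabilities
```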
Inspired by the theory of neuronal group selection (TNGS), we have analyzed, through evolutionary computation, the convergence capacity of a multilevel associative memory based on coupled generalized brain-state-in-a-box (GBSB) networks. The TNGS establishes that a memory process can be described as organized functionally in hierarchical levels, where higher levels coordinate sets of functions of lower levels. According to this theory, the most basic units in the cortical area of the brain are called neuronal groups, or first-level blocks of memories, and higher-level memories are formed through selective strengthening or weakening of the synapses among the neuronal groups.
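For reference, the basic brain-state-in-a-box update that GBSB generalizes is sketched below: linear feedback followed by clipping onto the hypercube, with a simple outer-product weight matrix standing in for the evolved inter-group couplings of the paper.

```python
import numpy as np

def bsb_step(x, W, beta=0.2):
    """One brain-state-in-a-box update: feedback through W, then a
    piecewise-linear squash onto the box [-1, 1]^n."""
    return np.clip(x + beta * (W @ x), -1.0, 1.0)

# Store pattern p as an attractor via an outer-product weight matrix
# (a hypothetical choice standing in for the paper's evolved couplings).
p = np.array([1.0, -1.0, 1.0, -1.0])
W = np.outer(p, p) / len(p)

x = p + 0.6 * np.random.default_rng(0).normal(size=4)  # noisy probe
for _ in range(20):
    x = bsb_step(x, W)
print(x)  # settles on a vertex of the box, ideally recovering p
```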