Rapid growth of the global population and climate change pose major challenges to the sustainable production of food that meets consumer demand. Process-based models (PBMs) have long been used in agricultural crop production for predicting yield and for understanding how the environment regulates plant physiological processes and, in turn, crop growth and development. In recent years, with the increasing use of sensor and communication technologies for data acquisition in agriculture, machine learning (ML) has become a popular tool for yield prediction (especially at large scales) and phenotyping.
IEEE Trans Neural Netw Learn Syst
July 2021
The ability to learn more concepts from incrementally arriving data over time is essential for the development of a lifelong learning system. However, deep neural networks often forget previously learned concepts when continually learning new ones, which is known as the catastrophic forgetting problem. The main reason for catastrophic forgetting is that data for past concepts are no longer available, and the network weights are altered while new concepts are learned incrementally.
IEEE Trans Pattern Anal Mach Intell
January 2022
In this work, we introduce the average top-k (AT_k) loss, defined as the average of the k largest individual losses over the training data, as a new aggregate loss for supervised learning. We show that the AT_k loss is a natural generalization of the two widely used aggregate losses, namely the average loss and the maximum loss. Yet, the AT_k loss can better adapt to different data distributions because of the extra flexibility provided by the choice of k.
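As a quick illustration of this definition (a minimal sketch, not code from the paper), the AT_k aggregate loss is obtained by sorting the per-sample losses and averaging the k largest; k = 1 recovers the maximum loss and k = n recovers the average loss:

```python
import numpy as np

def average_top_k_loss(individual_losses, k):
    """Average of the k largest per-sample losses (AT_k aggregate loss)."""
    losses = np.asarray(individual_losses, dtype=float)
    top_k = np.sort(losses)[-k:]          # k largest individual losses
    return top_k.mean()

# Illustrative per-sample losses for five training examples
losses = [0.0, 0.3, 1.2, 0.1, 2.0]
print(average_top_k_loss(losses, k=1))   # 2.0  -> maximum loss
print(average_top_k_loss(losses, k=5))   # 0.72 -> average loss
print(average_top_k_loss(losses, k=2))   # 1.6  -> average of the two largest
```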
This review examines the relevance of parameter identifiability for statistical models used in machine learning. In addition to defining the main concepts, we address several identifiability issues closely related to machine learning, showing the advantages and disadvantages of state-of-the-art research and demonstrating recent progress. First, we review criteria for determining the parameter structure of models from the literature.
IEEE Trans Image Process
January 2017
This paper presents a data-driven approach for automatically generating cartoon faces in different styles from a given portrait image. Our stylization pipeline consists of two steps: an offline analysis step that learns how to select and compose facial components from the databases, and a runtime synthesis step that generates the cartoon face by assembling parts from a database of stylized facial components. We propose an optimization framework that, for a given artistic style, simultaneously considers the desired image-cartoon relationships of the facial components and a proper adjustment of the image composition.
The correntropy-induced loss (C-loss) function has the desirable property of being robust to outliers. In this paper, we study the C-loss kernel classifier with a Tikhonov regularization term, which is used to avoid overfitting. After applying the half-quadratic optimization algorithm, which converges much faster than gradient-based optimization, we find that the resulting C-loss kernel classifier is equivalent to an iteratively weighted least squares support vector machine (LS-SVM).
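A minimal sketch of the iteratively reweighted scheme that abstract describes, assuming a bias-free LS-SVM formulation and an RBF kernel (assumptions of this sketch, not details taken from the paper): each half-quadratic step solves a weighted least-squares system, and the correntropy weights shrink for samples with large residuals.

```python
import numpy as np

def rbf_kernel(X, Y, width):
    # Gaussian RBF kernel matrix between two sample sets
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def closs_kernel_classifier(X, y, sigma=1.0, lam=0.1, width=1.0, n_iter=20):
    """Half-quadratic sketch: C-loss classification as an iteratively
    reweighted LS-SVM (bias term omitted for simplicity)."""
    K = rbf_kernel(X, X, width)
    n = len(y)
    w = np.ones(n)                                    # correntropy weights
    alpha = np.zeros(n)
    for _ in range(n_iter):
        # weighted LS-SVM step: (K + lam * diag(1/w)) alpha = y
        alpha = np.linalg.solve(K + lam * np.diag(1.0 / w), y.astype(float))
        resid = y - K @ alpha                         # residuals e_i
        w = np.exp(-resid ** 2 / (2.0 * sigma ** 2))  # half-quadratic update
        w = np.maximum(w, 1e-8)                       # avoid division by zero
    return alpha

# usage: labels y in {-1, +1}; predict sign(rbf_kernel(X_test, X_train, width) @ alpha)
```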
IEEE Trans Neural Netw Learn Syst
February 2014
In this paper, both Bayesian and mutual-information classifiers are examined for binary classification with or without a reject option. The general decision rules are derived for Bayesian classifiers with distinctions between error types and reject types. A formal analysis is conducted to reveal the parameter redundancy of the cost terms when abstaining classifications are enforced.
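For context on reject options, here is a minimal sketch of the classical binary Bayesian decision rule with rejection (Chow-style thresholding of the posterior; the cost values are illustrative and not taken from the paper):

```python
def bayes_decide(posterior_pos, reject_cost=0.3):
    """Binary Bayes rule with a reject option under 0-1 error cost.
    Reject when the larger posterior falls below 1 - reject_cost."""
    p = posterior_pos
    p_max = max(p, 1.0 - p)
    if p_max < 1.0 - reject_cost:   # expected error of deciding exceeds reject cost
        return "reject"
    return "+1" if p >= 0.5 else "-1"

for p in (0.95, 0.60, 0.50):
    print(p, bayes_decide(p))       # 0.95 -> +1, 0.60 -> reject, 0.50 -> reject
```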
IEEE Trans Neural Netw Learn Syst
January 2013
This paper proposes a novel nonnegative sparse representation approach, called two-stage sparse representation (TSR), for robust face recognition on a large-scale database. Based on the divide-and-conquer strategy, TSR decomposes robust face recognition into an outlier detection stage and a recognition stage. In the first stage, we propose a general multisubspace framework to learn a robust metric in which noise and outliers in image pixels are detected.
IEEE Trans Neural Netw
December 2011
This paper reports an extension of our previous investigations into adding transparency to neural networks. We focus on a class of linear priors (LPs), such as symmetry, ranking list, boundary, and monotonicity, which represent either linear-equality or linear-inequality priors.
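One way to read "linear-inequality prior" concretely (a toy sketch under the assumption of a plain linear model, not the authors' framework): monotonicity of f(x) = w^T x in feature j is simply the constraint w_j >= 0, which can be imposed during fitting, for example with SciPy's bounded least squares:

```python
import numpy as np
from scipy.optimize import lsq_linear

# Illustrative only: enforce monotonicity priors w_0 >= 0 and w_2 >= 0
# as linear-inequality (bound) constraints on a linear model's weights.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=100)

lower = np.array([0.0, -np.inf, 0.0])       # constrained weights
res = lsq_linear(X, y, bounds=(lower, np.inf))
print(res.x)                                # weight estimates respecting the priors
```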
IEEE Trans Image Process
June 2011
Principal component analysis (PCA) minimizes the mean square error (MSE) and is sensitive to outliers. In this paper, we present a new rotational-invariant PCA based on maximum correntropy criterion (MCC). A half-quadratic optimization algorithm is adopted to compute the correntropy objective.
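A hedged sketch of how such a half-quadratic scheme typically looks in practice, assuming per-sample correntropy weights and a weighted eigendecomposition (an illustration of the idea, not the authors' exact algorithm):

```python
import numpy as np

def mcc_pca(X, n_components=2, sigma=1.0, n_iter=20):
    """Half-quadratic sketch of PCA under the maximum correntropy criterion:
    samples with large reconstruction error are exponentially down-weighted
    at each iteration."""
    n, d = X.shape
    w = np.ones(n)
    for _ in range(n_iter):
        mu = np.average(X, axis=0, weights=w)            # weighted mean
        Xc = X - mu
        C = (Xc * w[:, None]).T @ Xc / w.sum()           # weighted covariance
        vals, vecs = np.linalg.eigh(C)
        U = vecs[:, -n_components:]                      # top principal directions
        resid = Xc - (Xc @ U) @ U.T                      # reconstruction residuals
        err2 = (resid ** 2).sum(axis=1)
        w = np.exp(-err2 / (2.0 * sigma ** 2))           # half-quadratic weight update
        w = np.maximum(w, 1e-8)
    return mu, U
```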
IEEE Trans Pattern Anal Mach Intell
August 2011
In this paper, we present a sparse correntropy framework for computing robust sparse representations of face images for recognition. Compared with the state-of-the-art l1-norm-based sparse representation classifier (SRC), which assumes that the noise also has a sparse representation, our algorithm is developed from the maximum correntropy criterion, which is much less sensitive to outliers. To obtain a more tractable and practical approach, we impose a nonnegativity constraint on the variables in the maximum correntropy criterion and develop a half-quadratic optimization technique that approximately maximizes the objective function in an alternating way; the complex optimization problem is thereby reduced to learning a sparse representation through a weighted linear least squares problem with a nonnegativity constraint at each iteration.
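The alternating scheme described above can be sketched as follows, assuming a pixel-by-atom dictionary `D` and using SciPy's nonnegative least-squares solver for the inner step (parameter names and defaults are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import nnls

def correntropy_sparse_coding(D, y, sigma=1.0, n_iter=10):
    """Half-quadratic sketch: recompute pixel-wise correntropy weights from
    the residual, then solve a weighted nonnegative least squares problem."""
    m, n = D.shape
    w = np.ones(m)                                    # per-pixel weights
    x = np.zeros(n)
    for _ in range(n_iter):
        sw = np.sqrt(w)
        x, _ = nnls(D * sw[:, None], y * sw)          # weighted NNLS step
        resid = y - D @ x
        w = np.exp(-resid ** 2 / (2.0 * sigma ** 2))  # down-weight outlier pixels
        w = np.maximum(w, 1e-8)
    return x
```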
IEEE Trans Neural Netw
April 2009
This brief presents a two-phase construction approach for pruning both input and hidden units of multilayer perceptrons (MLPs) based on mutual information (MI). First, all features of input vectors are ranked according to their relevance to target outputs through a forward strategy. The salient input units of an MLP are thus determined according to the order of the ranking result and by considering their contributions to the network's performance.
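A simplified sketch of mutual-information-based input ranking on synthetic data, here using scikit-learn's `mutual_info_classif` (the paper's forward ranking procedure and hidden-unit pruning phase are not reproduced):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

# Rank input features by mutual information with the class label, then keep
# the most relevant ones before training an MLP (illustrative data only).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 3] + 0.5 * X[:, 7] > 0).astype(int)   # only features 3 and 7 matter

mi = mutual_info_classif(X, y, random_state=0)
ranking = np.argsort(mi)[::-1]                  # most relevant first
print(ranking[:2])                              # typically features 3 and 7
X_pruned = X[:, ranking[:2]]                    # pruned input representation
```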