A traditional and intuitively appealing Multitask Multiple Kernel Learning (MT-MKL) method is to optimize the sum (thus, the average) of objective functions with a (partially) shared kernel function, which allows information sharing among the tasks. We point out that the obtained solution corresponds to a single point on the Pareto Front (PF) of a multiobjective optimization problem, which considers the concurrent optimization of all task objectives involved in the Multitask Learning (MTL) problem. Motivated by this last observation, and arguing that the former approach is heuristic, we propose a novel support vector machine MT-MKL framework that considers an implicitly defined set of conic combinations of task objectives. We show that solving our framework produces solutions along a path on the aforementioned PF and that it subsumes the optimization of the average of objective functions as a special case. Using the algorithms we derived, we demonstrate through a series of experimental results that the framework can achieve better classification performance compared with similar MTL approaches.
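
To make the scalarization concrete, here is a rough illustrative sketch (not the authors' algorithm): per-task SVM objectives are evaluated on a shared combined kernel, and a conic (nonnegative-weighted) combination of those objectives selects the shared kernel weights. The candidate kernel weights candidate_thetas, the RBF bandwidths gammas, and the hinge-loss surrogate are assumptions made for illustration, and task labels are assumed to lie in {-1, +1}.

import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def task_losses(tasks, theta, gammas, C=1.0):
    # For each task (X, y), train an SVM on the shared combined kernel
    # K = sum_m theta_m * K_m and report its mean hinge loss as the
    # task's objective value.
    losses = []
    for X, y in tasks:
        K = sum(t * rbf_kernel(X, X, gamma=g) for t, g in zip(theta, gammas))
        clf = SVC(C=C, kernel="precomputed").fit(K, y)
        hinge = np.maximum(0.0, 1.0 - y * clf.decision_function(K)).mean()
        losses.append(hinge)
    return np.array(losses)

def best_shared_kernel(tasks, lam, candidate_thetas, gammas):
    # Conic scalarization: pick the shared kernel weights that minimize
    # the lambda-weighted sum of task objectives. Uniform lam recovers
    # the usual "average of objectives" criterion; sweeping lam over the
    # nonnegative orthant visits different trade-offs on the Pareto front.
    scores = [np.dot(lam, task_losses(tasks, th, gammas))
              for th in candidate_thetas]
    return candidate_thetas[int(np.argmin(scores))]

Grid search over candidate_thetas is used only to keep the sketch short; the paper instead derives dedicated optimization algorithms for its implicitly defined set of conic combinations.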

Source: http://dx.doi.org/10.1109/TNNLS.2014.2309939

Publication Analysis

Top Keywords

multitask multiple (8); multiple kernel (8); kernel learning (8); average objective (8); objective functions (8); task objectives (8); pareto-path multitask (4); learning traditional (4); traditional intuitively (4); intuitively appealing (4)

Similar Publications

Multimodal sentiment analysis (MSA) aims to use a variety of sensors to obtain and process information in order to predict the intensity and polarity of human emotions. The main challenges currently faced by multimodal sentiment analysis include: how a model extracts emotional information from a single modality and achieves complementary transmission of information across modalities; how to produce relatively stable predictions even when the sentiment expressed in a single modality is inconsistent with the multimodal label; and how a model can maintain high accuracy when information from a single modality is incomplete or its feature extraction is poor. Traditional methods do not take into account the interaction between unimodal contextual information and multimodal information.

Neural reuse can drive organisms to generalize knowledge across various tasks during learning. However, existing devices mostly focus on architectures rather than network functions and lack the ability to mimic neural reuse. Here, we demonstrate a rationally designed device based on ferroionic CuInP2S6 that accomplishes the neural-reuse function, enabled by dynamic allocation of the ferro-ionic phase.

Background: Repeat neurological assessment is standard in cases of severe acute brain injury. However, conventional measures rely on overt behavior. Unfortunately, behavioral responses may be difficult or impossible for some patients.

Due to its non-contact nature, remote photoplethysmography (rPPG) has attracted widespread attention in recent years and has been widely applied to remote physiological measurement. However, most existing rPPG models cannot estimate multiple physiological signals simultaneously, and the performance of the few available multi-task models is restricted by their single-model architectures. To address these problems, this study proposes MultiPhys, which is developed using a heterogeneous network-fusion approach.
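
As a hedged illustration of what a heterogeneous, multi-output design can look like (this is not MultiPhys itself; the module names, encoder choices, and output signals below are assumptions), one might fuse two structurally different encoders over the same facial signal sequence and attach one regression head per physiological signal:

import torch
import torch.nn as nn

class HeteroFusionRPPG(nn.Module):
    # Hypothetical sketch: a 1-D CNN and a GRU encode the same input
    # sequence, their features are concatenated (heterogeneous fusion),
    # and one head per physiological signal regresses its value.
    def __init__(self, in_ch=3, hidden=64, n_signals=2):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv1d(in_ch, hidden, 5, padding=2),
                                 nn.ReLU(),
                                 nn.AdaptiveAvgPool1d(1))
        self.rnn = nn.GRU(in_ch, hidden, batch_first=True)
        self.heads = nn.ModuleList(nn.Linear(2 * hidden, 1)
                                   for _ in range(n_signals))

    def forward(self, x):                              # x: (batch, time, channels)
        c = self.cnn(x.transpose(1, 2)).squeeze(-1)    # CNN features: (batch, hidden)
        _, h = self.rnn(x)                             # GRU state: (1, batch, hidden)
        fused = torch.cat([c, h.squeeze(0)], dim=-1)   # fuse the two encoders
        return [head(fused) for head in self.heads]    # one output per signal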

Synergistic learning with multi-task DeepONet for efficient PDE problem solving.

Neural Netw

January 2025

School of Engineering, Brown University, United States of America; Division of Applied Mathematics, Brown University, United States of America.

Multi-task learning (MTL) is an inductive transfer mechanism designed to leverage useful information from multiple tasks to improve generalization performance compared to single-task learning. It has been extensively explored in traditional machine learning to address issues such as data sparsity and overfitting in neural networks. In this work, we apply MTL to problems in science and engineering governed by partial differential equations (PDEs).
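
A generic hard-parameter-sharing sketch of this idea (not the paper's multi-task DeepONet; the class and function names, layer sizes, and mean-squared-error loss are illustrative assumptions): a shared trunk feeds one small head per task, and the per-task losses are summed into a single training objective.

import torch
import torch.nn as nn

class SharedTrunkMTL(nn.Module):
    # One shared trunk, one small head per task (hard parameter sharing).
    def __init__(self, in_dim, hidden, n_tasks):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh(),
                                   nn.Linear(hidden, hidden), nn.Tanh())
        self.heads = nn.ModuleList(nn.Linear(hidden, 1) for _ in range(n_tasks))

    def forward(self, x):
        z = self.trunk(x)
        return [head(z) for head in self.heads]

def mtl_step(model, optimizer, task_batches):
    # One optimizer step on the summed per-task losses; task_batches[t]
    # is an (inputs, targets) pair for task t.
    optimizer.zero_grad()
    total = sum(nn.functional.mse_loss(model(x)[t], y)
                for t, (x, y) in enumerate(task_batches))
    total.backward()
    optimizer.step()
    return float(total)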
