Sparsity-Aware Distributed Learning for Gaussian Processes With Linear Multiple Kernel

Gaussian processes (GPs) stand as crucial tools in machine learning and signal processing, with their effectiveness hinging on kernel design and hyperparameter optimization. This article presents a novel GP linear multiple kernel (LMK) and a generic sparsity-aware distributed learning framework to optimize its hyperparameters. The newly proposed grid spectral mixture product (GSMP) kernel is tailored for multidimensional data, effectively reducing the number of hyperparameters while maintaining good approximation capability. We further demonstrate that the associated hyperparameter optimization of this kernel yields sparse solutions. To exploit this inherent sparsity, we introduce the sparse linear multiple kernel learning (SLIM-KL) framework. The framework incorporates a quantized alternating direction method of multipliers (ADMM) scheme for collaborative learning among multiple agents, where the local optimization problem is solved using a distributed successive convex approximation (DSCA) algorithm. SLIM-KL effectively manages large-scale hyperparameter optimization for the proposed kernel while simultaneously ensuring data privacy and minimizing communication costs. Theoretical analysis establishes convergence guarantees for the learning framework, while experiments on diverse datasets demonstrate the superior prediction performance and efficiency of the proposed methods.
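To make the kernel construction concrete, below is a minimal sketch of a grid spectral mixture product kernel in the spirit of the GSMP kernel described in the abstract. It assumes the standard spectral mixture form with frequency means and variances fixed on a per-dimension grid, so that only the mixture weights are learned; the shapes, grid values, and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gsm_kernel_1d(tau, weights, mu_grid, sigma_grid):
    """Grid spectral mixture (GSM) kernel for one input dimension.

    Frequency means/scales sit on a fixed grid; only `weights` is
    learned, which is what makes the hyperparameter vector sparse
    after fitting.
    """
    tau = np.asarray(tau, dtype=float)[..., None]            # (..., 1)
    basis = (np.exp(-2.0 * np.pi**2 * tau**2 * sigma_grid**2)
             * np.cos(2.0 * np.pi * tau * mu_grid))          # (..., Q)
    return basis @ weights                                   # (...,)

def gsmp_kernel(x, y, weights, mu_grid, sigma_grid):
    """GSMP sketch: a product of 1-D GSM kernels across input
    dimensions, so the hyperparameter count grows linearly with
    the input dimension instead of exponentially."""
    k = 1.0
    for d in range(x.shape[-1]):
        k = k * gsm_kernel_1d(x[..., d] - y[..., d],
                              weights[d], mu_grid[d], sigma_grid[d])
    return k

# Toy usage with a hypothetical 8-point frequency grid per dimension.
rng = np.random.default_rng(0)
D, Q = 2, 8
mu = np.tile(np.linspace(0.0, 0.5, Q), (D, 1))   # gridded frequency means
sigma = np.full((D, Q), 0.05)                    # gridded frequency scales
w = rng.random((D, Q))                           # learnable mixture weights
x, y = rng.standard_normal((5, D)), rng.standard_normal((5, D))
print(gsmp_kernel(x, y, w, mu, sigma))           # five pairwise kernel values
```

The distributed side can be illustrated the same way. The toy consensus loop below stands in for the quantized ADMM scheme: each agent inexactly solves its local subproblem (a few gradient steps here, where SLIM-KL uses DSCA), transmits a quantized message, and a consensus variable averages them. The uniform quantizer and step sizes are assumptions made for the sketch, not the paper's scheme.

```python
import numpy as np

def quantize(v, step=0.01):
    """Uniform quantizer standing in for the (unspecified) quantized
    messages exchanged between agents."""
    return step * np.round(v / step)

def quantized_consensus_admm(local_grads, theta0, rho=1.0, iters=50):
    """Toy quantized consensus ADMM over M agents.

    local_grads: one gradient callable per agent's local loss.
    """
    M = len(local_grads)
    theta = [theta0.copy() for _ in range(M)]        # local hyperparameters
    z = theta0.copy()                                # consensus variable
    u = [np.zeros_like(theta0) for _ in range(M)]    # scaled dual variables
    for _ in range(iters):
        for i in range(M):
            # Inexact local update on the augmented Lagrangian
            # (plain gradient steps as a stand-in for DSCA).
            for _ in range(10):
                g = local_grads[i](theta[i]) + rho * (theta[i] - z + u[i])
                theta[i] = theta[i] - 0.2 * g
        # Agents send quantized messages; the consensus step averages them.
        z = np.mean([quantize(theta[i] + u[i]) for i in range(M)], axis=0)
        for i in range(M):
            u[i] += theta[i] - z
    return z

# Toy usage: three agents with quadratic losses 0.5 * ||theta - c_i||^2;
# the consensus solution is (approximately) the mean of the targets.
targets = [np.array([1.0, -2.0]), np.array([0.5, 0.0]), np.array([1.5, -1.0])]
grads = [lambda th, c=c: th - c for c in targets]
print(quantized_consensus_admm(grads, theta0=np.zeros(2)))  # ~ [1.0, -1.0]
```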

Source: http://dx.doi.org/10.1109/TNNLS.2025.3531784

Publication Analysis

Top Keywords

linear multiple: 12
multiple kernel: 12
hyperparameter optimization: 12
sparsity-aware distributed: 8
distributed learning: 8
gaussian processes: 8
learning framework: 8
kernel: 7
learning: 6
learning gaussian: 4
