AI Article Synopsis

  • The text discusses the limitations of sparsity-constrained algorithms such as OMP and FW, whose atom-selection step can be computationally heavy because it requires computing the full gradient just to pick the next non-zero variable.
  • The authors propose to estimate only the gradient's top entry, using greedy methods and randomization techniques, thereby reducing the per-iteration computational cost.
  • They cast the search for the best gradient entry as a best-arm identification problem, introduce a bandit-based algorithm for it, and show experimentally that their methods can speed up existing algorithms by an order of magnitude while matching the performance of exact gradient computations.

Article Abstract

Several sparsity-constrained algorithms, such as orthogonal matching pursuit (OMP) or the Frank-Wolfe (FW) algorithm, work by iteratively selecting a new atom to add to the current set of nonzero variables. This selection step is usually performed by computing the gradient and then looking for the gradient component with the maximal absolute entry. This step can be computationally expensive, especially for large-scale and high-dimensional data. In this paper, we aim at accelerating these sparsity-constrained optimization algorithms by exploiting the key observation that, for these algorithms to work, one only needs the coordinate of the gradient's top entry. Hence, we introduce algorithms based on greedy methods and randomization approaches that aim at cheaply estimating the gradient and its top entry. Another of our contributions is to cast the problem of finding the best gradient entry as best-arm identification in a multiarmed bandit problem. Owing to this novel insight, we are able to provide a bandit-based algorithm that directly estimates the top entry in a very efficient way. Theoretical observations stating that the resulting inexact FW or OMP algorithms behave, with high probability, similarly to their exact versions are also given. We have carried out several experiments showing that the greedy deterministic and bandit approaches we propose can achieve an acceleration of an order of magnitude while being as effective as the exact gradient when used in algorithms such as OMP, FW, or CoSaMP.
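To make the selection step concrete, below is a minimal Python sketch written for a least-squares objective f(x) = 0.5 * ||Ax - b||^2, whose gradient is A^T(Ax - b). It contrasts the exact argmax rule with a bandit-style estimate in the spirit of the abstract: each gradient coordinate is treated as an arm, and uniformly sampled rows give unbiased per-pull estimates. The successive-elimination loop, batch size, and Hoeffding-style confidence radius are illustrative assumptions, not the authors' exact algorithm.

import numpy as np

def exact_top_entry(A, r):
    """Exact selection step used by OMP / FW on f(x) = 0.5 * ||A x - b||^2:
    compute the full gradient g = A.T @ r (with r = A x - b), then take the
    coordinate with the largest absolute value. Costs O(n * d) per call."""
    g = A.T @ r
    return int(np.argmax(np.abs(g)))

def bandit_top_entry(A, r, batch=64, rounds=None, delta=0.05, rng=None):
    """Bandit-style sketch of the same selection step.

    Each gradient coordinate g_j = sum_i A[i, j] * r[i] is an "arm";
    sampling a uniform random row i yields the unbiased per-pull estimate
    n * A[i, j] * r[i] of g_j. A successive-elimination loop keeps only
    the arms whose Hoeffding confidence interval still overlaps the
    leader's. Batch size, round count, and the confidence radius are
    illustrative choices, not the authors' exact parameters."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = A.shape
    rounds = rounds if rounds is not None else int(np.ceil(np.log2(d))) + 8
    bound = n * np.abs(A).max() * np.abs(r).max()  # |n * A[i, j] * r[i]| <= bound
    active = np.arange(d)                          # arms still in play
    means = np.zeros(d)                            # running mean estimate of g_j
    pulls = 0
    for _ in range(rounds):
        rows = rng.integers(0, n, size=batch)
        # batch-averaged unbiased estimates of g_j for the surviving arms
        est = n * (A[np.ix_(rows, active)].T @ r[rows]) / batch
        means[active] = (pulls * means[active] + batch * est) / (pulls + batch)
        pulls += batch
        scores = np.abs(means[active])
        # Hoeffding-style radius; shrinks as 1 / sqrt(pulls)
        radius = bound * np.sqrt(2.0 * np.log(2 * d / delta) / pulls)
        active = active[scores >= scores.max() - 2 * radius]
        if active.size == 1:
            break
    return int(active[np.argmax(np.abs(means[active]))])

# Toy check: plant one dominant coordinate so the top entry is unambiguous.
rng = np.random.default_rng(0)
A = rng.standard_normal((2000, 500))
x_true = np.zeros(500)
x_true[42] = 5.0
r = -(A @ x_true)  # residual A @ x - b at x = 0, with b = A @ x_true
print(exact_top_entry(A, r), bandit_top_entry(A, r, rng=rng))  # both should print 42

On this toy problem both selectors agree with high probability; the bandit variant touches only a random subset of rows per round rather than all n * d entries, which is where the savings come from for large n.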

Source
http://dx.doi.org/10.1109/TNNLS.2016.2600243 (DOI Listing)

Publication Analysis

Top Keywords

top entry (12)
greedy methods (8)
methods randomization (8)
randomization approaches (8)
sparsity-constrained optimization (8)
algorithms (7)
gradient (5)
entry (5)
approaches multiarm (4)
multiarm bandit (4)
