The text discusses the limitations of traditional sparsity-constrained algorithms such as Orthogonal Matching Pursuit (OMP) and Frank-Wolfe (FW): at every iteration they compute all components of the gradient merely to select the next non-zero variable, which makes them computationally heavy on large problems.
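To make the cost concrete, here is a minimal sketch (not the authors' code) of the selection step such methods perform, assuming a least-squares objective f(w) = ½‖Xw − y‖². The full gradient Xᵀ(Xw − y) costs O(nd) per iteration, even though only its single largest entry is kept:

```python
import numpy as np

def select_full_gradient(X: np.ndarray, y: np.ndarray, w: np.ndarray) -> int:
    """Return the coordinate with the largest |gradient| entry (OMP/FW-style step)."""
    residual = X @ w - y    # O(n*d) forward pass
    grad = X.T @ residual   # O(n*d) again for the full gradient
    return int(np.argmax(np.abs(grad)))
```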
The authors propose innovations that estimate only the top entry of the gradient, combining greedy methods with randomization techniques to reduce the per-iteration computational cost.
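The following sketch illustrates one such randomized idea under the assumption of a finite-sum least-squares objective: each gradient entry is estimated from a small uniform subsample of the n data points, dropping the cost from O(nd) to O(bd) for batch size b ≪ n. The scaling factor n/b keeps the estimate unbiased, though noisy:

```python
import numpy as np

def approx_top_gradient_entry(X, y, w, batch_size, rng=None):
    """Estimate the top |gradient| entry from a random subsample of rows."""
    rng = rng or np.random.default_rng()
    n = X.shape[0]
    idx = rng.choice(n, size=batch_size, replace=False)  # sample b data points
    residual = X[idx] @ w - y[idx]                       # O(b*d)
    grad_est = X[idx].T @ residual * (n / batch_size)    # unbiased gradient estimate
    return int(np.argmax(np.abs(grad_est)))
```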
They introduce a bandit-based algorithm that efficiently identifies the best gradient entry, and their experiments show that these methods can significantly accelerate existing algorithms while achieving accuracy comparable to exact gradient computation.
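The bandit view can be sketched as a best-arm-identification problem: each coordinate j is an "arm" whose unknown mean is the gradient entry, and pulling arm j means evaluating its contribution on one sampled data point. The successive-elimination sketch below is an illustrative stand-in, not the authors' exact algorithm; the confidence-width constant C and the pull budget are hypothetical tuning knobs:

```python
import numpy as np

def bandit_top_entry(X, y, w, budget=5000, C=1.0, rng=None):
    """Find the coordinate with the largest |gradient| entry via successive elimination."""
    rng = rng or np.random.default_rng()
    n, d = X.shape
    active = list(range(d))
    sums = np.zeros(d)
    pulls = np.zeros(d)
    t = 0
    while len(active) > 1 and t < budget:
        for j in active:
            i = rng.integers(n)                        # pull arm j: one data point
            sums[j] += n * X[i, j] * (X[i] @ w - y[i]) # unbiased estimate of entry j
            pulls[j] += 1
            t += 1
        means = np.abs(sums[active] / pulls[active])
        radius = C * np.sqrt(np.log(max(t, 2)) / pulls[active])
        best_lower = np.max(means - radius)
        # drop arms whose upper confidence bound falls below the best lower bound
        active = [j for j, m, r in zip(active, means, radius) if m + r >= best_lower]
    final = np.abs(sums[active] / np.maximum(pulls[active], 1))
    return active[int(np.argmax(final))]
```

The appeal of this framing is that clearly suboptimal coordinates are eliminated after only a few samples, so most of the data is never touched, which matches the reported speedups over exact computation.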