We propose a new transform design method that targets the generation of compression-optimized transforms for next-generation multimedia applications. The fundamental idea behind transform compression is to exploit regularity within signals such that redundancy is minimized subject to a fidelity cost. Multimedia signals, in particular images and video, are well known to contain a diverse set of localized structures, leading to many different types of regularity and to nonstationary signal statistics. The proposed method designs sparse orthonormal transforms (SOTs) that automatically exploit regularity over different signal structures and provides an adaptation method that determines the best representation over localized regions. Unlike earlier work that is motivated by linear approximation constructs and model-based designs that are limited to specific types of signal regularity, our work uses general nonlinear approximation ideas and a data-driven setup to significantly broaden its reach. We show that our SOT designs provide a safe and principled extension of the Karhunen-Loève transform (KLT) by reducing to the KLT on Gaussian processes and by automatically exploiting non-Gaussian statistics to significantly improve over the KLT on more general processes. We provide an algebraic optimization framework that generates optimized designs for any desired transform structure (multiresolution, block, lapped, and so on) with significantly better n-term approximation performance. For each structure, we propose a new prototype codec and test it over a database of images. Simulation results show a consistent increase in compression and approximation performance compared with conventional methods.
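For readers unfamiliar with the baseline, here is a minimal NumPy sketch of the KLT and of nonlinear n-term approximation as referenced in the abstract. This illustrates only the classical baseline construction, not the paper's SOT design algorithm; the data and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated synthetic data standing in for image blocks (illustrative only).
X = rng.standard_normal((1000, 8)) @ rng.standard_normal((8, 8))

# The KLT of a process is the orthonormal eigenbasis of its covariance.
C = np.cov(X, rowvar=False)      # empirical covariance
_, klt = np.linalg.eigh(C)       # columns are the KLT basis vectors

def n_term_approx(x, basis, n):
    """Nonlinear n-term approximation: keep the n largest-magnitude
    transform coefficients and reconstruct."""
    c = basis.T @ x
    keep = np.argsort(np.abs(c))[-n:]
    c_sparse = np.zeros_like(c)
    c_sparse[keep] = c[keep]
    return basis @ c_sparse

x = X[0]
err4 = np.linalg.norm(x - n_term_approx(x, klt, 4))
```

For an orthonormal basis, the approximation error is exactly the energy of the dropped coefficients, so keeping more coefficients never increases the error; the paper's point is that a KLT (which only decorrelates) can be improved upon by transforms optimized directly for this sparse, nonlinear criterion.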
DOI: http://dx.doi.org/10.1109/TIP.2015.2414879
Commun Stat Theory Methods
April 2023
The University of Chicago Booth School of Business, Chicago, IL.
Two new nonconvex penalty functions, Laplace and arctan, were recently introduced in the literature to obtain sparse models for high-dimensional statistical problems. In this paper, we study the theoretical properties of Laplace- and arctan-penalized ordinary least squares linear regression models. We first illustrate the near-unbiasedness of the nonzero regression weights obtained by the new penalty functions in the orthonormal design case.
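To illustrate why bounded penalties yield nearly unbiased large coefficients, here is one common parameterization of Laplace and arctan penalties; the exact forms, scalings, and hyperparameters in the paper may differ, and `lam`/`gamma` below are illustrative choices.

```python
import numpy as np

# Bounded nonconvex penalties: both saturate, so their derivative vanishes
# for large |b|, leaving large regression weights nearly unshrunk (near-
# unbiased), unlike the l1 penalty, which shrinks all coefficients equally.
def laplace_penalty(b, lam=1.0, gamma=0.5):
    return lam * (1.0 - np.exp(-np.abs(b) / gamma))

def arctan_penalty(b, lam=1.0, gamma=0.5):
    return lam * (2.0 / np.pi) * np.arctan(np.abs(b) / gamma)

def l1_penalty(b, lam=1.0):
    return lam * np.abs(b)

b = np.linspace(0.0, 10.0, 5)
# laplace_penalty(b) and arctan_penalty(b) approach lam; l1_penalty(b) is unbounded.
```

In the orthonormal design case the penalized least squares problem separates coordinate-wise, which is why the bias of each estimated weight can be analyzed directly from the penalty's shape.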
J Chem Phys
June 2023
Center for Computational Materials, Oden Institute for Computational Engineering and Sciences, University of Texas at Austin, Austin, Texas 78712, USA.
We report a Kohn-Sham density functional theory calculation of a system with more than 200 000 atoms and 800 000 electrons, using a real-space high-order finite-difference method to investigate the electronic structure of large spherical silicon nanoclusters. Our system of choice was a 20 nm spherical nanocluster with 202 617 silicon atoms and 13 836 hydrogen atoms passivating the dangling surface bonds. To speed up convergence of the eigenspace computation, we utilized Chebyshev-filtered subspace iteration, and for sparse matrix-vector multiplications, we used blockwise Hilbert space-filling curves, as implemented in the PARSEC code.
Chaos
June 2020
Department of Mathematics, Imperial College London, London SW7 2AZ, United Kingdom.
We introduce new features of data-adaptive harmonic decomposition (DAHD), showcased here to characterize spatiotemporal variability in high-dimensional datasets of complex, multiscale oceanic flows, offering new perspectives and novel insights. First, we present a didactic example with synthetic data for identification of coherent oceanic waves embedded in high-amplitude noise. Then, DAHD is applied to analyze turbulent oceanic flows simulated by the Regional Oceanic Modeling System and an eddy-resolving three-layer quasigeostrophic ocean model, where the resulting spectra exhibit a thin line capturing nearly all the energy at a given temporal frequency and showing well-defined scaling behavior across frequencies.
Sensors (Basel)
May 2020
State Key Laboratory of Scientific and Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, and University of Chinese Academy of Sciences, Beijing 100049, China.
Sparse dictionary learning (SDL) is a classic representation learning method that has been widely used in data analysis. Recently, ℓm-norm (m ≥ 3, m ∈ ℕ) maximization has been proposed to solve SDL, recasting it as an optimization problem with orthogonality constraints. In this paper, we first propose an ℓm-norm maximization model for solving dual principal component pursuit (DPCP), based on the similarities between DPCP and SDL.
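A hedged sketch of the m = 4 case, ℓ4-norm maximization over the orthogonal group, in the spirit of fixed-point (matching-stretching-projection style) iterations from the ℓ4 dictionary-learning literature; the paper's exact ℓm model and algorithm may differ, and the data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 5, 2000
# Ground-truth orthonormal dictionary and sparse (Bernoulli-Gaussian) codes.
D = np.linalg.qr(rng.standard_normal((n, n)))[0]
Xc = rng.standard_normal((n, p)) * (rng.random((n, p)) < 0.2)
Y = D @ Xc                                      # observed signals

# Maximize f(A) = ||A Y||_4^4 subject to A orthogonal.
A = np.linalg.qr(rng.standard_normal((n, n)))[0]  # init on the orthogonal group
obj0 = np.sum((A @ Y) ** 4)
for _ in range(50):
    G = (A @ Y) ** 3 @ Y.T       # gradient of f (up to a factor of 4)
    U, _, Vt = np.linalg.svd(G)
    A = U @ Vt                    # polar projection back onto the orthogonal group
obj = np.sum((A @ Y) ** 4)
# Since f is convex in A, each polar step is an ascent step: obj >= obj0.
```

Because the objective is convex in A and the polar step maximizes its linearization over the orthogonal group, the iteration is monotonically nondecreasing; under suitable sparsity, the recovered A approaches the inverse dictionary up to a signed permutation.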
Neural Netw
August 2020
Department of Mathematics, City University of Hong Kong, Hong Kong. Electronic address:
Graph Neural Networks (GNNs) have become a topic of intense research recently due to their powerful capability in high-dimensional classification and regression tasks for graph-structured data. However, as GNNs typically define the graph convolution by the orthonormal basis for the graph Laplacian, they suffer from high computational cost when the graph size is large. This paper introduces a Haar basis, which is a sparse and localized orthonormal system for a coarse-grained chain on the graph.
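As a hedged illustration, the classical Haar orthonormal basis on a length-8 chain shows the sparse, localized structure such a system provides; the paper's construction applies to a coarse-grained chain built from a general graph, and this is only the simplest chain case.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar basis for a chain of n = 2^k nodes (rows are basis
    vectors): coarse-scale averages plus localized two-node differences."""
    if n == 1:
        return np.array([[1.0]])
    H = haar_matrix(n // 2)
    top = np.kron(H, [1.0, 1.0])                    # averages (coarse scales)
    bottom = np.kron(np.eye(n // 2), [1.0, -1.0])   # differences (localized details)
    return np.vstack([top, bottom]) / np.sqrt(2.0)

H8 = haar_matrix(8)
# Half the basis vectors touch only 2 of the 8 nodes, which is the kind of
# sparsity that makes Haar-based graph convolution cheap for large graphs.
```

Applying such a basis costs far fewer operations than a dense eigenbasis of the graph Laplacian, which is the computational bottleneck the abstract points to.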