Dictionary learning algorithms for sparse representation.

Neural Comput

Electrical and Computer Engineering, Jacobs School of Engineering, University of California, San Diego, La Jolla, California 92093-0407, USA.

Published: February 2003

Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave/Schur-concave (CSC) negative log priors. Such priors are appropriate for obtaining sparse representations of environmental signals within an appropriately chosen (environmentally matched) dictionary. The elements of the dictionary can be interpreted as concepts, features, or words capable of succinct expression of events encountered in the environment (the source of the measured signals). This is a generalization of vector quantization in that one is interested in a description involving a few dictionary entries (the proverbial "25 words or less"), but not necessarily as succinct as one entry. To learn an environmentally adapted dictionary capable of concise expression of signals generated by the environment, we develop algorithms that iterate between a representative set of sparse representations found by variants of FOCUSS and an update of the dictionary using these sparse representations. Experiments were performed using synthetic data and natural images. For complete dictionaries, we demonstrate that our algorithms have improved performance over other independent component analysis (ICA) methods, measured in terms of signal-to-noise ratios of separated sources. In the overcomplete case, we show that the true underlying dictionary and sparse sources can be accurately recovered. In tests with natural images, learned overcomplete dictionaries are shown to have higher coding efficiency than complete dictionaries; that is, images encoded with an overcomplete dictionary have both higher compression (fewer bits per pixel) and higher accuracy (lower mean square error).
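The alternating scheme described in the abstract — iterate between sparse coding and a dictionary update — can be sketched in NumPy. This is an illustrative reconstruction, not the authors' code: a greedy correlation-plus-least-squares coder stands in for FOCUSS, and a MOD-style least-squares update stands in for the paper's dictionary step; all names and parameter values are chosen for the example.

```python
import numpy as np

def sparse_code(A, Y, k):
    """k-sparse codes for each column of Y: select the k atoms most
    correlated with the signal, then least-squares fit on that support.
    (A greedy stand-in for the FOCUSS step described in the abstract.)"""
    X = np.zeros((A.shape[1], Y.shape[1]))
    for j in range(Y.shape[1]):
        support = np.argsort(-np.abs(A.T @ Y[:, j]))[:k]
        coeffs, *_ = np.linalg.lstsq(A[:, support], Y[:, j], rcond=None)
        X[support, j] = coeffs
    return X

def update_dictionary(Y, X, eps=1e-8):
    """MOD-style update: least-squares fit of the dictionary to Y given
    the codes X, then renormalize each atom to unit length."""
    A = Y @ X.T @ np.linalg.inv(X @ X.T + eps * np.eye(X.shape[0]))
    return A / np.maximum(np.linalg.norm(A, axis=0, keepdims=True), 1e-12)

# Synthetic setup in the spirit of the abstract: signals generated from
# a random overcomplete dictionary (16 atoms in 8 dimensions) with
# k-sparse sources.
rng = np.random.default_rng(0)
m, n_atoms, n_signals, k = 8, 16, 200, 3
A_true = rng.standard_normal((m, n_atoms))
A_true /= np.linalg.norm(A_true, axis=0)
X_true = np.zeros((n_atoms, n_signals))
for j in range(n_signals):
    idx = rng.choice(n_atoms, size=k, replace=False)
    X_true[idx, j] = rng.standard_normal(k)
Y = A_true @ X_true

# Alternate between sparse coding and dictionary update.
A = rng.standard_normal((m, n_atoms))
A /= np.linalg.norm(A, axis=0)
init_err = np.linalg.norm(Y - A @ sparse_code(A, Y, k)) / np.linalg.norm(Y)
for _ in range(30):
    X = sparse_code(A, Y, k)
    A = update_dictionary(Y, X)
final_err = np.linalg.norm(Y - A @ sparse_code(A, Y, k)) / np.linalg.norm(Y)
```

Renormalizing the atoms after each update resolves the scale ambiguity between dictionary and coefficients, the same indeterminacy that any dictionary-learning scheme must fix by convention.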

Source:
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2944020
DOI: http://dx.doi.org/10.1162/089976603762552951

Publication Analysis

Top Keywords

sparse representations: 12
dictionary: 9
overcomplete dictionaries: 8
dictionary sparse: 8
natural images: 8
complete dictionaries: 8
sparse: 5
dictionary learning: 4
algorithms: 4
learning algorithms: 4

Similar Publications

Systems biology tackles the challenge of understanding the highly complex internal regulation of homeostasis in the human body through mathematical modelling. These models can aid in the discovery of disease mechanisms and potential drug targets. However, the development and validation of knowledge-based mechanistic models is time-consuming and does not scale well with the increasing number of features in medical data.

Mode-informed complex-valued neural processes for matched field processing.

J Acoust Soc Am

January 2025

School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, People's Republic of China.

A complex-valued neural process method, combined with modal depth functions (MDFs) of the ocean waveguide, is proposed to reconstruct the acoustic field. Neural networks are used to describe complex Gaussian processes, modeling the distribution of the acoustic field at different depths. The network parameters are optimized through a meta-learning strategy, which prevents overfitting under small-sample conditions (sample size equal to the number of array elements) and mitigates the slow reconstruction speed of Gaussian processes (GPs). The method denoises and interpolates sparsely distributed acoustic field data, generating dense field data for virtual receiver arrays.

Contrastive independent subspace analysis network for multi-view spatial information extraction.

Neural Netw

January 2025

College of Mechatronics and Control Engineering, Shenzhen University, Shenzhen, Guangdong, China.

Multi-view classification integrates features from different views to optimize classification performance. Most existing works utilize semantic information to achieve view fusion but neglect the spatial information of the data itself, which enriches data representation with correlation information and has proven to be an essential aspect. Thus, a robust independent subspace analysis network, optimized with sparse and soft orthogonality constraints, is first proposed to extract the latent spatial information of multi-view data using subspace bases.

Brain imaging genetics aims to explore the association between genetic factors such as single nucleotide polymorphisms (SNPs) and brain imaging quantitative traits (QTs). However, most existing methods consider neither the nonlinear correlations between genotypic and phenotypic data nor the potential higher-order relationships among subjects when identifying bi-multivariate associations. In this paper, a novel method called deep hyper-Laplacian regularized self-representation learning based structured association analysis (DHRSAA) is proposed, which learns genotype-phenotype associations and identifies relevant biomarkers.

Optimal sparsity in autoencoder memory models of the hippocampus.

bioRxiv

January 2025

Center for Theoretical Neuroscience, Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY.

Storing complex correlated memories is significantly more efficient when memories are recoded to obtain compressed representations. Previous work has shown that compression can be implemented in a simple neural circuit, which can be described as a sparse autoencoder. The activity of the encoding units in these models recapitulates the activity of hippocampal neurons recorded in multiple experiments.
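The recoding idea in this snippet can be illustrated with a minimal k-winners-take-all sparse autoencoder sketch in NumPy. This is purely illustrative and not the paper's model: a fixed random encoder keeps only the k most active hidden units per pattern, and a linear decoder fit by least squares reconstructs the correlated memories from the compressed sparse codes; all dimensions and names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden, k, n_mem = 20, 50, 5, 100

# Correlated memories: near low-rank latent structure plus small noise.
Z = rng.standard_normal((4, n_mem))
M = rng.standard_normal((n_in, 4)) @ Z + 0.1 * rng.standard_normal((n_in, n_mem))

W_enc = rng.standard_normal((n_hidden, n_in))

def encode(Y):
    """k-winners-take-all sparse code: keep only the k hidden units
    with the largest absolute activation for each input column."""
    H = W_enc @ Y
    thresh = -np.sort(-np.abs(H), axis=0)[k - 1]  # k-th largest |activation|
    return np.where(np.abs(H) >= thresh, H, 0.0)

H = encode(M)

# Linear decoder (the autoencoder's readout), fit by least squares.
W_dec, *_ = np.linalg.lstsq(H.T, M.T, rcond=None)
recon = (H.T @ W_dec).T
rel_err = np.linalg.norm(M - recon) / np.linalg.norm(M)
```

Because the memories are correlated (close to low-rank), a sufficiently large sparse code loses little information; how the optimal sparsity level arises in such circuits is what the snippet above studies.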
