Publications by authors named "David L Donoho"

Although the concepts of nonuniform sampling (NUS) and non-Fourier spectral reconstruction in multidimensional NMR began to emerge four decades ago, it is only relatively recently that NUS has become more commonplace. Advantages of NUS include the ability to tailor experiments to reduce data collection time and to improve spectral quality, for example through detection of closely spaced peaks.


Modern practice for training classification deepnets involves a terminal phase of training (TPT), which begins at the epoch where training error first vanishes. During TPT, the training error stays effectively zero while the training loss is pushed toward zero. Direct measurements of TPT, for three prototypical deepnet architectures and across seven canonical classification datasets, expose a pervasive inductive bias we call neural collapse (NC), involving four deeply interconnected phenomena.
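
As a hedged illustration (not taken from the paper), one of these phenomena, the collapse of within-class variability of last-layer features, can be probed by comparing within-class to between-class scatter; the feature array, labels, and the exact ratio below are assumptions of this sketch.

```python
import numpy as np

def within_class_collapse(features, labels):
    """Illustrative NC-style probe: trace of within-class scatter against
    the pseudo-inverse of between-class scatter; the value tends toward
    zero as within-class variability collapses.
    features: (n_samples, d); labels: (n_samples,) integer classes."""
    classes = np.unique(labels)
    n, d = features.shape
    mu_global = features.mean(axis=0)
    sw = np.zeros((d, d))   # within-class scatter
    sb = np.zeros((d, d))   # between-class scatter
    for c in classes:
        fc = features[labels == c]
        dev = fc - fc.mean(axis=0)
        sw += dev.T @ dev / n
        diff = (fc.mean(axis=0) - mu_global)[:, None]
        sb += (len(fc) / n) * (diff @ diff.T)
    return np.trace(sw @ np.linalg.pinv(sb)) / len(classes)
```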


We show that in a common high-dimensional covariance model, the choice of loss function has a profound effect on optimal estimation. In an asymptotic framework based on the Spiked Covariance model and the use of orthogonally invariant estimators, we show that optimal estimation of the population covariance matrix boils down to the design of an optimal shrinker that acts elementwise on the sample eigenvalues. Indeed, to each loss function there corresponds a unique admissible eigenvalue shrinker $\eta^*$ dominating all other shrinkers.
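
A minimal sketch of the structure described here, assuming an entirely placeholder shrinker (the paper derives loss-specific optimal shrinkers, which are not reproduced here): eigendecompose the sample covariance, map each sample eigenvalue through a scalar function, and reassemble.

```python
import numpy as np

def shrink_covariance(S, eta):
    """Apply an elementwise eigenvalue shrinker `eta` to a symmetric
    sample covariance S: eta maps one sample eigenvalue to one estimate."""
    lam, V = np.linalg.eigh(S)
    return (V * np.array([eta(l) for l in lam])) @ V.T

# Placeholder shrinker: pull "bulk" eigenvalues to 1, keep large ones.
# gamma = p/n is the aspect ratio; (1 + sqrt(gamma))**2 is the bulk edge
# in the spiked model. Passing large eigenvalues through unchanged is an
# assumption of this sketch, not the paper's optimal rule.
gamma = 0.5
bulk_edge = (1 + np.sqrt(gamma)) ** 2
eta = lambda l: l if l > bulk_edge else 1.0

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 100))          # n=200 samples, p=100 variables
S_hat = shrink_covariance(X.T @ X / 200, eta)
```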


Let $X_0$ be an unknown $M \times N$ matrix. In matrix recovery, one takes $n < MN$ linear measurements $y_1, \dots, y_n$ of $X_0$, where $y_i = \mathrm{Tr}(A_i^T X_0)$ and each $A_i$ is an $M \times N$ matrix. A popular approach for matrix recovery is nuclear norm minimization (NNM): solving the convex optimization problem $\min \|X\|_*$ subject to $y_i = \mathrm{Tr}(A_i^T X)$ for all $1 \le i \le n$, where $\|\cdot\|_*$ denotes the nuclear norm, namely, the sum of the singular values.
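
A hedged sketch of NNM as stated, written with the cvxpy modeling library; the Gaussian measurement matrices, problem sizes, and rank of the target are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
M, N, n = 10, 10, 60
X0 = rng.standard_normal((M, 2)) @ rng.standard_normal((2, N))  # rank-2 target
As = [rng.standard_normal((M, N)) for _ in range(n)]            # measurement matrices
y = np.array([np.trace(A.T @ X0) for A in As])

X = cp.Variable((M, N))
constraints = [cp.trace(As[i].T @ X) == y[i] for i in range(n)]
prob = cp.Problem(cp.Minimize(cp.normNuc(X)), constraints)      # min ||X||_*
prob.solve()
print(np.linalg.norm(X.value - X0))   # small when recovery succeeds
```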


In compressed sensing, one takes $n < N$ samples of an $N$-dimensional vector $x_0$ using an $n \times N$ matrix $A$, obtaining undersampled measurements $y = Ax_0$. For random matrices with independent standard Gaussian entries, it is known that, when $x_0$ is $k$-sparse, there is a precisely determined phase transition: for a certain region in the $(k/n, n/N)$ phase diagram, convex optimization typically finds the sparsest solution, whereas outside that region, it typically fails. It has been shown empirically that the same property, with the same phase transition location, holds for a wide range of non-Gaussian random matrix ensembles.
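
A hedged single-trial sketch at one point $(k/n, n/N)$ of the phase diagram, using cvxpy for the convex ($\ell_1$) reconstruction; the sizes, tolerance, and normalization of the Gaussian ensemble are assumptions.

```python
import numpy as np
import cvxpy as cp

def l1_recovers(N=200, n=100, k=20, seed=0, tol=1e-5):
    """One trial at the phase-diagram point (k/n, n/N) = (0.2, 0.5)."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, N)) / np.sqrt(n)   # Gaussian ensemble
    x0 = np.zeros(N)
    support = rng.choice(N, k, replace=False)
    x0[support] = rng.standard_normal(k)
    y = A @ x0

    x = cp.Variable(N)
    cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == y]).solve()
    return np.linalg.norm(x.value - x0) < tol      # exact recovery?

print(l1_recovers())
```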


Compressed sensing aims to undersample certain high-dimensional signals yet accurately reconstruct them by exploiting signal characteristics. Accurate reconstruction is possible when the object to be recovered is sufficiently sparse in a known basis. Currently, the best known sparsity-undersampling tradeoff is achieved when reconstructing by convex optimization, which is expensive in important large-scale applications.
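
The alternative this work develops is fast iterative message-passing reconstruction. Purely as a hedged sketch of that style of algorithm, the following approximate message passing (AMP) loop combines soft thresholding with an Onsager correction term; the threshold schedule and iteration count are assumptions, not the paper's tuned choices.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def amp(A, y, iters=30, tau=2.0):
    """AMP-style reconstruction: thresholding plus an Onsager correction."""
    n, N = A.shape
    x = np.zeros(N)
    z = y.astype(float).copy()
    for _ in range(iters):
        pseudo = x + A.T @ z                # effective (denoising) observation
        x = soft(pseudo, tau * np.std(z))   # heuristic threshold choice
        # Onsager term: (1/delta) * mean(eta') * z with delta = n/N,
        # where mean(eta') = (# nonzeros)/N for soft thresholding.
        z = y - A @ x + (np.count_nonzero(x) / n) * z
    return x
```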


We describe approximate digital implementations of two new mathematical transforms, namely, the ridgelet transform and the curvelet transform. Our implementations offer exact reconstruction, stability against perturbations, ease of implementation, and low computational complexity. A central tool is Fourier-domain computation of an approximate digital Radon transform.
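
As a hedged sketch of the central tool named here, an approximate digital Radon transform can be computed in the Fourier domain via the projection-slice theorem; the linear interpolation and angle grid below are simplifying assumptions, not necessarily the authors' exact discretization.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def approx_radon(img, n_angles=180):
    """Approximate digital Radon transform via the projection-slice theorem:
    each projection is the 1D inverse FFT of a radial slice of the 2D FFT."""
    n = img.shape[0]
    F = np.fft.fftshift(np.fft.fft2(img))
    c = n // 2
    r = np.arange(n) - c                     # radial sample offsets
    proj = np.empty((n_angles, n))
    for k, theta in enumerate(np.linspace(0, np.pi, n_angles, endpoint=False)):
        rows = c + r * np.sin(theta)
        cols = c + r * np.cos(theta)
        sl = (map_coordinates(F.real, [rows, cols], order=1)
              + 1j * map_coordinates(F.imag, [rows, cols], order=1))
        proj[k] = np.fft.ifft(np.fft.ifftshift(sl)).real
    return proj
```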


In this paper, we present a new method for contrast enhancement based on the curvelet transform. The curvelet transform represents edges better than wavelets and is therefore well suited for multiscale edge enhancement. We compare this approach with enhancement based on the wavelet transform and with the Multiscale Retinex.
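
Curvelet toolboxes are less standardized than wavelet ones, so the following hedged sketch illustrates the same idea, amplifying multiscale detail coefficients to enhance edges, using wavelets via the PyWavelets package as a stand-in; the flat gain is an assumption (published schemes use coefficient-dependent gain curves).

```python
import numpy as np
import pywt

def multiscale_enhance(img, gain=1.5, wavelet="db2", level=3):
    """Amplify detail (edge) coefficients at every scale, then reconstruct.
    A flat gain is a placeholder; real schemes modulate gain per coefficient."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    enhanced = [coeffs[0]]                    # keep the coarse approximation band
    for cH, cV, cD in coeffs[1:]:
        enhanced.append((gain * cH, gain * cV, gain * cD))
    out = pywt.waverec2(enhanced, wavelet)
    return out[:img.shape[0], :img.shape[1]]  # trim possible padding
```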


In a recent paper, a method called morphological component analysis (MCA) was proposed to separate the texture from the natural part in images. MCA relies on an iterative thresholding algorithm, using a threshold that decreases linearly toward zero over the iterations. This paper shows how the convergence of MCA can be drastically improved by exploiting the mutual incoherence of the dictionaries associated with the different components.
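
A hedged sketch of the MCA-style iteration described above, with a DCT dictionary standing in for the texture component and a wavelet dictionary for the piecewise-smooth part; the dictionary choices, iteration count, and threshold schedule are assumptions of this sketch.

```python
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def hard(c, t):
    """Hard thresholding: zero out coefficients with magnitude <= t."""
    return c * (np.abs(c) > t)

def mca(img, iters=50):
    """Separate img into texture + cartoon by alternating thresholded
    projections onto a DCT and a wavelet dictionary, with a threshold
    decreasing linearly toward zero."""
    img = img.astype(float)
    texture = np.zeros_like(img)
    cartoon = np.zeros_like(img)
    lam_max = np.abs(dctn(img, norm="ortho")).max()
    for i in range(iters):
        lam = lam_max * (1 - i / iters)       # linearly decreasing threshold
        resid = img - cartoon                 # texture update (DCT atoms)
        texture = idctn(hard(dctn(resid, norm="ortho"), lam), norm="ortho")
        resid = img - texture                 # cartoon update (wavelet atoms)
        cA, *details = pywt.wavedec2(resid, "db2", level=3)
        details = [tuple(hard(d, lam) for d in D) for D in details]
        cartoon = pywt.waverec2([cA] + details, "db2")[:img.shape[0], :img.shape[1]]
    return texture, cartoon
```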


Iterative thresholding algorithms have a long history of application to signal processing. Although they are intuitive and easy to implement, their development was heuristic and mainly ad hoc. Using a special form of the thresholding operation, called soft thresholding, we show that the fixed point of iterative thresholding is equivalent to minimum $\ell_1$-norm reconstruction.
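
A minimal sketch of the iteration in question, assuming a fixed step size and penalty: a gradient step on the residual followed by the soft-thresholding operator (the AMP sketch above adds a correction term to this basic recursion).

```python
import numpy as np

def ist(A, y, lam=0.1, iters=500):
    """Iterative soft thresholding: gradient step on ||y - Ax||^2 / 2,
    then shrinkage; the fixed point solves the l1-penalized problem."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/||A||^2 keeps the iteration stable
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        v = x + step * (A.T @ (y - A @ x))   # gradient step
        x = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)  # soft threshold
    return x
```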


Background: We applied the Virtual Northern technique to human brain mRNA to systematically measure human mRNA transcript lengths on a genome-wide scale.

Methodology/Principal Findings: We used separation by gel electrophoresis followed by hybridization to cDNA microarrays to measure 8,774 mRNA transcript lengths, representing at least 6,238 genes, at high (>90%) confidence. By comparing these transcript lengths to the RefSeq and H-Invitational full-length cDNA databases, we found that nearly half of our measurements appeared to represent novel transcript variants.


We describe a method for recovering the underlying parametrization of scattered data $m_i$ lying on a manifold $M$ embedded in high-dimensional Euclidean space. The method, Hessian-based locally linear embedding, derives from a conceptual framework of local isometry in which the manifold $M$, viewed as a Riemannian submanifold of the ambient Euclidean space $\mathbb{R}^n$, is locally isometric to an open, connected subset $\Theta$ of Euclidean space $\mathbb{R}^d$. Because $\Theta$ does not have to be convex, this framework is able to handle a significantly wider class of situations than the original ISOMAP algorithm.
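
Hessian-based locally linear embedding is implemented in scikit-learn's manifold module; a short usage example on synthetic data (the dataset and parameter choices are illustrative):

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

X, _ = make_swiss_roll(n_samples=1500, random_state=0)

# method="hessian" requires n_neighbors > n_components * (n_components + 3) / 2
hlle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, method="hessian")
Y = hlle.fit_transform(X)    # (1500, 2) recovered parametrization
```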


Given a dictionary $D = \{d_k\}$ of vectors $d_k$, we seek to represent a signal $S$ as a linear combination $S = \sum_k \gamma_k d_k$, with scalar coefficients $\gamma_k$. In particular, we aim for the sparsest representation possible. In general, this requires a combinatorial optimization process.
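
As a hedged illustration of one standard way to sidestep the combinatorial search, a greedy pursuit can select a sparse coefficient vector over an overcomplete dictionary; this uses scikit-learn's orthogonal matching pursuit, and the sizes and sparsity level are assumptions.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
d, K, k = 64, 256, 5                             # signal dim, dictionary size, sparsity
D = rng.standard_normal((d, K))
D /= np.linalg.norm(D, axis=0)                   # unit-norm atoms

gamma0 = np.zeros(K)
gamma0[rng.choice(K, k, replace=False)] = rng.standard_normal(k)
S = D @ gamma0                                   # signal with a k-sparse representation

gamma = orthogonal_mp(D, S, n_nonzero_coefs=k)   # greedy sparse approximation
print(np.linalg.norm(D @ gamma - S))
```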


The separation of image content into semantic parts plays a vital role in applications such as compression, enhancement, and restoration. In recent years, several pioneering works have suggested basing such a separation on a variational formulation, while others have used independent component analysis and sparsity. This paper presents a novel method for separating images into texture and piecewise-smooth (cartoon) parts, exploiting both the variational and the sparsity mechanisms.


Consider an underdetermined system of linear equations $y = Ax$ with known $y$ and a $d \times n$ matrix $A$. We seek the nonnegative $x$ with the fewest nonzeros satisfying $y = Ax$. In general, this problem is NP-hard.
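
As a hedged illustration, the standard convex surrogate for this NP-hard problem minimizes the sum of the (nonnegative) entries subject to the equality constraints, which is a linear program; the problem sizes below are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
d, n, k = 40, 100, 8
A = rng.standard_normal((d, n))
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.uniform(1, 2, k)   # sparse, nonnegative
y = A @ x0

# min 1'x  subject to  Ax = y, x >= 0  (convex surrogate for fewest nonzeros)
res = linprog(c=np.ones(n), A_eq=A, b_eq=y, bounds=(0, None), method="highs")
print(np.linalg.norm(res.x - x0))    # small when the surrogate finds x0
```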


Let $A$ be a $d \times n$ matrix and $T = T^{n-1}$ the standard simplex in $\mathbb{R}^n$. Suppose that $d$ and $n$ are both large and comparable: $d \approx \delta n$, $\delta \in (0, 1)$. We count the faces of the projected simplex $AT$ when the projector $A$ is chosen uniformly at random from the Grassmann manifold of $d$-dimensional orthoprojectors of $\mathbb{R}^n$.


This paper proposes a new class of related algorithms for deblocking block-transform compressed images and video sequences. The algorithms apply weighted sums to pixel quartets that are symmetrically aligned with respect to block boundaries. The basic weights, aimed at very low bit-rate images, are obtained from a two-dimensional function that obeys predefined constraints.
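
A hedged sketch of the quartet idea for vertical block boundaries: take two pixels on each side of a boundary and replace them with weighted combinations; the 8-pixel block size and the weight values are assumptions of this sketch, not the paper's derived weights.

```python
import numpy as np

def deblock_vertical(img, block=8, w=(0.75, 0.25)):
    """Soften vertical block boundaries by mixing each pixel quartet
    (two pixels on either side of a boundary) with fixed weights.
    Placeholder weights; the paper derives them from a constrained 2-D function."""
    out = img.astype(float).copy()
    a, b = w
    for col in range(block, img.shape[1] - 1, block):
        q = out[:, col - 2:col + 2].copy()               # the quartet columns
        out[:, col - 1] = a * q[:, 1] + b * q[:, 2]      # inner pair, pulled together
        out[:, col]     = a * q[:, 2] + b * q[:, 1]
        out[:, col - 2] = 0.9 * q[:, 0] + 0.1 * q[:, 2]  # outer pair, lightly mixed
        out[:, col + 1] = 0.9 * q[:, 3] + 0.1 * q[:, 1]
    return out
```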
