DeepTensor is a computationally efficient framework for low-rank decomposition of matrices and tensors using deep generative networks. We decompose a tensor as the product of low-rank tensor factors (e.g., a matrix as the outer product of two vectors), where each low-rank tensor is generated by a deep network (DN) that is trained in a self-supervised manner to minimize the mean-square approximation error. Our key observation is that the implicit regularization inherent in DNs enables them to capture nonlinear signal structures (e.g., manifolds) that are out of the reach of classical linear methods like the singular value decomposition (SVD) and principal component analysis (PCA). Furthermore, in contrast to the SVD and PCA, whose performance deteriorates when the tensor's entries deviate from additive white Gaussian noise, we demonstrate that the performance of DeepTensor is robust to a wide range of distributions. We validate that DeepTensor is a robust and computationally efficient drop-in replacement for the SVD, PCA, nonnegative matrix factorization (NMF), and similar decompositions by exploring a range of real-world applications, including hyperspectral image denoising, 3D MRI tomography, and image classification. In particular, DeepTensor offers a 6 dB signal-to-noise ratio improvement over standard denoising methods for signals corrupted by Poisson noise and learns to decompose 3D tensors 60 times faster than a single DN equipped with 3D convolutions.
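To make the underlying objective concrete, here is a minimal NumPy sketch of the low-rank factorization that DeepTensor builds on: a rank-r matrix approximation X ≈ U Vᵀ fit by gradient descent on the mean-square error. This is the classical linear special case only; in DeepTensor, U and V would instead be the outputs of deep networks whose weights are optimized. All dimensions, the learning rate, and the synthetic data are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ground-truth rank-3 matrix plus noise (illustrative, not from the paper).
r, m, n = 3, 50, 40
X_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
X = X_true + 0.1 * rng.standard_normal((m, n))

# Factors U (m x r) and V (n x r), initialized small. DeepTensor would
# generate these factors as outputs of two DNs and train the DN weights
# instead of the factors directly.
U = 0.1 * rng.standard_normal((m, r))
V = 0.1 * rng.standard_normal((n, r))

lr = 2e-3  # step size chosen for this toy problem; may need tuning
for step in range(5000):
    E = U @ V.T - X      # residual of the current approximation
    gU = E @ V           # gradient of 0.5 * ||U V^T - X||_F^2 w.r.t. U
    gV = E.T @ U         # gradient w.r.t. V
    U -= lr * gU
    V -= lr * gV

mse = np.mean((U @ V.T - X) ** 2)  # should approach the noise floor
```

Swapping the direct factors for DN-generated ones leaves the self-supervised loss unchanged; the paper's claim is that the DN's implicit regularization makes the fit robust to non-Gaussian noise where a plain least-squares fit (or the SVD) degrades.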


Source: http://dx.doi.org/10.1109/TPAMI.2024.3450575


Similar Publications

A multimodal brain age estimation model could provide enhanced insights into brain aging. However, effectively integrating multimodal neuroimaging data to enhance the accuracy of brain age estimation remains a challenging task. In this study, we developed an innovative data fusion technique employing a low-rank tensor fusion algorithm, tailored specifically for deep learning-based frameworks aimed at brain age estimation.


Multi-way overlapping clustering by Bayesian tensor decomposition.

Stat Interface

February 2024

Department of Statistics, Texas A&M University, College Station TX 77843, USA.

The development of modern sequencing technologies provides great opportunities to measure gene expression of multiple tissues from different individuals. The three-way variation across genes, tissues, and individuals makes statistical inference a challenging task. In this paper, we propose a Bayesian multi-way clustering approach to cluster genes, tissues, and individuals simultaneously.


The multi-modal knowledge graph completion (MMKGC) task aims to automatically mine missing factual knowledge from existing multi-modal knowledge graphs (MMKGs), which is crucial for advancing cross-modal learning and reasoning. However, few methods consider the adverse effects caused by different missing modal information in the model learning process. To address the above challenges, we propose a Modal Equilibrium Relational Graph framework.


A Tensor Space for Multi-View and Multitask Learning Based on Einstein and Hadamard Products: A Case Study on Vehicle Traffic Surveillance Systems.

Sensors (Basel)

November 2024

Center for Research and Advanced Studies of the National Polytechnic Institute, Department of Electrical Engineering and Computer Sciences, Telecommunications Section, Av. del Bosque 1145, El Bajio, Zapopan 45019, Jalisco, Mexico.

Since multi-view learning leverages complementary information from multiple feature sets to improve model performance, a tensor-based data fusion layer for neural networks, called Multi-View Data Tensor Fusion (MV-DTF), is used. It fuses M feature spaces X1,⋯,XM, referred to as views, into a new latent tensor space S of order P and dimensions J1×⋯×JP, defined in the space of affine mappings composed of a multilinear map T:X1×⋯×XM→S, represented as the Einstein product between a (P+M)-order tensor A and a rank-one tensor X=x(1)⊗⋯⊗x(M) (where x(m)∈Xm is the m-th view), followed by a translation. Unfortunately, as the number of views increases, the number of parameters that determine the MV-DTF layer grows exponentially, and consequently, so does its computational complexity.
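The Einstein product described above is an ordinary tensor contraction: the last M modes of the weight tensor A are contracted against the M modes of the rank-one tensor built from the views. A small NumPy sketch with M = 2 views and P = 2 latent modes makes this explicit; all dimensions here are hypothetical, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two views (M = 2) and an order-2 latent tensor (P = 2); dims are illustrative.
I1, I2 = 4, 5          # view dimensions
J1, J2 = 3, 2          # latent tensor dimensions
x1 = rng.standard_normal(I1)
x2 = rng.standard_normal(I2)

# Rank-one tensor X = x1 ⊗ x2 (outer product of the views).
X = np.tensordot(x1, x2, axes=0)                 # shape (I1, I2)

# (P + M)-order weight tensor A; the Einstein product contracts
# A's last M modes against all modes of X.
A = rng.standard_normal((J1, J2, I1, I2))
S = np.tensordot(A, X, axes=([2, 3], [0, 1]))    # shape (J1, J2)

# Elementwise equivalent: S[j1,j2] = sum_{i1,i2} A[j1,j2,i1,i2] * x1[i1] * x2[i2]
S_check = np.einsum('pqab,a,b->pq', A, x1, x2)
```

The exponential parameter growth the snippet mentions is visible here: A has J1·J2·I1·I2 entries, and each additional view multiplies that count by its dimension.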


Single-cell multi-omics refers to the various types of biological data measured at the single-cell level. These data have provided insight into cellular phenotypes, biological processes, and developmental stages at high resolution. Current advances hold high potential for breakthroughs by integrating multiple different omics layers.

