Learning in compressed space.

Neural Netw

University of Bremen, Fachbereich 3 - Mathematik und Informatik, Postfach 330 440, 28334 Bremen, Germany.

Published: June 2013

We examine two methods that are used to deal with complex machine learning problems: compressed sensing and model compression. We discuss both methods in the context of feed-forward artificial neural networks and develop the backpropagation method in compressed parameter space. We further show that compressing the weights of a layer of a multilayer perceptron is equivalent to compressing the input of the layer. Based on this theoretical framework, we use orthogonal functions, and in particular random projections, for compression, and perform experiments in supervised and reinforcement learning to demonstrate that the presented methods reduce training time significantly.
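To make the idea concrete, below is a minimal sketch of backpropagation in compressed parameter space for a single linear layer. This is an illustration under assumed names and shapes, not the paper's implementation: the full weight vector w is generated from a small vector of compressed parameters alpha through a fixed random projection Phi, and gradients reach alpha via the chain rule.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_alpha = 64, 8              # full vs. compressed parameter counts
Phi = rng.standard_normal((n_in, n_alpha)) / np.sqrt(n_alpha)  # fixed random projection

alpha = np.zeros(n_alpha)          # compressed parameters (the only trained values)
X = rng.standard_normal((100, n_in))
y = X @ rng.standard_normal(n_in)  # synthetic linear regression target

lr = 0.1
for _ in range(500):
    w = Phi @ alpha                # full weights are generated, never trained directly
    err = X @ w - y                # residual of the linear layer
    grad_w = X.T @ err / len(X)    # d(MSE)/dw
    grad_alpha = Phi.T @ grad_w    # chain rule: d(MSE)/dalpha
    alpha -= lr * grad_alpha

# The equivalence stated in the abstract: (Phi @ alpha) . x == alpha . (Phi.T @ x),
# i.e. compressing the layer's weights equals compressing the layer's input.
x = X[0]
assert np.allclose((Phi @ alpha) @ x, alpha @ (Phi.T @ x))
```

With n_alpha much smaller than n_in, each gradient step updates 8 parameters instead of 64 per layer, which is consistent with the reported reduction in training time.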

Source: http://dx.doi.org/10.1016/j.neunet.2013.01.020

Publication Analysis

Top Keywords (each occurring 4 times): learning compressed, compressed space, space examine, examine methods, methods deal, deal complex, complex machine, machine learning, learning problems, problems compressed.

Similar Publications

Compressed ultrafast photography (CUP) is a high-speed imaging technique with a frame rate of up to ten trillion frames per second (fps) and a sequence depth of hundreds of frames. This technique is a powerful tool for investigating ultrafast processes. However, because the reconstruction is an ill-posed inverse problem, it becomes harder as the number of reconstructed frames and the number of pixels per frame increase.
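For orientation, CUP-style reconstruction is commonly posed as a regularized least-squares problem; the notation below is a generic formulation, not necessarily the one used in this article:

```latex
\hat{I} = \arg\min_{I} \; \tfrac{1}{2}\,\lVert E - \mathbf{T}\mathbf{S}\mathbf{C}\,I \rVert_2^2 + \lambda\, R(I)
```

Here E is the single measured snapshot, C a pseudo-random encoding mask, S the temporal shearing of the streak camera, T the integration onto the sensor, and R a sparsity-promoting regularizer such as total variation. Since TSC maps an entire frame sequence onto one 2D measurement, adding frames or pixels enlarges the null space and makes the inversion increasingly underdetermined.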

AI-Assisted Compressed Sensing Enables Faster Brain MRI for the Elderly: Image Quality and Diagnostic Equivalence with Conventional Imaging.

Int J Gen Med

January 2025

School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, People's Republic of China.

Purpose: Conventional brain MRI protocols are time-consuming, which can lead to patient discomfort and inefficiency in clinical settings. This study aims to assess the feasibility of using artificial intelligence-assisted compressed sensing (ACS) to reduce brain MRI scan time while maintaining image quality and diagnostic accuracy compared to a conventional imaging protocol.

Patients And Methods: Seventy patients from the department of neurology underwent brain MRI scans using both conventional and ACS protocols, including axial and sagittal T2-weighted fast spin-echo sequences and T2-fluid attenuated inversion recovery (FLAIR) sequence.

Contrastive learning with transformer for adverse endpoint prediction in patients on DAPT post-coronary stent implantation.

Front Cardiovasc Med

January 2025

Department of Artificial Intelligence and Informatics, Mayo Clinic, Jacksonville, FL, United States.

Background: Effective management of dual antiplatelet therapy (DAPT) following drug-eluting stent (DES) implantation is crucial for preventing adverse events. Traditional prognostic tools, such as rule-based methods or Cox regression, are widely used and simple to apply but tend to yield only moderate predictive accuracy within predetermined timeframes. This study introduces a new contrastive learning-based approach to improve prediction across multiple time intervals.
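As background on the technique (an illustrative, generic sketch; not this study's model, and all names here are invented for the example), a contrastive objective such as InfoNCE pulls each embedding toward its matching pair and pushes it away from the rest of the batch:

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Generic InfoNCE: each anchor should match its own positive
    and repel every other sample in the batch."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # diagonal = matching pairs

rng = np.random.default_rng(1)
emb = rng.standard_normal((16, 32))
print(info_nce_loss(emb, emb + 0.01 * rng.standard_normal((16, 32))))
```

In a clinical setting like this one, the paired views might come from two augmentations or modalities of the same patient record, with a transformer encoder supplying the embeddings.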

Spectral analysis is a widely used method for monitoring photosynthetic capacity. However, linear regression on vegetation indices makes insufficient use of the spectral information, while traditional machine learning on full spectra either has limited representational capacity (partial least squares regression) or is uninterpretable (convolutional models). In this study, we proposed a deep learning model with enhanced interpretability, based on attention and vegetation-index calculation, that mines global spectral features to accurately estimate photosynthetic capacity.
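To illustrate the general mechanism (a minimal sketch under assumed shapes; the function and weights below are invented for the example and are not the study's architecture), attention over wavelength bands produces per-band weights that both pool the spectrum into a feature and serve as an interpretability map:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def band_attention(spectra, w_score):
    """Score each wavelength band, softmax the scores into weights,
    and pool the spectrum into a weighted feature. The weights show
    which bands the model attends to."""
    scores = spectra * w_score            # (n_samples, n_bands)
    weights = softmax(scores, axis=1)     # attention over bands
    return (weights * spectra).sum(axis=1), weights

rng = np.random.default_rng(2)
spectra = rng.random((5, 200))            # 5 samples, 200 spectral bands
w = rng.standard_normal(200)              # illustrative; would be learned
features, weights = band_attention(spectra, w)
print(features.shape, weights.sum(axis=1))  # (5,), each row sums to 1
```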

Deep learning sequence models trained on personalized genomics can improve variant effect prediction; however, applications of these models are limited by the computational requirements for storing and reading large datasets. We address this with GenVarLoader, which stores personalized genomic data in new memory-mapped formats with optimal data locality, achieving ∼1,000x faster throughput and ∼2,000x better compression than existing alternatives.
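The throughput gain rests largely on memory mapping and data locality; here is a minimal illustration of that general technique with numpy (file name, shapes, and dtype are invented for the example, and this is not GenVarLoader's actual on-disk format):

```python
import numpy as np

# Write a large array of toy genotype data to disk once...
genotypes = np.random.randint(0, 3, size=(10_000, 5_000), dtype=np.int8)
genotypes.tofile("genotypes.bin")

# ...then memory-map it: the OS pages in only the slices actually read,
# so random access to one sample avoids loading the whole matrix.
mm = np.memmap("genotypes.bin", dtype=np.int8, mode="r",
               shape=(10_000, 5_000))
sample_42 = np.asarray(mm[42])   # reads ~5 KB, not the full ~50 MB file
print(sample_42.shape)
```

Storing each sample's genotypes contiguously makes per-sample reads sequential on disk, which is the kind of data locality the abstract refers to.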
