Dynamic Perturbation of Weights for Improved Data Reconstruction in Unsupervised Learning.

Proc Int Jt Conf Neural Netw

Dept. of Electrical and Computer Engineering, Old Dominion University, Norfolk, VA, USA.

Published: July 2021

The concept of weight pruning has shown success in neural network model compression with marginal loss in classification performance. However, similar concepts have not been widely explored for improving unsupervised learning. To the best of our knowledge, this paper presents one of the first studies of weight pruning in unsupervised autoencoder models on non-imaging data. We adapt the weight pruning concept to investigate the dynamic behavior of weights while reconstructing data with an autoencoder, and we propose a deterministic model perturbation algorithm based on weight statistics. The model perturbation resets a percentage of weight values at periodic intervals using a binary weight mask. Experiments across eight non-imaging data sets, ranging from gene sequence to swarm behavior data, show that only a few periodic perturbations of weights improve the data reconstruction accuracy of autoencoders and additionally introduce model compression. All data sets yield a small portion (<5%) of weights that are substantially higher than the mean weight value. These weights are found to be much more informative than the substantial portion (>90%) of weights with negative values. In general, perturbing low or negative weight values at periodic intervals reduced the data reconstruction loss for most data sets compared to the case without perturbation. The proposed approach may help explain and correct the dynamic behavior of neural network models in a deterministic way, for data reconstruction and for obtaining a more accurate representation of latent variables with autoencoders.
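
The periodic, mask-based weight reset described in the abstract can be sketched in a few lines. The following is a minimal NumPy illustration rather than the authors' implementation: the 10% reset fraction, the 25-epoch interval, and the single encoder weight matrix are illustrative assumptions, and the actual training step is elided.

    import numpy as np

    rng = np.random.default_rng(0)

    def perturb_weights(W, reset_fraction=0.10):
        """Reset the lowest-valued weights to zero using a binary mask."""
        flat = W.ravel()
        k = int(reset_fraction * flat.size)           # number of weights to reset
        if k == 0:
            return W, np.ones_like(W)
        threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest weight value
        mask = (W > threshold).astype(W.dtype)        # 1 keeps a weight, 0 resets it
        return W * mask, mask

    # Toy usage: a hypothetical encoder weight matrix, perturbed every 25 epochs.
    W_enc = rng.normal(scale=0.1, size=(64, 16))
    for epoch in range(1, 101):
        # ... forward pass, reconstruction loss, and gradient update would go here ...
        if epoch % 25 == 0:
            W_enc, mask = perturb_weights(W_enc, reset_fraction=0.10)
            # Zeroed entries also leave the weight matrix sparser (model compression).

Because the mask is derived from the weight statistics themselves, the reset is deterministic for a given model state, consistent with the deterministic perturbation described above.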

Source:
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9493331
DOI: http://dx.doi.org/10.1109/ijcnn52387.2021.9533539

Publication Analysis

Top Keywords

data reconstruction (16)
weight pruning (12)
data sets (12)
data (10)
improved data (8)
unsupervised learning (8)
neural network (8)
model compression (8)
non-imaging data (8)
dynamic behavior (8)
