Neural Comput
Okinawa Institute of Science and Technology, Okinawa, Japan 904-0495
Published: November 2019
This study introduces PV-RNN, a novel variational RNN inspired by predictive-coding ideas. The model learns to extract the probabilistic structures hidden in fluctuating temporal patterns by dynamically changing the stochasticity of its latent states. Its architecture addresses two major concerns of variational Bayes RNNs: how latent variables can learn meaningful representations and how the inference model can transfer future observations to the latent variables. PV-RNN does both by introducing adaptive vectors that mirror the training data and whose values can be adapted anew during evaluation. Moreover, information about the external data is conveyed to the network through prediction errors during backpropagation rather than through external inputs during the forward computation. For testing, we introduce error regression, a predictive-coding-inspired procedure that leverages these mechanisms to predict unseen sequences. As in other variational Bayes RNNs, our model learns by maximizing a lower bound on the marginal likelihood of the sequential data, composed of two terms: the negative expectation of the prediction errors and the negative Kullback-Leibler divergence between the prior and the approximate posterior distributions. The model introduces a weighting parameter, the meta-prior, to balance the optimization pressure placed on these two terms, as written out below. We test the model on two data sets with probabilistic structure and show that with high values of the meta-prior the network develops deterministic chaos, through which it imitates the randomness of the data, whereas with low values it behaves as a random process. The network performs best at intermediate values and captures the latent probabilistic structure with good generalization. Analyzing the meta-prior's impact on the network lets us study precisely the theoretical value and practical benefits of incorporating stochastic dynamics into the model. On a robot imitation task, our model with error regression achieves better prediction performance than a standard variational Bayes model lacking such a procedure.
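For reference, the weighted lower bound described above is commonly written as follows; the notation (w for the meta-prior, x for observations, z for latent variables over a sequence of length T) is illustrative and not copied verbatim from the paper:

```latex
\mathcal{L}_{w}(\theta,\phi)
  = \underbrace{\mathbb{E}_{q_{\phi}(z_{1:T}\mid x_{1:T})}
      \left[\log p_{\theta}(x_{1:T}\mid z_{1:T})\right]}_{\text{negative expected prediction error}}
  \;-\; w\,
    \underbrace{D_{\mathrm{KL}}\!\left(q_{\phi}(z_{1:T}\mid x_{1:T})
      \,\middle\|\, p_{\theta}(z_{1:T})\right)}_{\text{prior/posterior divergence}}
```

A large w puts strong pressure on the KL term, pushing the posterior toward the prior so that variability must be produced by the deterministic dynamics (the deterministic-chaos regime the abstract describes); a small w lets the latent variables absorb the data's randomness (the random-process regime).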
DOI: http://dx.doi.org/10.1162/neco_a_01228
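The error-regression procedure mentioned in the abstract above can be pictured with the following minimal sketch. It is an illustrative reconstruction, not the authors' code: a toy RNN's weights are frozen, and only the adaptive posterior parameters (mu, logvar) for a past window are optimized against the prediction error plus a KL penalty; all names are assumptions.

```python
import torch

# Toy generative model: z_t -> h_t -> x_t, with weights frozen after training.
torch.manual_seed(0)
rnn = torch.nn.RNNCell(input_size=2, hidden_size=16)
decoder = torch.nn.Linear(16, 1)
for p in list(rnn.parameters()) + list(decoder.parameters()):
    p.requires_grad_(False)  # weights stay fixed during error regression

x_obs = torch.randn(20)                           # observed window of 20 frames
mu = torch.zeros(20, 2, requires_grad=True)       # adaptive posterior means
logvar = torch.zeros(20, 2, requires_grad=True)   # adaptive posterior log-variances
w = 0.1                                           # meta-prior-like weight on the KL term
opt = torch.optim.Adam([mu, logvar], lr=0.05)

for step in range(200):
    opt.zero_grad()
    h = torch.zeros(1, 16)
    err = torch.tensor(0.0)
    kl = torch.tensor(0.0)
    for t in range(20):
        # Reparameterized sample from the adaptive posterior.
        z = mu[t] + torch.exp(0.5 * logvar[t]) * torch.randn(2)
        h = rnn(z.unsqueeze(0), h)
        err = err + (decoder(h).squeeze() - x_obs[t]) ** 2   # prediction error
        # KL against a standard normal prior (a stand-in for the learned prior).
        kl = kl + 0.5 * (mu[t].pow(2) + logvar[t].exp() - logvar[t] - 1).sum()
    loss = err + w * kl
    loss.backward()  # errors flow back only into mu and logvar
    opt.step()
```

After adaptation, freezing mu and logvar and rolling the RNN forward from the final hidden state yields the model's prediction for unseen future frames, which is the sense in which prediction errors, not forward inputs, carry information about the data.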
Annu Int Conf IEEE Eng Med Biol Soc
July 2024
Stethoscope screening is a primary method for diagnosing pulmonary infections: clinicians listen for signs of pathology in breathing sounds, such as wheezing and crackling, each carrying a different clinical interpretation. Ambient noise during auscultation recordings often resembles these abnormal lung sounds and can mask or confound them, making their detection highly sensitive to the recording environment. Automating this process therefore calls for an anomaly detection scheme that is both robust to ambient backgrounds and highly precise.
Annu Int Conf IEEE Eng Med Biol Soc
July 2024
Uncertainty quantification is crucial in modeling critical care systems, where external factors such as clinical disturbances significantly impact decision-making. This study employs Bayesian variational autoencoders (BVAEs) to quantify inherent randomness in clinical data (aleatoric uncertainty) and detect uncertainty in the biases and weights of the neural network model (epistemic uncertainty). Focusing on fluid therapy, the proposed BVAE models aim to detect hemorrhage incidents through out-of-distribution (OoD) data detection.
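The aleatoric/epistemic split described above can be illustrated with a standard Monte Carlo decomposition of the predictive variance. This sketch uses MC dropout as a cheap stand-in for the full Bayesian weight posterior of a BVAE, so every name in it is an assumption rather than the authors' implementation:

```python
import torch

class HeteroscedasticNet(torch.nn.Module):
    """Predicts a mean and a log-variance per input; dropout stays active at
    test time so each forward pass samples from an approximate weight posterior."""
    def __init__(self):
        super().__init__()
        self.body = torch.nn.Sequential(
            torch.nn.Linear(4, 32), torch.nn.ReLU(), torch.nn.Dropout(p=0.2))
        self.mean_head = torch.nn.Linear(32, 1)
        self.logvar_head = torch.nn.Linear(32, 1)

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

net = HeteroscedasticNet()
net.train()                          # keep dropout on: each pass samples new weights

x = torch.randn(8, 4)                # a batch of hypothetical vital-sign features
means, varis = [], []
for _ in range(50):                  # 50 stochastic forward passes
    m, lv = net(x)
    means.append(m)
    varis.append(lv.exp())

means = torch.stack(means)               # shape (50, 8, 1)
aleatoric = torch.stack(varis).mean(0)   # average predicted noise variance
epistemic = means.var(0)                 # spread of the predicted means
# A large total variance relative to what was seen in training flags a
# candidate OoD sample, e.g. a hemorrhage-like deviation during fluid therapy.
```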
Entropy (Basel)
February 2025
School of Mathematics, University of Bristol, Fry Building, Woodland Road, Bristol BS8 1UG, UK.
We define a Bayesian neural network that evolves in time, called a Hidden Markov Neural Network, which addresses a crucial challenge in time-series forecasting and continual learning: balancing adaptation to new data against appropriate forgetting of outdated information. This is achieved by modelling the weights of a neural network as the hidden states of a Hidden Markov model, with the observed process defined by the available data. A filtering algorithm is employed to learn a variational approximation of the posterior distribution over the weights as it evolves in time; a minimal sketch of the filtering idea follows.
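One minimal way to picture this mechanism: treat the weight vector of a (here, linear) model as the hidden state of a linear-Gaussian HMM and alternate a predict step (inject transition noise, which implements forgetting) with a Bayesian update step on each new observation. This Kalman-style sketch illustrates the general filtering idea only, not the paper's variational algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3                                  # weight dimension
mu = np.zeros(d)                       # posterior mean over weights
Sigma = np.eye(d)                      # posterior covariance over weights
Q = 0.01 * np.eye(d)                   # transition noise: how fast old data is forgotten
sigma2 = 0.1                           # observation noise variance

true_w = np.array([1.0, -2.0, 0.5])
for t in range(100):
    # Predict step: weights drift, w_t = w_{t-1} + noise.
    Sigma = Sigma + Q
    # Observe one data point.
    x = rng.normal(size=d)
    y = x @ true_w + rng.normal(scale=np.sqrt(sigma2))
    # Update step: conjugate linear-Gaussian update in Kalman-gain form.
    S = x @ Sigma @ x + sigma2         # predictive variance of y
    K = Sigma @ x / S                  # gain
    mu = mu + K * (y - x @ mu)
    Sigma = Sigma - np.outer(K, x @ Sigma)

print("filtered weight estimate:", mu)
```

The transition noise Q plays the forgetting role: with Q = 0 the filter reduces to ordinary sequential Bayesian regression that never discards old data, while larger Q keeps the posterior responsive to drift.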
Cell Rep Methods
February 2025
Department of Biostatistics, University of Michigan, Ann Arbor, MI, USA.
Probabilistic graphical models are powerful tools to quantify, visualize, and interpret network dependencies in complex biological systems such as high-throughput omics. However, many graphical models assume sample homogeneity, which limits their effectiveness. We propose a flexible Bayesian approach called graphical regression (GraphR), which (1) incorporates sample heterogeneity at different scales through a regression-based formulation (caricatured in the sketch below), (2) enables sparse sample-specific network estimation, (3) identifies and quantifies potential effects of heterogeneity on network structures, and (4) achieves computational efficiency via variational Bayes algorithms.
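The regression-based formulation can be caricatured in a few lines: let each edge weight depend linearly on a sample-level covariate, so every sample gets its own network. This is a loose illustration of covariate-dependent graphs, not the GraphR model or its variational Bayes implementation; all names here are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 4
u = rng.uniform(size=n)                      # sample-level covariate (e.g. severity score)
X = rng.normal(size=(n, p))
# Ground truth: the dependence of gene 0 on gene 1 strengthens with u.
X[:, 0] += (0.5 + 1.5 * u) * X[:, 1]

# Node-wise regression of gene 0 on the other genes AND their interactions
# with u; the interaction coefficients capture how edges vary across samples.
Z = np.column_stack([X[:, 1:], X[:, 1:] * u[:, None]])
beta, *_ = np.linalg.lstsq(Z, X[:, 0], rcond=None)
main, interact = beta[:p - 1], beta[p - 1:]

# Sample-specific edge strength between genes 0 and 1 at covariate value u0:
edge_01 = lambda u0: main[0] + interact[0] * u0
print(edge_01(0.0), edge_01(1.0))            # roughly 0.5 vs 2.0
```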
Sensors (Basel)
January 2025
The Faculty of Engineering in Foreign Languages, National University of Science and Technology Politehnica Bucharest, 060042 Bucharest, Romania.
In geriatric healthcare, missing data pose significant challenges, especially in systems that monitor frailty in elderly individuals. This study explores advanced imputation techniques for enhancing data quality and maintaining model performance in a system designed to deliver frailty insights. We introduce the three standard missing-data mechanisms (Missing Completely at Random, MCAR; Missing at Random, MAR; and Missing Not at Random, MNAR) into a dataset collected from smart bracelets, simulating real-world conditions; the sketch below shows one way such mechanisms are generated.
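A minimal simulation of the three mechanisms, assuming a toy two-column dataset (heart rate and step count) standing in for the smart-bracelet features; column names and rates are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000
hr = rng.normal(75, 10, n)         # heart rate
steps = rng.normal(5000, 1500, n)  # daily step count

# MCAR: every value has the same 10% chance of being dropped,
# independent of anything in the data.
mcar_mask = rng.uniform(size=n) < 0.10

# MAR: missingness in `steps` depends only on an OBSERVED variable (heart rate);
# higher heart rate -> higher chance the step count is missing.
p_mar = 1 / (1 + np.exp(-(hr - 85) / 5))
mar_mask = rng.uniform(size=n) < p_mar

# MNAR: missingness in `steps` depends on the UNOBSERVED value itself;
# very low step counts are more likely to go unrecorded.
p_mnar = 1 / (1 + np.exp((steps - 3000) / 500))
mnar_mask = rng.uniform(size=n) < p_mnar

steps_mcar = np.where(mcar_mask, np.nan, steps)
steps_mar = np.where(mar_mask, np.nan, steps)
steps_mnar = np.where(mnar_mask, np.nan, steps)
```

The distinction matters for imputation: MCAR can be handled by almost any method without bias, MAR requires conditioning on the observed variables that drive the missingness, and MNAR generally needs an explicit model of the missingness process itself.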