A Monte Carlo EM approach for partially observable diffusion processes: theory and applications to neural networks.

Neural Comput

Machine Perception Laboratory, Institute for Neural Computation, University of California, San Diego, La Jolla, CA 92093, USA.

Published: July 2002

We present a Monte Carlo approach for training partially observable diffusion processes. We apply the approach to diffusion networks, a stochastic version of continuous recurrent neural networks. The approach is aimed at learning probability distributions of continuous paths, not just expected values. Interestingly, the relevant activation statistics used by the learning rule presented here are inner products in the Hilbert space of square integrable functions. These inner products can be computed using Hebbian operations and do not require backpropagation of error signals. Moreover, standard kernel methods could potentially be applied to compute such inner products. We propose that the main reason that recurrent neural networks have not worked well in engineering applications (e.g., speech recognition) is that they implicitly rely on a very simplistic likelihood model. The diffusion network approach proposed here is much richer and may open new avenues for applications of recurrent neural networks. We present some analysis and simulations to support this view. Very encouraging results were obtained on a visual speech recognition task in which neural networks outperformed hidden Markov models.
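To make the abstract's objects concrete, the sketch below simulates a small diffusion network (a continuous recurrent network driven by additive Brownian noise) with an Euler-Maruyama scheme and then computes the kind of Hebbian activation statistics the abstract refers to: inner products of activation trajectories in the Hilbert space of square integrable functions on [0, T]. This is an illustrative assumption-laden sketch, not the paper's Monte Carlo EM learning rule; the drift form, the tanh activation, and all parameter names and values (tau, sigma, W, dt) are choices made here for illustration.

```python
import numpy as np

# Illustrative sketch only: a diffusion network simulated by Euler-Maruyama,
# followed by L2 inner products of activation paths (Hebbian pair statistics).
# The drift (1/tau)(-x + W g(x)) mirrors a standard continuous recurrent net;
# the paper's actual training procedure (Monte Carlo EM) is not implemented here.

def simulate_diffusion_network(W, x0, T=1.0, dt=1e-3, tau=1.0, sigma=0.5, rng=None):
    """Simulate dX = (1/tau)(-X + W g(X)) dt + sigma dB with g = tanh."""
    rng = np.random.default_rng() if rng is None else rng
    n_steps = int(T / dt)
    x = np.array(x0, dtype=float)
    path = np.empty((n_steps + 1, x.size))
    path[0] = x
    g = np.tanh  # smooth activation function (assumed)
    for k in range(n_steps):
        drift = (-x + W @ g(x)) / tau
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(x.size)
        path[k + 1] = x
    return path

def hebbian_path_statistics(path, dt=1e-3):
    """Approximate <g(x_i), g(x_j)> = integral over [0,T] of g(x_i(t)) g(x_j(t)) dt.

    These are purely Hebbian quantities: time-integrated products of pairs of
    unit activations, computable without backpropagating error signals.
    """
    act = np.tanh(path)          # activation trajectories, shape (steps, units)
    return dt * act.T @ act      # (units x units) matrix of L2 inner products

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_units = 4
    W = 0.5 * rng.standard_normal((n_units, n_units))
    path = simulate_diffusion_network(W, x0=np.zeros(n_units), rng=rng)
    print(hebbian_path_statistics(path).round(3))
```

Because these statistics are inner products of functions of the sampled paths, they could in principle also be evaluated with standard kernel machinery, as the abstract notes.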


Source: http://dx.doi.org/10.1162/08997660260028593
