We develop a general framework for state estimation in systems modeled with noise-polluted continuous-time dynamics and noisy discrete-time measurements. Our approach is based on maximum likelihood estimation and employs the calculus of variations to derive optimality conditions for continuous-time functions. We make no prior assumptions on the form of the mapping from measurements to state estimate or on the distributions of the noise terms, making the framework more general than Kalman filtering/smoothing, where this mapping is assumed to be linear and the noise terms Gaussian.
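As a rough illustration of the kind of problem described above (a sketch under our own assumptions, not taken from the article; f, h, p_w, and p_v are placeholder symbols), the maximum-likelihood state estimate can be posed as a variational problem over the whole trajectory:

\dot{x}(t) = f(x(t)) + w(t), \qquad y_k = h(x(t_k)) + v_k, \quad k = 1, \dots, N

\hat{x}(\cdot) = \arg\min_{x(\cdot)} \int_{t_0}^{t_N} -\log p_w\big(\dot{x}(t) - f(x(t))\big)\, dt \; + \; \sum_{k=1}^{N} -\log p_v\big(y_k - h(x(t_k))\big)

where p_w and p_v are the process- and measurement-noise densities. Setting the first variation of this functional with respect to x(·) to zero is the kind of condition the calculus of variations supplies; with linear f and h and Gaussian p_w and p_v, the objective becomes quadratic and its minimizer coincides with the Kalman smoother estimate.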
IEEE Trans Neural Netw Learn Syst
May 2022
Weight pruning methods for deep neural networks (DNNs) have been shown to achieve high pruning rates without loss of accuracy, thereby alleviating the significant computation/storage requirements of large-scale DNNs. Structured weight pruning methods have been proposed to overcome the limitations of irregular network structure and have demonstrated actual GPU acceleration. However, in prior work, both the pruning rate (degree of sparsity) and the GPU acceleration are limited (to less than 50%) when accuracy needs to be maintained.
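As a minimal sketch of what structured (as opposed to irregular, element-wise) pruning means, the NumPy snippet below zeroes out entire rows of a weight matrix by L2 norm; the function name, the row-wise granularity, and the 50% target are illustrative assumptions, not taken from the article.

import numpy as np

def prune_rows(weight, sparsity=0.5):
    # Structured pruning: zero out the rows of `weight` with the smallest
    # L2 norms until the requested fraction of rows is removed. Zeroing
    # whole rows keeps the sparsity pattern regular, which is what makes
    # structured pruning amenable to GPU acceleration.
    row_norms = np.linalg.norm(weight, axis=1)
    n_prune = int(round(sparsity * weight.shape[0]))
    pruned = weight.copy()
    if n_prune == 0:
        return pruned
    prune_idx = np.argsort(row_norms)[:n_prune]  # rows with the smallest norms
    pruned[prune_idx, :] = 0.0
    return pruned

# Example: prune half of the rows of a random 8x4 weight matrix.
w = np.random.randn(8, 4)
w_pruned = prune_rows(w, sparsity=0.5)
print((np.abs(w_pruned).sum(axis=1) == 0).mean())  # fraction of zeroed rows

In contrast, irregular element-wise pruning zeroes individual entries chosen by magnitude, producing a sparsity pattern that standard dense GPU kernels cannot exploit as easily.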