Recent advances in digital signage technology have improved the ability to visually highlight specific items within a group. While this capability stems from dynamically updating the displayed items, suitable layout schemes remain an open research question. This paper explores item layouts that respect the underlying context of a user searching for favorite items.
IEEE Trans Neural Netw Learn Syst, May 2015
Diffuse optical tomography (DOT) reconstructs 3-D tomographic images of brain activity from near-infrared spectroscopy (NIRS) observations, a reconstruction formulated as an ill-posed inverse problem. This brief presents a method for NIRS DOT based on a hierarchical Bayesian approach that introduces an automatic relevance determination (ARD) prior and the variational Bayes technique. Although the sparseness of such estimates generally depends strongly on the hyperparameters, our method is less sensitive to them.
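The abstract gives no implementation details, but the following is a minimal sketch of the core idea behind an ARD prior on a generic linear inverse problem y = A x + noise, with per-coefficient precisions re-estimated by the classic MacKay/Tipping-style fixed-point update; the function name, problem sizes, and hyperparameter values are illustrative assumptions, not from the paper.

```python
import numpy as np

def ard_linear_inverse(A, y, noise_var=1e-2, n_iter=50):
    """Sparse solution of y = A x + noise via an ARD prior.

    Each coefficient x_i gets its own Gaussian prior N(0, 1/alpha_i);
    the precisions alpha_i are re-estimated by a fixed-point update,
    so irrelevant coefficients are driven toward zero automatically.
    """
    n, d = A.shape
    alpha = np.ones(d)                      # per-coefficient prior precisions
    for _ in range(n_iter):
        # Posterior over x given current precisions (Gaussian).
        S_inv = A.T @ A / noise_var + np.diag(alpha)
        S = np.linalg.inv(S_inv)            # posterior covariance
        mu = S @ A.T @ y / noise_var        # posterior mean
        # Fixed-point ARD update: alpha_i = gamma_i / mu_i^2, where
        # gamma_i = 1 - alpha_i * S_ii measures how "used" x_i is.
        gamma = 1.0 - alpha * np.diag(S)
        alpha = np.clip(gamma / (mu ** 2 + 1e-12), 1e-6, 1e12)
    return mu, alpha

# Toy demo: a wide (underdetermined) system with a 3-sparse ground truth.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100)
x_true[[5, 42, 77]] = [1.5, -2.0, 1.0]
y = A @ x_true + 0.1 * rng.normal(size=40)
mu, alpha = ard_linear_inverse(A, y)
print("largest recovered coefficients:", np.argsort(-np.abs(mu))[:3])
```

The fixed-point update is what gives ARD its hyperparameter-free flavor: precisions for uninformative coefficients grow without bound, pruning them from the solution.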
IEEE Trans Neural Netw Learn Syst, September 2015
Network data represent relationships among objects of a single kind, such as ties in a social network or hyperlinks on the Web. Many statistical models have been proposed for analyzing such data. For modeling the cluster structure of networks, the infinite relational model (IRM) was proposed as a Bayesian nonparametric extension of the stochastic block model.
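As context for the model class the abstract names, here is a minimal sketch of the IRM's generative process for a binary adjacency matrix: cluster assignments are drawn from a Chinese restaurant process (so the number of blocks is unbounded), and edges are Bernoulli with block-pairwise rates. The names and hyperparameter values are illustrative, not from the paper.

```python
import numpy as np

def sample_irm(n_nodes, alpha=1.0, a=1.0, b=1.0, seed=0):
    """Generative sketch of the infinite relational model (IRM).

    Cluster assignments come from a Chinese restaurant process (CRP),
    so the number of blocks is not fixed in advance; each pair of
    blocks (k, l) has a Beta-distributed edge probability eta[k, l].
    """
    rng = np.random.default_rng(seed)
    z = [0]                                  # first node starts cluster 0
    for _ in range(1, n_nodes):              # CRP assignments
        counts = np.bincount(z)
        probs = np.append(counts, alpha).astype(float)
        probs /= probs.sum()                 # last slot = open a new cluster
        z.append(rng.choice(len(probs), p=probs))
    z = np.array(z)
    K = z.max() + 1
    eta = rng.beta(a, b, size=(K, K))        # block-pairwise edge rates
    adj = rng.random((n_nodes, n_nodes)) < eta[z[:, None], z[None, :]]
    return adj.astype(int), z

adj, z = sample_irm(30)
print("number of clusters drawn by the CRP:", z.max() + 1)
```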
The local variational method is a technique for approximating an intractable posterior distribution in Bayesian learning. This article formulates a general framework for local variational approximation and shows that its objective function decomposes into the sum of the Kullback information and the expected Bregman divergence from the approximating posterior distribution to the Bayesian posterior distribution. Based on a geometrical argument in the space of approximating posteriors, we propose an efficient method to evaluate an upper bound of the marginal likelihood.
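In symbols, and only as a schematic restatement of the claim above (the precise definitions and the arguments of the Bregman term are in the article), the decomposition has the shape:

```latex
% Schematic form of the stated decomposition: the local variational
% objective F(q) splits into a KL term plus an expected Bregman term.
% D_f is the Bregman divergence generated by a convex function f;
% q(w) is the approximating posterior, p(w | D) the Bayesian posterior.
\[
  \mathcal{F}(q) \;=\;
    \underbrace{\mathrm{KL}\bigl(q(w)\,\|\,p(w \mid D)\bigr)}_{\text{Kullback information}}
    \;+\;
    \underbrace{\mathbb{E}_{q(w)}\bigl[D_f(\,\cdot\,)\bigr]}_{\text{expected Bregman divergence}},
  \qquad
  D_f(u, v) = f(u) - f(v) - \nabla f(v)^{\top}(u - v).
\]
```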
IEEE Trans Neural Netw, November 2009
Exponential principal component analysis (e-PCA) has been proposed to reduce the dimensionality of the parameters of probability distributions, using the Kullback information as the distance between two distributions. It also provides a framework for handling data types, such as binary and integer data, for which a Gaussian assumption on the data distribution is inappropriate. In this paper, we introduce a latent variable model for e-PCA.
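The paper's latent variable formulation is not reproduced here, but the following is a minimal sketch of the basic e-PCA idea for binary data: factorize the matrix of Bernoulli natural parameters into a low-rank product and fit it by gradient descent on the negative log-likelihood. All names, step sizes, and sizes are illustrative assumptions.

```python
import numpy as np

def bernoulli_epca(X, rank=2, lr=0.05, n_iter=500, seed=0):
    """e-PCA sketch for binary data (a.k.a. logistic PCA).

    Instead of applying Gaussian PCA to X directly, factorize the matrix
    of Bernoulli natural parameters Theta ~ U @ V.T and minimize the
    negative log-likelihood  -sum[X*Theta - log(1 + exp(Theta))]
    by plain gradient descent.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    U = 0.01 * rng.normal(size=(n, rank))
    V = 0.01 * rng.normal(size=(d, rank))
    for _ in range(n_iter):
        Theta = U @ V.T
        P = 1.0 / (1.0 + np.exp(-Theta))   # model means, sigmoid(Theta)
        G = P - X                          # gradient of the NLL w.r.t. Theta
        U, V = U - lr * G @ V, V - lr * G.T @ U
    return U, V

# Toy demo: binary data with a planted 2-dimensional structure.
rng = np.random.default_rng(1)
true_theta = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 20))
X = (rng.random((100, 20)) < 1 / (1 + np.exp(-true_theta))).astype(float)
U, V = bernoulli_epca(X)
print("low-dimensional representation shape:", U.shape)
```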
In this paper, we focus on variational Bayesian learning of general mixture models. Variational Bayesian learning was proposed as an approximation of Bayesian learning. While it has provided computational tractability and good generalization in many applications, little has been done to investigate its theoretical properties.
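For a concrete handle on the setting, variational Bayesian learning of a Gaussian mixture is available off the shelf; the snippet below uses scikit-learn's BayesianGaussianMixture as a generic illustration of VB mixture learning, not as the analysis in the paper.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Two well-separated Gaussian clusters in 2-D.
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(loc=(-2.0, 0.0), scale=0.5, size=(200, 2)),
    rng.normal(loc=(+2.0, 0.0), scale=0.5, size=(200, 2)),
])

# Variational Bayes fits a posterior over the mixture parameters; with a
# deliberately generous n_components, the weights of unused components
# are pushed toward zero instead of overfitting.
vb = BayesianGaussianMixture(n_components=10, max_iter=500, random_state=0)
vb.fit(X)
print("effective components:", np.sum(vb.weights_ > 0.01))
```

This pruning behavior is one of the practical payoffs of the variational Bayesian treatment that the paper studies theoretically.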