Annu Int Conf IEEE Eng Med Biol Soc
November 2021
This paper proposes a new generative probabilistic model for phonocardiograms (PCGs) that can simultaneously capture oscillatory factors and state transitions in cardiac cycles. Conventionally, PCGs have been modeled from two main perspectives. One is a state-space model that represents the recurrent, frequently appearing state transitions.
IEEE Trans Neural Netw Learn Syst
September 2023
Gaussian process regression (GPR) is a fundamental model used in machine learning (ML). Due to its accurate prediction with uncertainty and its versatility in handling various data structures via kernels, GPR has been successfully used in various applications. However, GPR offers no way to interpret how the features of an input contribute to its prediction.
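The "accurate prediction with uncertainty" that GPR provides can be illustrated with the standard posterior equations. The sketch below is a minimal, self-contained GP regression with a squared-exponential kernel; the function names, the fixed lengthscale, and the noise level are illustrative assumptions, not the paper's method:

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    # Squared-exponential kernel k(a, b) = exp(-(a - b)^2 / (2 l^2)),
    # for 1-D inputs (illustrative choice of kernel and lengthscale).
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gpr_predict(X, y, Xs, noise=1e-2):
    # Standard GP posterior: mean and variance at test inputs Xs,
    # given noisy observations y at training inputs X.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(Xs, X)
    Kss = rbf_kernel(Xs, Xs)
    mean = Ks @ np.linalg.solve(K, y)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# Toy data: the posterior mean interpolates the observations, and the
# posterior variance grows away from the training inputs.
X = np.array([0.0, 1.0, 2.0])
y = np.sin(X)
mean, var = gpr_predict(X, y, np.array([0.0, 0.5]))
```

Note that the prediction is a weighted combination of kernel evaluations against all training points, which is exactly why per-feature contributions are hard to read off, as the abstract points out.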
Langevin dynamics (LD) has been extensively studied, both theoretically and practically, as a basic sampling technique. Recently, the incorporation of non-reversible dynamics into LD has been attracting attention because it accelerates the mixing speed of LD. Popular choices for non-reversible dynamics include underdamped Langevin dynamics (ULD), which uses second-order dynamics, and perturbations with skew-symmetric matrices.
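As background for the abstract above, the basic (reversible, overdamped) LD sampler is the unadjusted Langevin algorithm: a gradient step on the potential U plus Gaussian noise. A minimal sketch for a standard-normal target, with illustrative step size and chain length (not taken from the paper):

```python
import numpy as np

def langevin_sample(grad_U, x0, step=0.1, n_steps=100_000, rng=None):
    # Unadjusted (overdamped) Langevin dynamics:
    #   x_{k+1} = x_k - step * grad_U(x_k) + sqrt(2 * step) * xi_k,
    # whose stationary distribution approximates p(x) ∝ exp(-U(x)).
    rng = np.random.default_rng(0) if rng is None else rng
    x = x0
    out = np.empty(n_steps)
    for k in range(n_steps):
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * rng.standard_normal()
        out[k] = x
    return out

# Target: standard normal, so U(x) = x^2 / 2 and grad_U(x) = x.
samples = langevin_sample(lambda x: x, x0=0.0)
```

ULD and skew-symmetric perturbations modify this update (adding a momentum variable, or a non-symmetric drift term) so that the dynamics are no longer reversible, which is what speeds up mixing.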
IEEE Trans Pattern Anal Mach Intell
March 2016
We propose a method for unsupervised many-to-many object matching from multiple networks, which is the task of finding correspondences between groups of nodes in different networks. For example, the proposed method can discover shared word groups from multi-lingual document-word networks without cross-language alignment information. We assume that multiple networks share groups, and each group has its own interaction pattern with other groups.
We propose a new method, parametric embedding (PE), that embeds objects with the class structure into a low-dimensional visualization space. PE takes as input a set of class-conditional probabilities for given data points and tries to preserve the structure in an embedding space by minimizing a sum of Kullback-Leibler divergences, under the assumption that samples are generated by a Gaussian mixture with equal covariances in the embedding space. PE has many potential uses depending on the source of the input data, providing insight into the classifier's behavior in supervised, semisupervised, and unsupervised settings.
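The objective described above can be sketched directly: match the given class posteriors p(c|x_n) to the posteriors q(c|z_n) induced by an equal-covariance Gaussian mixture in the embedding space, by gradient descent on the KL sum. This is a simplified illustration, not the paper's algorithm: the class means M are held fixed here (PE also optimizes them), and the learning rate and iteration count are arbitrary:

```python
import numpy as np

def embed_pe(P, M, n_iter=500, lr=0.1, rng=None):
    # Minimize sum_n KL(p(.|x_n) || q(.|z_n)) over embeddings z_n, where
    # q(c|z) ∝ exp(-|z - m_c|^2 / 2) is the posterior of a Gaussian
    # mixture with equal (unit) covariances and equal priors.
    rng = np.random.default_rng(0) if rng is None else rng
    N, C = P.shape
    Z = 0.01 * rng.standard_normal((N, M.shape[1]))
    for _ in range(n_iter):
        d2 = ((Z[:, None, :] - M[None, :, :]) ** 2).sum(-1)   # (N, C)
        logits = -0.5 * d2
        logits -= logits.max(1, keepdims=True)                # stable softmax
        Q = np.exp(logits)
        Q /= Q.sum(1, keepdims=True)
        # Gradient of the KL sum w.r.t. z_n: sum_c (P_nc - Q_nc)(z_n - m_c).
        diff = Z[:, None, :] - M[None, :, :]                   # (N, C, D)
        Z -= lr * ((P - Q)[:, :, None] * diff).sum(1)
    return Z

# Two classes with fixed 2-D means: a point whose class posterior is
# confidently class 0 is pulled toward m_0, and vice versa.
M = np.array([[-1.0, 0.0], [1.0, 0.0]])
P = np.array([[0.95, 0.05], [0.05, 0.95]])
Z = embed_pe(P, M)
```

The equal-covariance assumption is what makes q(c|z) a softmax over negative squared distances, so points with similar class posteriors land near each other in the visualization.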