Negative pressure wound therapy (NPWT) is used primarily for tissue defects. In recent years, cardiovascular surgery that was formerly performed via full sternotomy has increasingly been performed through small incisions, but the rate of cardiovascular surgery through median sternotomy remains high in elderly patients, who frequently have complicated cardiovascular diseases. Mediastinitis, among other surgical site infections (SSIs), is a serious complication after cardiovascular surgery that must be resolved.
The patient is a 76-year-old man who underwent aortic and mitral valve replacement 30 years ago, both with mechanical valves. He had been on anticoagulant therapy with warfarin, which was switched to dabigatran two years ago by his primary care physician. He developed shortness of breath afterward and was taken to the hospital with heart failure.
A large number of neurons form cell assemblies that process information in the brain. Recent developments in measurement technology, such as calcium imaging, have made it possible to study cell assemblies. In this study, we aim to extract cell assemblies from calcium imaging data.
Plasticity is one of the most important properties of the nervous system, which enables animals to adjust their behavior to the ever-changing external environment. Changes in synaptic efficacy between neurons constitute one of the major mechanisms of plasticity. Therefore, estimation of neural connections is crucial for investigating information processing in the brain.
Electroencephalography (EEG) is a non-invasive brain imaging technique that describes neural electrical activation with good temporal resolution. Source localization is required for clinical and functional interpretations of EEG signals and is most commonly achieved via the dipole model; however, the number of dipoles in the brain must be determined for a reasonably accurate interpretation. In this paper, we propose a dipole source localization (DSL) method that adaptively estimates the dipole number by using a novel information criterion.
We propose a method for intrinsic dimension estimation. Around an inspection point, we fit a regression model relating a power of the distance from that point to the number of samples contained in a ball of that radius, and evaluate the goodness of fit. Then, by using the maximum likelihood method, we estimate the local intrinsic dimension around the inspection point.
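As an illustration of this idea (not the authors' estimator), the sketch below regresses the logarithm of the neighbor count against the logarithm of the radius around a single inspection point; the data set, radii, and inspection point are made up.

```python
# A minimal sketch: on a d-dimensional manifold, the number of samples inside
# a ball of radius r around a point grows roughly like r^d, so the slope of
# log(count) against log(r) estimates the local intrinsic dimension.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
# a 2-D manifold (a flat patch) embedded in 5-D space
X = np.zeros((2000, 5))
X[:, :2] = rng.uniform(-1, 1, size=(2000, 2))

tree = cKDTree(X)
x0 = X[0]                                  # inspection point
radii = np.linspace(0.1, 0.5, 10)
counts = np.array([len(tree.query_ball_point(x0, r)) for r in radii])

slope, _ = np.polyfit(np.log(radii), np.log(counts), 1)
print(f"estimated local intrinsic dimension: {slope:.2f}")   # close to 2
```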
In a product market or stock market, different products or stocks compete for the same consumers or purchasers. We propose a method to estimate the time-varying transition matrix of product shares from a multivariate time series of the shares. The method is based on the assumption that each of the observed time series of shares is a stationary distribution of the underlying Markov processes characterized by transition probability matrices.
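The sketch below is a simplified illustration rather than the paper's estimator: it fits a single row-stochastic transition matrix to a simulated share series by constrained least squares on consecutive time steps; the products, rates, and noise level are all made up.

```python
# A minimal sketch: assume s[t+1] ~= s[t] @ P with P row-stochastic, and
# estimate P by constrained least squares over the observed share series.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 3                                       # number of competing products
P_true = np.array([[0.8, 0.1, 0.1],
                   [0.2, 0.7, 0.1],
                   [0.1, 0.2, 0.7]])
shares = [np.array([0.6, 0.3, 0.1])]
for _ in range(30):                         # simulate a noisy share time series
    s = shares[-1] @ P_true + 0.005 * rng.normal(size=n)
    s = np.clip(s, 1e-6, None)
    shares.append(s / s.sum())
S = np.array(shares)

def loss(p_flat):                           # squared one-step prediction error
    P = p_flat.reshape(n, n)
    return np.sum((S[1:] - S[:-1] @ P) ** 2)

# each row of P must be a probability vector
constraints = [{"type": "eq", "fun": lambda p, i=i: p.reshape(n, n)[i].sum() - 1.0}
               for i in range(n)]
res = minimize(loss, np.full(n * n, 1.0 / n), method="SLSQP",
               bounds=[(0.0, 1.0)] * (n * n), constraints=constraints)
print(res.x.reshape(n, n).round(2))
```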
This study considers the common situation in data analysis when there are few observations from the distribution of interest (the target distribution), while abundant observations are available from auxiliary distributions. In this situation, it is natural to compensate for the lack of data from the target distribution by using data sets from these auxiliary distributions; in other words, to approximate the target distribution in a subspace spanned by a set of auxiliary distributions. Mixture modeling is one of the simplest ways to integrate information from the target and auxiliary distributions in order to express the target distribution as accurately as possible.
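A minimal sketch of the mixture-modeling idea, under the simplifying assumption that the auxiliary distributions are one-dimensional Gaussians fitted once to their abundant samples and only the mixture weights are learned from the scarce target samples by EM; all numbers are made up.

```python
# A minimal sketch: keep the auxiliary components fixed and fit only the
# mixture weights to the few target observations with EM.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
aux_samples = [rng.normal(-2.0, 1.0, 5000), rng.normal(3.0, 0.5, 5000)]
target = rng.normal(2.0, 1.2, 30)            # few observations of the target

components = [norm(a.mean(), a.std()) for a in aux_samples]  # fitted once, then fixed
K = len(components)
w = np.full(K, 1.0 / K)

for _ in range(200):                         # EM over the weights only
    dens = np.array([c.pdf(target) for c in components])     # K x N densities
    resp = w[:, None] * dens
    resp /= resp.sum(axis=0, keepdims=True)                   # E-step: responsibilities
    w = resp.mean(axis=1)                                      # M-step: new weights
print("estimated mixture weights:", w.round(3))
```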
Background: Knowledge about the distribution, strength, and direction of synaptic connections within neuronal networks is crucial for understanding brain function. Electrophysiology using multiple electrodes provides very high temporal resolution but does not yield sufficient spatial information for resolving neuronal connection topology. Optical recording techniques with single-cell resolution hold promise for providing this spatial information.
An image super-resolution method from multiple observations of low-resolution images is proposed. The method is based on sub-pixel accuracy block matching for estimating relative displacements of observed images, and sparse signal representation for estimating the corresponding high-resolution image, where the correspondence between high- and low-resolution images is modeled by a certain degradation process. Relative displacements of small patches of observed low-resolution images are accurately estimated by a computationally efficient block matching method.
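The sketch below illustrates only the block-matching ingredient, at integer-pixel accuracy and on synthetic data; the sub-pixel refinement and the sparse-coding reconstruction described in the abstract are omitted.

```python
# A minimal sketch of block matching by sum of squared differences (SSD):
# find the displacement of a small block of the observed image relative to
# the reference image within a search window.
import numpy as np

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
observed = np.roll(ref, shift=(-2, 3), axis=(0, 1))   # simulated displacement

def match_block(ref, obs, top, left, size=16, search=5):
    """Return the (dy, dx) offset into ref that best matches the obs block."""
    block = obs[top:top + size, left:left + size]
    best, best_d = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = ref[top + dy:top + dy + size, left + dx:left + dx + size]
            if cand.shape != block.shape:
                continue
            ssd = np.sum((cand - block) ** 2)
            if ssd < best:
                best, best_d = ssd, (dy, dx)
    return best_d

print(match_block(ref, observed, top=20, left=20))     # prints (2, -3)
```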
Neural Comput, September 2014
Clustering is a representative unsupervised learning method and one of the important approaches in exploratory data analysis. By its very nature, clustering without strong assumptions on the data distribution is desirable. Information-theoretic clustering is a class of clustering methods that optimize information-theoretic quantities such as entropy and mutual information.
A graph is a mathematical representation of a set of variables where some pairs of the variables are connected by edges. Common examples of graphs are railroads, the Internet, and neural networks. It is both theoretically and practically important to estimate the intensity of direct connections between variables.
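One common way to gauge the intensity of direct connections, shown below as a generic illustration rather than the paper's method, is the partial correlation obtained from the inverse covariance (precision) matrix; the toy chain data are made up.

```python
# A minimal sketch: partial correlations from the precision matrix separate
# direct connections from indirect ones in a simple x -> y -> z chain.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(scale=0.6, size=n)
z = 0.8 * y + rng.normal(scale=0.6, size=n)     # no direct x -> z edge
data = np.column_stack([x, y, z])

prec = np.linalg.inv(np.cov(data, rowvar=False))
d = np.sqrt(np.diag(prec))
partial_corr = -prec / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)
print(partial_corr.round(2))    # the x-z entry is near 0: no direct edge
```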
The Shannon information content is a valuable numerical characteristic of probability distributions. The problem of estimating the information content from an observed dataset is very important in the fields of statistics, information theory, and machine learning. The contribution of the present paper is to propose information estimators and to show some of their applications.
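As a point of reference (not one of the estimators proposed in the paper), the sketch below implements the classical Kozachenko-Leonenko k-nearest-neighbor estimator of differential entropy on a toy Gaussian sample.

```python
# A minimal sketch of the Kozachenko-Leonenko kNN entropy estimator:
# H ~ psi(N) - psi(k) + log(V_d) + (d/N) * sum_i log(r_i),
# where r_i is the distance to the k-th nearest neighbor of sample i.
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def kl_entropy(X, k=3):
    """Differential entropy estimate (in nats) from samples X of shape (N, d)."""
    N, d = X.shape
    tree = cKDTree(X)
    r = tree.query(X, k=k + 1)[0][:, -1]           # k-th NN distance (self excluded)
    log_ball_volume = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)
    return digamma(N) - digamma(k) + log_ball_volume + d * np.mean(np.log(r))

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 2))
print(kl_entropy(X))          # true value for N(0, I_2) is log(2*pi*e) ~ 2.84
```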
Biosci Biotechnol Biochem, September 2013
Remarkable progress has been made in genome science during the past decade, but our understanding of eukaryotic genomes is far from complete. We have created DNA flexibility maps of the human, mouse, fruit fly, and nematode chromosomes. The maps revealed that all of these chromosomes have markedly flexible DNA regions, which we named SPIKEs.
The Bradley-Terry model is a statistical representation of preference or ranking data based on pairwise comparison results between items. For estimation of the model, several methods based on the sum of weighted Kullback-Leibler divergences have been proposed in various contexts. The purpose of this letter is to interpret an estimation mechanism of the Bradley-Terry model from the viewpoint of flatness, a fundamental notion used in information geometry.
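For context, the sketch below fits the Bradley-Terry model by the classical minorization-maximization (Zermelo) iteration on a made-up win matrix; the information-geometric interpretation discussed in the letter is not reproduced here.

```python
# A minimal sketch of maximum-likelihood fitting of the Bradley-Terry model:
# p_i <- W_i / sum_{j != i} n_ij / (p_i + p_j), iterated to convergence.
import numpy as np

# wins[i, j] = number of times item i beat item j (toy data)
wins = np.array([[0, 6, 8],
                 [4, 0, 7],
                 [2, 3, 0]], dtype=float)
n = wins.shape[0]
total = wins + wins.T          # total comparisons between each pair
W = wins.sum(axis=1)           # total wins of each item
p = np.ones(n)

for _ in range(200):
    denom = np.array([np.sum(total[i] / (p[i] + p)) for i in range(n)])
    p = W / denom
    p /= p.sum()               # fix the overall scale
print("estimated item strengths:", p.round(3))
```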
Molecular events in biological cells occur in local subregions, where the molecules tend to be small in number. The cytoskeleton, which is important for both the structural changes of cells and their functions, is also a countable entity because of its long fibrous shape. To simulate the local environment using a computer, stochastic simulations should be run.
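A minimal sketch of one standard stochastic simulation scheme, Gillespie's algorithm, applied to a toy assembly-disassembly (birth-death) process with assumed rate constants; it illustrates the general approach rather than the simulations in the paper.

```python
# A minimal sketch of Gillespie's stochastic simulation algorithm for a
# birth-death process: draw the waiting time from the total propensity,
# then pick which reaction fires in proportion to its propensity.
import numpy as np

rng = np.random.default_rng(0)
k_on, k_off = 2.0, 0.1          # assembly / disassembly rates (assumed values)
n, t, t_end = 0, 0.0, 100.0
trajectory = [(t, n)]

while t < t_end:
    rates = np.array([k_on, k_off * n])        # propensities of the two reactions
    total = rates.sum()
    if total == 0:
        break
    t += rng.exponential(1.0 / total)          # time to the next event
    if rng.random() < rates[0] / total:        # which reaction fires
        n += 1                                  # one subunit added
    else:
        n -= 1                                  # one subunit removed
    trajectory.append((t, n))

print("final count:", n, " mean count:", np.mean([c for _, c in trajectory]))
```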
Reducing the dimensionality of high-dimensional data without losing its essential information is an important task in information processing. When class labels of training data are available, Fisher discriminant analysis (FDA) has been widely used. However, the optimality of FDA is guaranteed only in a very restricted ideal circumstance, and it is often observed that FDA does not provide a good classification surface for many real problems.
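For reference, the sketch below implements classical FDA on toy two-class data by solving the generalized eigenvalue problem S_B v = lambda S_W v; the paper's point is precisely that this classical criterion can be suboptimal outside the ideal setting.

```python
# A minimal sketch of classical Fisher discriminant analysis: build the
# within-class and between-class scatter matrices and take the leading
# generalized eigenvector as the discriminant direction.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
X1 = rng.normal([0, 0], 1.0, size=(200, 2))
X2 = rng.normal([3, 1], 1.0, size=(200, 2))
X, y = np.vstack([X1, X2]), np.array([0] * 200 + [1] * 200)

mean_all = X.mean(axis=0)
S_W = np.zeros((2, 2))
S_B = np.zeros((2, 2))
for c in np.unique(y):
    Xc = X[y == c]
    mc = Xc.mean(axis=0)
    S_W += (Xc - mc).T @ (Xc - mc)
    diff = (mc - mean_all)[:, None]
    S_B += len(Xc) * diff @ diff.T

vals, vecs = eigh(S_B, S_W)           # generalized eigenproblem S_B v = lambda S_W v
w = vecs[:, -1]                       # eigenvector with the largest eigenvalue
print("discriminant direction:", (w / np.linalg.norm(w)).round(3))
```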
Given a set of rating data for a set of items, determining preference levels of items is a matter of importance. Various probability models have been proposed to solve this task. One such model is the Plackett-Luce model, which parameterizes the preference level of each item by a real value.
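A minimal sketch of maximum-likelihood fitting of the Plackett-Luce model by gradient ascent on log-parameters, using made-up rankings; it is a generic illustration rather than the estimation procedure studied in the paper.

```python
# A minimal sketch: each ranking is a sequence of choices from the remaining
# items, so the log-likelihood gradient is "chosen item" minus the softmax
# expectation over the items still available at that stage.
import numpy as np

rankings = [[0, 1, 2, 3], [0, 2, 1, 3], [1, 0, 3, 2], [0, 1, 3, 2]]
n_items = 4
theta = np.zeros(n_items)               # log preference levels

def grad_log_likelihood(theta):
    g = np.zeros_like(theta)
    for r in rankings:
        for t in range(len(r) - 1):
            rest = r[t:]                           # items still available
            probs = np.exp(theta[rest])
            probs /= probs.sum()
            g[r[t]] += 1.0                         # the chosen item
            g[rest] -= probs                       # expected choice
    return g

for _ in range(500):
    theta += 0.05 * grad_log_likelihood(theta)
theta -= theta.max()                               # fix the scale
print("estimated preference levels:", np.exp(theta).round(3))
```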
We discuss robustness against mislabeling in multiclass labels for classification problems and propose two boosting algorithms, the normalized Eta-Boost.M and Eta-Boost.M, based on the Eta-divergence.
Previous psychological studies have shown that musical chords primed by the Western musical scale in a tonal and modal schema are perceived in a hierarchy of stability. We investigated such priming effects on auditory magnetic responses to tonic-major and submediant-minor chords preceded by major scales, and to tonic-minor and submediant-major chords preceded by minor scales. Musically trained subjects participated in the experiment.
Boosting is known as a gradient descent algorithm over loss functions. It is often pointed out that the typical boosting algorithm, AdaBoost, is highly affected by outliers. In this letter, loss functions for robust boosting are studied.
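The sketch below is a toy illustration of the sensitivity issue: it compares the exponential loss with one bounded loss on a set of margins containing an outlier. The bounded loss here is a generic example, not one of the losses studied in the letter.

```python
# A minimal sketch: the exponential loss grows without bound on a badly
# misclassified (outlier) example, while a bounded loss caps its influence.
import numpy as np

margins = np.array([3.0, 1.0, 0.0, -1.0, -5.0])   # -5.0 acts like an outlier

exp_loss = np.exp(-margins)                       # unbounded: outlier dominates
bounded_loss = 1.0 - np.tanh(margins)             # bounded: influence is capped

for m, e, b in zip(margins, exp_loss, bounded_loss):
    print(f"margin {m:5.1f}   exp loss {e:9.2f}   bounded loss {b:5.2f}")
```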
Neural Comput, November 2005
By employing the L1 or L∞ norm in maximizing margins, support vector machines (SVMs) result in a linear programming problem that requires a lower computational load than SVMs with the L2 norm. However, how the change of norm affects the generalization ability of SVMs has not been clarified so far except through numerical experiments. In this letter, the geometrical meaning of SVMs with the Lp norm is investigated, and the SVM solutions are shown to have rather little dependency on p.
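For illustration, the sketch below writes a soft-margin SVM with an L1-norm regularizer as a linear program and solves it with scipy.optimize.linprog on toy data; the exact formulation analyzed in the letter may differ.

```python
# A minimal sketch: minimize ||w||_1 + C * sum(xi) subject to
# y_i (w . x_i + b) >= 1 - xi_i, using the split w = w+ - w- so that all
# coefficients are nonnegative and the problem is a linear program.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 0.5, (20, 2)), rng.normal([2, 2], 0.5, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)
n, d = X.shape
C = 1.0

# variable order: [w+ (d), w- (d), b (1), xi (n)]
c = np.concatenate([np.ones(2 * d), [0.0], C * np.ones(n)])
# y_i (w . x_i + b) + xi_i >= 1   <=>   -y_i (w . x_i + b) - xi_i <= -1
A_ub = np.hstack([-y[:, None] * X, y[:, None] * X, -y[:, None], -np.eye(n)])
b_ub = -np.ones(n)
bounds = [(0, None)] * (2 * d) + [(None, None)] + [(0, None)] * n

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
w = res.x[:d] - res.x[d:2 * d]
b = res.x[2 * d]
print("w =", w.round(3), " b =", round(b, 3))
```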
We aim at an extension of AdaBoost to U-Boost, within the paradigm of building a stronger classification machine from a set of weak learning machines. A geometric understanding of the Bregman divergence defined by a generic convex function U leads to the U-Boost method in the framework of information geometry extended to the space of finite measures over a label set. We propose two versions of U-Boost learning algorithms, according to whether the domain is restricted to the space of probability functions.
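For context, the sketch below is plain AdaBoost with decision stumps on toy data, i.e. the special case corresponding to the exponential convex function; the general U-Boost algorithms are not reproduced here.

```python
# A minimal sketch of AdaBoost with axis-aligned decision stumps.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)          # toy target

def best_stump(X, y, w):
    """Return the weighted error and parameters of the best threshold classifier."""
    best = (np.inf, None)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = sign * np.where(X[:, j] > thr, 1, -1)
                err = np.sum(w * (pred != y))
                if err < best[0]:
                    best = (err, (j, thr, sign))
    return best

w = np.full(len(y), 1.0 / len(y))                   # example weights
F = np.zeros(len(y))                                # boosted score
for _ in range(20):
    err, (j, thr, sign) = best_stump(X, y, w)
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
    pred = sign * np.where(X[:, j] > thr, 1, -1)
    F += alpha * pred
    w *= np.exp(-alpha * y * pred)                  # exponential reweighting
    w /= w.sum()

print("training accuracy:", np.mean(np.sign(F) == y))
```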
Natural gradient learning is known to be efficient in escaping plateaus, which are a main cause of the slow learning speed of neural networks. An adaptive natural gradient learning method for practical implementation has also been developed, and its advantage in real-world problems has been confirmed. In this letter, we deal with the generalization performance of the natural gradient method.
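As background, the sketch below applies natural gradient descent to logistic regression, where the Fisher information matrix has a closed form; the adaptive approximation whose generalization performance the letter analyzes is omitted.

```python
# A minimal sketch of natural gradient descent, theta <- theta - eta * F^{-1} g,
# for logistic regression with the exact Fisher information matrix.
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = np.hstack([np.ones((n, 1)), rng.normal(size=(n, 2))])
w_true = np.array([0.5, 2.0, -1.0])
y = (rng.random(n) < 1 / (1 + np.exp(-X @ w_true))).astype(float)

w = np.zeros(3)
for _ in range(100):
    p = 1 / (1 + np.exp(-X @ w))
    grad = X.T @ (p - y) / n                              # gradient of the NLL
    fisher = (X * (p * (1 - p))[:, None]).T @ X / n       # Fisher information
    w -= np.linalg.solve(fisher + 1e-6 * np.eye(3), grad) # natural gradient step
print("estimated weights:", w.round(2))
```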
An adaptive on-line algorithm extending the learning-of-learning idea is proposed and theoretically motivated. Relying only on gradient flow information, it can be applied to learning continuous functions or distributions, even when no explicit loss function is given and the Hessian is not available. The framework is applied to unsupervised and supervised learning.
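The sketch below illustrates the general flavor of such rules with a simple multiplicative step-size adaptation driven by the agreement of successive stochastic gradients; it is a generic illustration, not the algorithm proposed in the paper.

```python
# A minimal sketch: increase the step size when successive stochastic
# gradients point in the same direction, decrease it when they disagree.
import numpy as np

rng = np.random.default_rng(0)

def grad(w, x, y):                      # stochastic gradient of a quadratic loss
    return (w @ x - y) * x

w_true = np.array([1.0, -2.0])
w = np.zeros(2)
eta, mu = 0.01, 0.05
g_prev = np.zeros(2)

for _ in range(2000):
    x = rng.normal(size=2)
    y = w_true @ x + 0.1 * rng.normal()
    g = grad(w, x, y)
    # multiplicative step-size adaptation from gradient agreement, kept in a safe range
    eta = float(np.clip(eta * np.exp(mu * np.sign(g @ g_prev)), 1e-4, 0.2))
    w -= eta * g
    g_prev = g

print("estimated parameters:", w.round(2), " final step size:", round(eta, 4))
```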