37 results match your criteria: "Neural Networks Research Centre[Affiliation]"
IEEE Trans Neural Netw
October 2012
Neural Networks Research Centre, Helsinki University of Technology, Helsinki FIN-02015, Finland.
Blind separation of source signals usually relies either on the non-Gaussianity of the signals or on their linear autocorrelations. A third approach was introduced by Matsuoka et al. (1995), who showed that source separation can be performed by using the nonstationarity of the signals, in particular the nonstationarity of their variances.
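The variance-nonstationarity idea can be sketched with a simple two-window simultaneous-diagonalization method (an illustrative approach in the same spirit, not necessarily the algorithm of the article; all signals and the mixing matrix below are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000

# Two Gaussian sources whose variances differ between the first and
# second half of the recording (nonstationary variance profiles).
s1 = np.concatenate([rng.normal(0, 1.0, n // 2), rng.normal(0, 3.0, n // 2)])
s2 = np.concatenate([rng.normal(0, 2.0, n // 2), rng.normal(0, 0.5, n // 2)])
S = np.vstack([s1, s2])

A = np.array([[1.0, 0.6], [0.4, 1.0]])   # hypothetical mixing matrix
X = A @ S                                 # observed mixtures

# With x = A s and uncorrelated sources, each window covariance is
# C_k = A D_k A^T with D_k diagonal, so inv(C2) @ C1 is similar to a
# diagonal matrix; its eigenvectors give the unmixing directions.
C1 = np.cov(X[:, : n // 2])
C2 = np.cov(X[:, n // 2 :])
eigvals, V = np.linalg.eig(np.linalg.inv(C2) @ C1)
W = V.real.T                              # rows = unmixing vectors
Y = W @ X                                 # recovered sources (up to order/scale)

# Each recovered signal should correlate strongly with exactly one source.
corr = np.corrcoef(np.vstack([Y, S]))[:2, 2:]
print(np.round(np.abs(corr), 3))
```

Separation succeeds here only because the two sources have different variance ratios across the windows; with stationary variances the eigenvalues coincide and the decomposition is not unique.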
IEEE Trans Neural Netw
June 2010
Neural Networks Research Centre, Helsinki University of Technology, Espoo, Finland.
We introduce a method for deriving a metric, locally based on the Fisher information matrix, into the data space. A self-organizing map (SOM) is computed in the new metric to explore financial statements of enterprises. The metric measures local distances in terms of changes in the distribution of an auxiliary random variable that reflects what is important in the data.
IEEE Trans Neural Netw
October 2012
Neural Networks Research Centre, Helsinki University of Technology, Helsinki, Finland.
The self-organizing map (SOM) is an excellent tool in the exploratory phase of data mining. It projects the input space onto prototypes of a low-dimensional regular grid that can be effectively utilized to visualize and explore properties of the data. When the number of SOM units is large, similar units need to be grouped to facilitate quantitative analysis of the map and the data.
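A minimal SOM sketch, showing the grid of prototypes the abstracts refer to (toy data and illustrative training parameters, not the article's setup):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: three Gaussian clusters in 2-D, stand-ins for data vectors.
data = np.vstack([
    rng.normal([0, 0], 0.1, (100, 2)),
    rng.normal([1, 0], 0.1, (100, 2)),
    rng.normal([0, 1], 0.1, (100, 2)),
])

# A small 5x5 map; grid holds each unit's (row, col) position on the grid.
rows, cols = 5, 5
grid = np.array([(i, j) for i in range(rows) for j in range(cols)], float)
proto = rng.uniform(0, 1, (rows * cols, 2))   # prototype vectors

n_iter = 2000
for t in range(n_iter):
    x = data[rng.integers(len(data))]
    bmu = np.argmin(((proto - x) ** 2).sum(axis=1))   # best-matching unit
    # Gaussian neighborhood and learning rate, both shrinking over time.
    sigma = 2.0 * (0.1 / 2.0) ** (t / n_iter)
    alpha = 0.5 * (0.01 / 0.5) ** (t / n_iter)
    h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
    proto += alpha * h[:, None] * (x - proto)

# Quantization error: mean distance from each sample to its nearest prototype.
qe = np.mean([np.min(np.linalg.norm(proto - x, axis=1)) for x in data])
print(round(qe, 3))
```

Because neighboring grid units are pulled toward the same inputs, nearby prototypes end up similar, which is what makes grouping of similar units (as in the abstract) meaningful.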
IEEE Trans Neural Netw
October 2012
Neural Networks Research Centre, Helsinki University of Technology, Espoo, Finland.
This article describes the implementation of a system that can organize vast document collections according to textual similarities. It is based on the self-organizing map (SOM) algorithm. Statistical representations of the documents' vocabularies are used as the feature vectors.
Neural Netw
October 2006
Helsinki University of Technology, Neural Networks Research Centre, P.O. Box 5400, FI-02015 HUT, Finland.
The Self-Organizing Map (SOM) algorithm was developed for the creation of abstract-feature maps. It has been widely accepted as a data-mining tool, and the principle underlying it may also explain how the feature maps of the brain are formed. However, it is not correct to use this algorithm as a model of pointwise neural projections such as the somatotopic maps or the maps of the visual field, first of all because the SOM does not transfer signal patterns: the winner-take-all function at its output only defines a singular response.
IEEE Trans Neural Netw
May 2006
Neural Networks Research Centre, Helsinki University of Technology, Helsinki 02015 HUT, Finland.
In this paper, we enhance and analyze the Evolving Tree (ETree) data analysis algorithm. The suggested improvements aim to make the system perform better while still maintaining the simple nature of the basic algorithm. We also examine the system's behavior with many different kinds of tests, measurements and visualizations.
Int J Neural Syst
December 2005
Neural Networks Research Centre, Helsinki University of Technology, P. O. Box 5400, FI-02015 HUT, Finland.
Practical data analysis often encounters data sets with both relevant and useless variables. Supervised variable selection is the task of selecting the relevant variables based on some predefined criterion. We propose a robust method for this task.
Neural Netw
January 2005
Neural Networks Research Centre, Helsinki University of Technology, PO Box 5400, FI-02015 HUT, Finland.
In this work, an online algorithm is presented for constructing a self-organizing map (SOM) of symbol strings. Each node of the SOM grid is associated with a model string, which is a variable-length vector sequence. A smooth interpolation method is applied during training, adapting both the symbol content and the length of the model string simultaneously.
Neural Netw
January 2005
Neural Networks Research Centre, Helsinki University of Technology, PO Box 5400, FI-02015 HUT, Finland.
We have earlier introduced a principle for learning metrics, which shows how metric-based methods can be made to focus on discriminative properties of data. The main applications are in supervising unsupervised learning to model interesting variation in data, instead of modeling all variation as plain unsupervised learning does. The metrics are derived by approximations to an information-geometric formulation.
IEEE Trans Neural Netw
July 2004
Neural Networks Research Centre, Helsinki University of Technology, FI-02015 HUT, Finland.
Bits-back coding, first introduced by Wallace in 1990 and later by Hinton and van Camp in 1993, provides an interesting link between Bayesian learning and information-theoretic minimum-description-length (MDL) learning approaches. Bits-back coding allows interpreting the cost function used in the variational Bayesian method called ensemble learning as a code length, in addition to the Bayesian view of it as the misfit of the posterior approximation and a lower bound on the model evidence. Combining these two viewpoints provides interesting insights into the learning process and the functions of different parts of the model.
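The two viewpoints can be made explicit with the standard decomposition of the ensemble-learning cost for data $X$, parameters $\theta$, and approximating posterior $q(\theta)$ (a sketch in conventional variational-Bayes notation, not necessarily the paper's):

```latex
C(q) \;=\; \mathbb{E}_{q(\theta)}\!\left[\ln \frac{q(\theta)}{p(X,\theta)}\right]
      \;=\; D_{\mathrm{KL}}\!\big(q(\theta)\,\|\,p(\theta \mid X)\big) \;-\; \ln p(X)
```

The first form is the expected code length of the data and parameters under bits-back coding; the second shows the same quantity as the posterior-approximation misfit minus the log evidence, so minimizing $C(q)$ simultaneously tightens the bound $-C(q) \le \ln p(X)$ and fits the approximate posterior.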
IEEE Trans Neural Netw
May 2004
Neural Networks Research Centre, Helsinki University of Technology, 02015 HUT, Finland.
Changes in a dynamical process are often detected by monitoring selected indicators directly obtained from the process observations, such as the mean values or variances. Standard change detection algorithms such as the Shewhart control charts or the cumulative sum (CUSUM) algorithm are often based on such first- and second-order statistics. Much better results can be obtained if the dynamical process is properly modeled, for example by a nonlinear state-space model, and then the accuracy of the model is monitored over time.
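The CUSUM baseline mentioned above can be sketched in its conventional one-sided textbook form (illustrative slack and threshold values, not parameters from the article; the model-based monitor the abstract advocates would replace the raw observations with model residuals):

```python
import numpy as np

def cusum(x, target_mean, k=0.5, h=8.0):
    """One-sided (upward) CUSUM on standardized observations.

    k is the reference value (slack) and h the decision threshold,
    both in units of standard deviations; conservative illustrative
    defaults, not values from the article.
    """
    g = 0.0
    for t, xi in enumerate(x):
        g = max(0.0, g + (xi - target_mean) - k)
        if g > h:
            return t          # index at which the change is flagged
    return None

rng = np.random.default_rng(2)
# The mean shifts from 0 to 1.5 at t = 200.
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(1.5, 1, 100)])
print(cusum(x, target_mean=0.0))
```

The statistic accumulates only evidence above the slack k, so small fluctuations are forgotten while a sustained mean shift drives it over the threshold within a few samples of the change.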
Neural Comput
September 2004
Neural Networks Research Centre, Helsinki University of Technology, 02015 HUT, Finland.
The instantaneous noise-free linear mixing model in independent component analysis is largely a solved problem under the usual assumption of independent nongaussian sources and full column rank mixing matrix. However, with some prior information on the sources, like positivity, new analysis and perhaps simplified solution methods may yet become possible. In this letter, we consider the task of independent component analysis when the independent sources are known to be nonnegative and well grounded, which means that they have a nonzero pdf in the region of zero.
Neuroimage
July 2004
Neural Networks Research Centre, Helsinki University of Technology, Helsinki, Finland.
Recently, independent component analysis (ICA) has been widely used in the analysis of brain imaging data. An important problem with most ICA algorithms is, however, that they are stochastic; that is, their results may be somewhat different in different runs of the algorithm. Thus, the outputs of a single run of an ICA algorithm should be interpreted with some reserve, and further analysis of the algorithmic reliability of the components is needed.
Logoped Phoniatr Vocol
February 2004
Neural Networks Research Centre, Helsinki University of Technology, P.O. Box 5400, 02015 HUT, Finland.
Vocalisations of six Macaca arctoides, categorised by their social context and by the judgements of naive human listeners as expressions of plea/submission, anger, fear, dominance, contentment, or emotional neutrality, were compared with vowel samples extracted from simulated emotional-motivational connotations of the Finnish name Saara and the English name Sarah. The names were spoken by seven Finnish and 13 English women. Humans and monkeys resembled each other in the following respects.
BMC Bioinformatics
October 2003
Neural Networks Research Centre, Helsinki University of Technology, P.O. Box 9800, FIN-02015 HUT, Finland.
Background: Conventionally, the first step in analyzing the large, high-dimensional data sets measured by microarrays is visual exploration. Dendrograms of hierarchical clustering, self-organizing maps (SOMs), and multidimensional scaling have been used to visualize similarity relationships of data samples. We address two central properties of the methods: (i) are the visualizations trustworthy?
Network
August 2003
Neural Networks Research Centre, Helsinki University of Technology, PO Box 9800, 02015 HUT, Finland.
We present a two-layer dynamic generative model of the statistical structure of natural image sequences. The second layer of the model is a linear mapping from simple-cell outputs to pixel values, as in most work on natural image statistics. The first layer models the dependencies of the activity levels (amplitudes or variances) of the simple cells, using a multivariate autoregressive model.
Philos Trans A Math Phys Eng Sci
June 2003
Helsinki University of Technology, Neural Networks Research Centre, PO Box 5400, 02015 HUT, Finland.
J Opt Soc Am A Opt Image Sci Vis
July 2003
Neural Networks Research Centre, Helsinki University of Technology, P.O. Box 9800, FIN-02015 HUT, Finland.
Recently, different models of the statistical structure of natural images have been proposed. These models predict properties of biological visual systems and can be used as priors in Bayesian inference. The fundamental model is independent component analysis, which can be estimated by maximization of the sparsenesses of linear filter outputs.
Sleep
June 2003
Neural Networks Research Centre, Helsinki University of Technology, Espoo, Finland.
Objectives: To develop a method for automatic detection of blinks in electrooculograms and to evaluate reliability of blink rate as an indicator of wake and sleep in subjects with developmental brain disorders.
Design: Categorization of wake and sleep by blink rate was compared with visual sleep scoring of the polysomnograms.
Setting: Ambulatory polysomnographic recordings at home or in the sleep laboratory.
Neural Comput
March 2003
Neural Networks Research Centre, Helsinki University of Technology, 02015 HUT, Finland.
Recently, statistical models of natural images have shown the emergence of several properties of the visual cortex. Most models have considered the nongaussian properties of static image patches, leading to sparse coding or independent component analysis. Here we consider the basic time dependencies of image sequences instead of their nongaussianity.
Neural Netw
December 2002
Helsinki University of Technology, Neural Networks Research Centre, Finland.
Cluster structure of gene expression data obtained from DNA microarrays is analyzed and visualized with the Self-Organizing Map (SOM) algorithm. The SOM forms a non-linear mapping of the data to a two-dimensional map grid that can be used as an exploratory data analysis tool for generating hypotheses on the relationships, and ultimately the functions, of the genes. Similarity relationships within the data and cluster structures can be visualized and interpreted.
Neural Netw
December 2002
Neural Networks Research Centre, Helsinki University of Technology, Finland.
The self-organizing map (SOM) represents an open set of input samples by a topologically organized, finite set of models. In this paper, a new version of the SOM is used for the clustering, organization, and visualization of a large database of symbol sequences (viz. protein sequences).
Proc Natl Acad Sci U S A
July 2002
Neural Networks Research Centre, Helsinki University of Technology, P.O. Box 5400, FIN-02015 HUT, Espoo, Finland.
An explanation, based on simple analysis of the spatiotemporal variations of the visual environment, is given to the automatic capture and focusing of visual attention. It is assumed that the transmittance for the sensory signals is modulated by separate control circuits that sample input from the same area of the visual field but at a lower resolution. When these circuits detect significant spatial and/or temporal variations, they "open gates" for the more accurate information arising from the same area.
Vision Res
June 2002
Neural Networks Research Centre, Helsinki University of Technology, P.O. Box 9800, FIN-02015 HUT, Finland.
An important approach in visual neuroscience considers how the function of the early visual system relates to the statistics of its natural input. Previous studies have shown how many basic properties of the primary visual cortex, such as the receptive fields of simple and complex cells and the spatial organization (topography) of the cells, can be understood as efficient coding of natural images. Here we extend the framework by considering how the responses of complex cells could be sparsely represented by a higher-order neural layer.
Neural Comput
January 2002
Neural Networks Research Centre, Helsinki University of Technology, FIN-02015 HUT, Finland.
We study the problem of learning groups or categories that are local in the continuous primary space but homogeneous with respect to the distributions of an associated auxiliary random variable over a discrete auxiliary space. Assuming that variation in the auxiliary space is meaningful, categories will emphasize similarly meaningful aspects of the primary space. From a data set consisting of pairs of primary and auxiliary items, the categories are learned by minimizing a Kullback-Leibler divergence-based distortion between (implicitly estimated) distributions of the auxiliary data, conditioned on the primary data.
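The distortion idea can be illustrated with the discrete Kullback-Leibler divergence between conditional distributions of the auxiliary variable (the three distributions below are hypothetical, chosen only to show the behavior):

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) for discrete distributions."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical conditional distributions p(c | x) of a discrete auxiliary
# variable c at three points of the primary space. Points with similar
# auxiliary distributions are "close" in this sense, regardless of their
# plain Euclidean distance in the primary space.
p_a = [0.7, 0.2, 0.1]
p_b = [0.6, 0.3, 0.1]   # similar to p_a -> small distortion
p_c = [0.1, 0.2, 0.7]   # different     -> large distortion

print(round(kl(p_a, p_b), 4), round(kl(p_a, p_c), 4))
```

Grouping points so that this divergence is small within each category yields clusters that are homogeneous in the auxiliary distributions, which is the distortion criterion described in the abstract.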