8 results match your criteria: "The University of Paisley[Affiliation]"
We introduce a set of clustering algorithms whose performance function is designed to overcome one of the weaknesses of K-means: its sensitivity to initial conditions, which leads it to converge to a local rather than the global optimum. We derive online learning algorithms and illustrate their convergence to optimal solutions that K-means fails to find. We then extend the algorithm by underpinning it with a latent space, which enables a topology-preserving mapping to be found.
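The abstract does not reproduce the performance function itself, so the sketch below is only a generic illustration of the online-learning setting it describes: stochastic (online) K-means with a farthest-point initialisation, one standard way of reducing sensitivity to initial conditions. All function names, parameter values, and the toy data are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: three well-separated 2-D clusters
data = np.vstack([rng.normal(loc=c, scale=0.1, size=(50, 2))
                  for c in ([0.0, 0.0], [3.0, 0.0], [0.0, 3.0])])
rng.shuffle(data)

def farthest_point_init(X, k):
    """Spread the initial centres out: each new centre is the data
    point farthest from all centres chosen so far."""
    centres = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centres], axis=0)
        centres.append(X[np.argmax(d)])
    return np.array(centres)

def online_kmeans(X, k, epochs=5, lr0=0.5):
    """Online K-means: nudge the winning centre towards each sample
    as it arrives, with a decaying learning rate."""
    centres = farthest_point_init(X, k)
    t = 0
    for _ in range(epochs):
        for x in X:
            t += 1
            lr = lr0 / (1.0 + 0.01 * t)
            winner = np.argmin(((centres - x) ** 2).sum(axis=1))
            centres[winner] += lr * (x - centres[winner])
    return centres

centres = online_kmeans(data, k=3)
```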
J Nurs Manag
March 2008
The University of Paisley, Paisley, UK.
Aim: To explore what life was like for frail older people, classed as 'delayed discharges'.
Background: Delayed discharge or 'bed blocking' is when a patient is inappropriately occupying a hospital bed. Most delayed discharges are frail older people who are waiting until a care home bed is available for them to move to.
Int J Neural Syst
June 2005
Applied Computational Intelligence Research Unit, The University of Paisley, Scotland.
The use of self-organizing maps to analyze data often depends on finding effective methods to visualize the SOM's structure. In this paper we propose a new way to perform that visualization using a variant of Andrews' Curves. We also show that the interaction between these two methods allows us to find sub-clusters within identified clusters.
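Andrews' Curves map each data point to a one-dimensional function of t built from a Fourier-style basis, so similar points trace similar curves. A minimal sketch of the classic construction follows (the variant the paper actually proposes may differ; the function name and example values are assumptions):

```python
import numpy as np

def andrews_curve(x, t):
    """Classic Andrews' curve for a data point x:
    f_x(t) = x0/sqrt(2) + x1*sin(t) + x2*cos(t) + x3*sin(2t) + ...
    Euclidean distances between points are preserved (up to a constant
    factor) as L2 distances between curves."""
    f = np.full_like(t, x[0] / np.sqrt(2.0))
    for i, coeff in enumerate(x[1:]):
        k = i // 2 + 1
        f += coeff * (np.sin(k * t) if i % 2 == 0 else np.cos(k * t))
    return f

t = np.linspace(-np.pi, np.pi, 201)
curve = andrews_curve(np.array([1.0, 2.0, 3.0, 4.0]), t)
```

In the SOM setting, each code vector would be rendered as one such curve, so clusters appear as bundles of similar curves and sub-clusters as strands within a bundle.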
Neural Netw
April 2004
Applied Computational Intelligence Research Unit, The University of Paisley, Paisley, Scotland, UK.
Principal Curves are extensions of Principal Component Analysis: smooth curves that pass through the middle of a data set. We extend the method so that, on pairs of data sets which have underlying non-linear correlations, we have pairs of curves which go through the 'centre' of the data sets in such a way that the non-linear correlations between the data sets are captured. The core of the method is to iteratively average the current local projections of the data points, which produces an increasingly sparsified set of nodes.
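The local-averaging idea for a single data set can be sketched very crudely: seed a set of nodes along the first principal component, then repeatedly replace each node by the mean of the points nearest to it. The paper's actual projection-and-averaging step and its sparsification schedule are more refined; everything below (names, node count, toy data) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy samples along the parabola y = x^2
x = rng.uniform(-1.0, 1.0, 300)
X = np.column_stack([x, x ** 2 + rng.normal(0.0, 0.05, x.size)])

def local_average_curve(X, n_nodes=20, n_iter=10):
    """Seed nodes along the first principal component, then repeatedly
    replace each node by the local average of its nearest points."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    proj = (X - mean) @ Vt[0]                       # 1-D ordering of the data
    qs = np.quantile(proj, np.linspace(0.0, 1.0, n_nodes))
    nodes = mean + np.outer(qs, Vt[0])              # initial nodes on the PC line
    for _ in range(n_iter):
        d = ((X[:, None, :] - nodes[None, :, :]) ** 2).sum(-1)
        nearest = d.argmin(axis=1)                  # assign points to nodes
        for j in range(n_nodes):
            pts = X[nearest == j]
            if len(pts):
                nodes[j] = pts.mean(axis=0)         # local average
    return nodes

nodes = local_average_curve(X)
```

The nodes pull off the straight PCA line and onto the curved 'middle' of the data, which is the behaviour a Principal Curve is meant to capture.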
Neural Netw
March 2004
Applied Computational Intelligence Research Unit, The University of Paisley, Paisley, Scotland PA1 2BE, UK.
We review a recent neural implementation of Canonical Correlation Analysis and show, using ideas suggested by Ridge Regression, how to make the algorithm robust. The network is shown to operate on data sets which exhibit multicollinearity. We develop a second model which performs as well not only on multicollinear data but also on general data sets.
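The ridge idea — adding a small multiple of the identity to the covariance matrices before inverting them — can be shown directly in the batch (non-neural) form of CCA. The sketch below is an assumption-laden illustration, not the paper's network: the λ value and the deliberately multicollinear toy data are made up.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two views of a shared signal z; the two columns of X are almost
# identical, so Cxx is nearly singular (multicollinearity)
z = rng.normal(size=500)
X = np.column_stack([z, z + 1e-6 * rng.normal(size=500)])
Y = np.column_stack([z + 0.1 * rng.normal(size=500), rng.normal(size=500)])

def ridge_cca_first_corr(X, Y, lam=1e-3):
    """First canonical correlation with ridge-regularised covariances,
    which keeps the eigenproblem stable under multicollinearity."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = len(X)
    Cxx = X.T @ X / n + lam * np.eye(X.shape[1])   # ridge term here
    Cyy = Y.T @ Y / n + lam * np.eye(Y.shape[1])   # and here
    Cxy = X.T @ Y / n
    M = np.linalg.solve(Cxx, Cxy) @ np.linalg.solve(Cyy, Cxy.T)
    vals, _ = np.linalg.eig(M)
    return np.sqrt(np.clip(vals.real.max(), 0.0, 1.0))

rho = ridge_cca_first_corr(X, Y)
```

Without the ridge terms, solving against the near-singular Cxx amplifies noise; with them, the recovered correlation stays close to the true correlation between z and its noisy copy.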
Neural Netw
December 1999
Department of Computing and Information Systems, Applied Computational Intelligence Research Unit, The University of Paisley, Paisley, UK
We derive a new method of performing Canonical Correlation Analysis with Artificial Neural Networks. We demonstrate the network's capabilities on artificial data and then compare its effectiveness with that of a standard statistical method on real data. We demonstrate the capabilities of the network in two situations where standard statistical techniques are not effective: where we have correlations stretching over three data sets and where the maximum nonlinear correlation is greater than any linear correlation.
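A single-weight sketch of how such a network can be trained online: Hebbian-style updates maximise the correlation between the two outputs while Lagrange multipliers push each output towards unit variance. The update rules below are an assumption patterned on this family of algorithms, not a transcription of the paper's; learning rates and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two one-dimensional views of a shared latent signal z
z = rng.normal(size=4000)
x1 = z + 0.2 * rng.normal(size=4000)
x2 = z + 0.2 * rng.normal(size=4000)

# Online rule: maximise E[y1*y2] subject to E[y1^2] = E[y2^2] = 1,
# with Lagrange multipliers l1, l2 learned alongside the weights.
w1 = w2 = 0.1            # single weights, since each view is 1-D
l1 = l2 = 1.0
eta, eta_l = 0.01, 0.01  # illustrative learning rates
for a, b in zip(x1, x2):
    y1, y2 = w1 * a, w2 * b
    w1 += eta * a * (y2 - l1 * y1)      # Hebbian cross-term, constrained
    w2 += eta * b * (y1 - l2 * y2)
    l1 += eta_l * (y1 * y1 - 1.0)       # enforce unit output variance
    l2 += eta_l * (y2 * y2 - 1.0)

# Correlation between the two learned outputs over the data
corr = np.corrcoef(w1 * x1, w2 * x2)[0, 1]
```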
Int J Neural Syst
October 2000
Applied Computational Intelligence Research Unit, The University of Paisley, Scotland.
We review a neural implementation of the statistical technique of Canonical Correlation Analysis (CCA) and extend it to nonlinear CCA. We then derive the method of kernel-based CCA and compare these two methods on real and artificial data sets before using both on the Blind Separation of Sources.
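Kernel CCA replaces the covariance matrices with centred Gram matrices, so it can recover dependencies that linear CCA misses entirely. The sketch below uses a Hardoon-style regularised formulation with RBF kernels as an assumption — the paper's own derivation may differ, and the kernel width, regularisation, and data are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# A purely nonlinear dependency: the second view is (roughly) the square
# of the first, so their linear correlation is close to zero
x = rng.uniform(-1.0, 1.0, 200)
X = x[:, None]
Y = (x ** 2 + 0.05 * rng.normal(size=200))[:, None]

def rbf_gram(A, gamma=5.0):
    """RBF Gram matrix K[i, j] = exp(-gamma * ||a_i - a_j||^2)."""
    d2 = ((A[:, None, :] - A[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def centre(K):
    n = len(K)
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def kcca_first_corr(X, Y, gamma=5.0, reg=0.1):
    """First canonical correlation of regularised kernel CCA."""
    n = len(X)
    Kx = centre(rbf_gram(X, gamma))
    Ky = centre(rbf_gram(Y, gamma))
    Rx = Kx + reg * n * np.eye(n)              # regularised Gram matrices
    Ry = Ky + reg * n * np.eye(n)
    M = np.linalg.solve(Rx, Ky) @ np.linalg.solve(Ry, Kx)
    vals, vecs = np.linalg.eig(M)
    alpha = vecs[:, np.argmax(vals.real)].real
    u = Kx @ alpha                             # X-side canonical projection
    v = Ky @ np.linalg.solve(Ry, Kx @ alpha)   # matching Y-side projection
    return abs(np.corrcoef(u, v)[0, 1])

lin = abs(np.corrcoef(X[:, 0], Y[:, 0])[0, 1])
ker = kcca_first_corr(X, Y)
```

The linear correlation is near zero, while the kernelised correlation is substantial — the contrast the abstract's comparison turns on.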
Network
May 1996
Department of Computing and Information Systems, The University of Paisley, Paisley, PA1 2BE, UK.
We use a simple network combining negative feedback of activation with simple Hebbian learning, which self-organizes to produce a hierarchical classification network. By adding neighbourhood relations to its learning rule, we create a feature map which has the property of retaining the angular properties of the input data, i.e.
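The negative-feedback mechanism itself is compact: the output activation is fed back, subtracted from the input, and simple Hebbian learning is applied to the residual. In expectation this behaves like Oja's subspace rule and finds the principal subspace of the data. A minimal sketch under those assumptions (the paper's hierarchical and neighbourhood extensions are not reproduced, and the data and learning rate are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

# Zero-mean data with variances 5, 2, 1 along the three input axes
X = rng.normal(size=(5000, 3)) * np.sqrt(np.array([5.0, 2.0, 1.0]))

W = rng.normal(scale=0.1, size=(2, 3))  # two output neurons
eta = 0.001
for x in X:
    y = W @ x                   # feedforward activation
    e = x - W.T @ y             # input minus fed-back activation
    W += eta * np.outer(y, e)   # simple Hebbian learning on the residual

# W's rows should now approximate an orthonormal basis of the top-2
# principal subspace, i.e. the plane of the two highest-variance axes
```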