Clustering based on conditional distributions in an auxiliary space.

Neural Comput

Neural Networks Research Centre, Helsinki University of Technology, FIN-02015 HUT, Finland.

Published: January 2002

We study the problem of learning groups or categories that are local in the continuous primary space but homogeneous with respect to the distributions of an associated auxiliary random variable over a discrete auxiliary space. Assuming that variation in the auxiliary space is meaningful, categories will emphasize similarly meaningful aspects of the primary space. From a data set consisting of pairs of primary and auxiliary items, the categories are learned by minimizing a Kullback-Leibler divergence-based distortion between (implicitly estimated) distributions of the auxiliary data, conditioned on the primary data. Still, the categories are defined in terms of the primary space. An online algorithm resembling traditional Hebb-type competitive learning is introduced for learning the categories. Minimizing the distortion criterion turns out to be equivalent to maximizing the mutual information between the categories and the auxiliary data. In addition, connections to density estimation and to the distributional clustering paradigm are outlined. The method is demonstrated by clustering yeast gene expression data from DNA chips, with biological knowledge about the functional classes of the genes as the auxiliary data.
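The distortion criterion and the online competitive update can be illustrated with a minimal sketch in plain Python. This is not the authors' algorithm: the paper uses soft cluster memberships and a stochastic-approximation rule, whereas the sketch below uses a hard winner-take-all assignment; the function names `kl_divergence` and `online_cluster`, the learning-rate schedule, and the initialization from the first k samples are illustrative assumptions.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between discrete distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def online_cluster(pairs, k, n_classes, lr=0.05, epochs=20):
    """Hard-assignment sketch of clustering by auxiliary distributions.

    Each cluster j keeps a prototype m[j] in the continuous primary space
    (so clusters stay local there) and a distribution psi[j] over the
    discrete auxiliary classes.  For every pair (x, c) the nearest
    prototype wins, and both the prototype and psi[j] are nudged toward
    the sample -- a simplified Hebb-type competitive update.
    """
    m = [list(pairs[i][0]) for i in range(k)]  # init at first k samples
    psi = [[1.0 / n_classes] * n_classes for _ in range(k)]
    for _ in range(epochs):
        for x, c in pairs:
            # winner: nearest prototype in the primary space
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(m[i], x)))
            # move the winning prototype toward x (locality in primary space)
            m[j] = [a + lr * (b - a) for a, b in zip(m[j], x)]
            # move the winner's auxiliary distribution toward the observed class
            onehot = [1.0 if i == c else 0.0 for i in range(n_classes)]
            psi[j] = [a + lr * (b - a) for a, b in zip(psi[j], onehot)]
    return m, psi
```

After training, the KL divergence between a sample's auxiliary distribution and its winner's `psi` is the per-sample distortion being minimized; clusters whose `psi` distributions are sharply peaked carry high mutual information with the auxiliary variable, in line with the equivalence stated in the abstract.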


Source: http://dx.doi.org/10.1162/089976602753284509

Publication Analysis

Top Keywords

auxiliary space (12); primary space (12); auxiliary data (12); auxiliary (8); distributions auxiliary (8); space (6); categories (6); data (6); primary (5); clustering based (4)
