Statistics Papers

Document Type

Journal Article

Date of this Version

2009

Publication Source

Proceedings of the 26th Annual International Conference on Machine Learning

Volume

ICML '09

Start Page

129

Last Page

136

DOI

10.1145/1553374.1553391

Abstract

Clustering data in high dimensions is believed to be a hard problem in general. A number of efficient clustering algorithms developed in recent years address this problem by projecting the data into a lower-dimensional subspace, e.g. via Principal Components Analysis (PCA) or random projections, before clustering. Such techniques typically impose stringent requirements on the separation between the cluster means in order for the algorithm to be successful.

Here, we show how using multiple views of the data can relax these stringent requirements. We use Canonical Correlation Analysis (CCA) to project the data in each view to a lower-dimensional subspace. Under the assumption that, conditioned on the cluster label, the views are uncorrelated, we show that the separation conditions required for the algorithm to be successful are rather mild (significantly weaker than those of prior results in the literature). We provide results for mixtures of Gaussians, mixtures of log-concave distributions, and mixtures of product distributions.

Comments

At the time of publication, author Sham M. Kakade was affiliated with Toyota Technological Institute at Chicago. Currently, he is a faculty member at the Statistics Department at the University of Pennsylvania.


Date Posted: 27 November 2017