Departmental Papers (CIS)

Date of this Version

December 2001

Document Type

Conference Paper

Comments

Copyright MIT Press. Postprint version. Published in Advances in Neural Information Processing Systems 14, Volume 2, pages 889-896. Proceedings of the 15th annual Neural Information Processing Systems (NIPS) conference, held 3-8 December 2001 in British Columbia, Canada.

Abstract

High dimensional data that lies on or near a low dimensional manifold can be described by a collection of local linear models. Such a description, however, does not provide a global parameterization of the manifold—arguably an important goal of unsupervised learning. In this paper, we show how to learn a collection of local linear models that solves this more difficult problem. Our local linear models are represented by a mixture of factor analyzers, and the “global coordination” of these models is achieved by adding a regularizing term to the standard maximum likelihood objective function. The regularizer breaks a degeneracy in the mixture model’s parameter space, favoring models whose internal coordinate systems are aligned in a consistent way. As a result, the internal coordinates change smoothly and continuously as one traverses a connected path on the manifold—even when the path crosses the domains of many different local models. The regularizer takes the form of a Kullback-Leibler divergence and illustrates an unexpected application of variational methods: not to perform approximate inference in intractable probabilistic models, but to learn more useful internal representations in tractable ones.
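
As a minimal sketch of the kind of objective the abstract describes (the notation here is illustrative and not taken from the paper: x_n are the observed data points, theta the parameters of the mixture of factor analyzers, Q a family of coordinated approximating posteriors, P the model posterior, and lambda a trade-off weight):

    \mathcal{L}(\theta) \;=\; \sum_n \log p(x_n \mid \theta) \;-\; \lambda \sum_n \mathrm{KL}\!\left( Q(\cdot \mid x_n) \,\middle\|\, P(\cdot \mid x_n, \theta) \right)

Maximizing the first term alone is the standard maximum likelihood objective for the mixture; the Kullback-Leibler term plays the role of the regularizer, penalizing models whose local internal coordinate systems are not aligned with one another, which is what the abstract calls "global coordination."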

Date Posted: 11 September 2005