Departmental Papers (CIS)

Date of this Version

June 2003

Document Type

Journal Article

Comments

Postprint version. Published in Journal of Machine Learning Research, Vol. 4, June 2003, pages 119-155.
Publisher URL: http://www.jmlr.org/papers/v4/saul03a.html

Abstract

The problem of dimensionality reduction arises in many fields of information processing, including machine learning, data compression, scientific visualization, pattern recognition, and neural computation. Here we describe locally linear embedding (LLE), an unsupervised learning algorithm that computes low dimensional, neighborhood preserving embeddings of high dimensional data. The data, assumed to be sampled from an underlying manifold, are mapped into a single global coordinate system of lower dimensionality. The mapping is derived from the symmetries of locally linear reconstructions, and the actual computation of the embedding reduces to a sparse eigenvalue problem. Notably, the optimizations in LLE--though capable of generating highly nonlinear embeddings--are simple to implement, and they do not involve local minima. In this paper, we describe the implementation of the algorithm in detail and discuss several extensions that enhance its performance. We present results of the algorithm applied to data sampled from known manifolds, as well as to collections of images of faces, lips, and handwritten digits. These examples are used to provide extensive illustrations of the algorithm’s performance--both successes and failures--and to relate the algorithm to previous and ongoing work in nonlinear dimensionality reduction.
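The abstract outlines three steps: find each point's nearest neighbors, compute weights that best reconstruct each point from its neighbors, and then obtain the embedding from a sparse eigenvalue problem. A minimal sketch of those steps (not the authors' reference implementation; function name, regularization constant, and dense linear algebra are illustrative choices) might look like:

```python
import numpy as np

def lle(X, n_neighbors=5, n_components=2, reg=1e-3):
    """Sketch of locally linear embedding on data X (n_samples x n_features)."""
    n = X.shape[0]
    # Step 1: k nearest neighbors by Euclidean distance.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nbrs = np.argsort(d2, axis=1)[:, :n_neighbors]
    # Step 2: reconstruction weights, constrained to sum to one.
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]                         # neighbors centered on x_i
        C = Z @ Z.T                                   # local Gram matrix
        C += reg * np.trace(C) * np.eye(n_neighbors)  # regularize (assumed constant)
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, nbrs[i]] = w / w.sum()
    # Step 3: embedding from the bottom eigenvectors of M = (I - W)^T (I - W),
    # discarding the bottom (constant) eigenvector.
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    _, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]
```

For clarity this sketch uses dense eigendecomposition; as the abstract notes, the actual computation reduces to a *sparse* eigenvalue problem, which a practical implementation would exploit.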

Keywords

algorithms, experimentation, performance, theory


Date Posted: 13 October 2004

This document has been peer reviewed.