Statistics Papers

Document Type

Journal Article

Date of this Version

5-2015

Publication Source

Algorithmica

Volume

72

Issue

1

Start Page

193

Last Page

214

DOI

10.1007/s00453-014-9909-1

Abstract

Topic modeling is a generalization of clustering that posits that observations (words in a document) are generated by multiple latent factors (topics), as opposed to just one. The increased representational power comes at the cost of a more challenging unsupervised learning problem for estimating the topic-word distributions when only words are observed, and the topics are hidden.

This work provides a simple and efficient learning procedure that is guaranteed to recover the parameters for a wide class of topic models, including Latent Dirichlet Allocation (LDA). For LDA, the procedure correctly recovers both the topic-word distributions and the parameters of the Dirichlet prior over the topic mixtures, using only trigram statistics (i.e., third-order moments, which may be estimated with documents containing just three words). The method, called Excess Correlation Analysis, is based on a spectral decomposition of low-order moments via two singular value decompositions (SVDs). Moreover, the algorithm is scalable, since the SVDs are carried out only on k × k matrices, where k is the number of latent factors (topics) and is typically much smaller than the dimension of the observation (word) space.
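The two-SVD recipe the abstract describes can be illustrated numerically in the simpler single-topic mixture setting (the paper's LDA variant additionally adjusts the moments using the Dirichlet parameters). The sketch below is an assumption-laden toy, not the paper's algorithm: the dimensions, the ground-truth matrix `A`, and the weights `w` are invented, and population moments are used in place of empirical trigram statistics. It shows the key scalability point, that after whitening, the second decomposition acts only on a k × k matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 50, 3  # vocabulary size and number of topics (invented for illustration)

# Hypothetical ground truth the method must recover: topic-word matrix A
# (columns are word distributions) and topic mixing weights w.
A = rng.dirichlet(np.ones(d), size=k).T        # shape (d, k)
w = np.array([0.5, 0.3, 0.2])

# Population moments of a single-topic-per-document model:
#   M2            = sum_i w_i a_i a_i^T          (word-pair co-occurrence)
#   M3(., ., eta) = sum_i w_i (a_i^T eta) a_i a_i^T  (triples, one mode contracted)
M2 = A @ np.diag(w) @ A.T

# First SVD: whiten M2 so that W^T M2 W = I_k.
U, s, _ = np.linalg.svd(M2)
W = U[:, :k] / np.sqrt(s[:k])                  # shape (d, k)

# Second decomposition, on a k x k matrix only: contract the third moment
# with a random direction W @ theta and whiten the two remaining modes.
theta = rng.normal(size=k)
B = W.T @ A                                    # whitened topic directions
M = B @ np.diag(w * (B.T @ theta)) @ B.T       # k x k matrix W^T M3(., ., W theta) W
lam, V = np.linalg.eigh(M)                     # eigenvectors are the whitened topics

# Un-whiten the eigenvectors to recover the topic-word distributions,
# fixing each eigenvector's sign and renormalizing columns to sum to 1.
A_hat = M2 @ W @ V
A_hat *= np.sign(A_hat.sum(axis=0))
A_hat /= A_hat.sum(axis=0)

# The topic weights fall out of the eigenvalues: sqrt(w_i) = (v_i^T theta) / lam_i.
w_hat = ((V.T @ theta) / lam) ** 2
```

With exact moments, the columns of `A_hat` match the columns of `A` up to permutation, and `w_hat` matches `w` up to the same permutation; with empirical trigram estimates the recovery is approximate.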

Copyright/Permission Statement

The final publication is available at Springer via http://dx.doi.org/10.1007/s00453-014-9909-1.

Keywords

topic models, mixture models, method of moments, latent Dirichlet allocation

Date Posted: 27 November 2017

This document has been peer reviewed.