Topic modeling is a generalization of clustering that posits that observations (words in a document) are generated by multiple latent factors (topics), as opposed to just one. The increased representational power comes at the cost of a more challenging unsupervised learning problem for estimating the topic-word distributions when only words are observed, and the topics are hidden.
This work provides a simple and efficient learning procedure that is guaranteed to recover the parameters for a wide class of topic models, including Latent Dirichlet Allocation (LDA). For LDA, the procedure correctly recovers both the topic-word distributions and the parameters of the Dirichlet prior over the topic mixtures, using only trigram statistics (i.e., third-order moments, which may be estimated from documents containing just three words). The method, called Excess Correlation Analysis (ECA), is based on a spectral decomposition of low-order moments via two singular value decompositions (SVDs). Moreover, the algorithm is scalable, since the SVDs are carried out only on k × k matrices, where k is the number of latent factors (topics) and is typically much smaller than the dimension of the observation (word) space.
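To illustrate the two-SVD idea, the following is a minimal sketch in the simplified single-topic (pure mixture) setting, a special case of the models the paper covers; it is not the full ECA algorithm for LDA. It uses exact population moments of a toy model: the first SVD whitens the second-order moment matrix M2, and a second k × k eigendecomposition of the whitened, projected third-order moment recovers the topic-word distributions. All variable names and toy dimensions here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 20, 3  # toy vocabulary size and number of topics (assumed values)

# Ground-truth topic-word distributions (columns sum to 1) and topic weights
mu = rng.dirichlet(np.ones(d), size=k).T        # d x k
w = np.array([0.5, 0.3, 0.2])

# Exact population moments for the single-topic mixture:
#   M2       = sum_i w_i mu_i mu_i^T
#   M3(eta)  = sum_i w_i <mu_i, eta> mu_i mu_i^T   (third moment hit with a
#              random probe vector eta, so only a d x d slice is needed)
M2 = (mu * w) @ mu.T
eta = rng.normal(size=d)
M3_eta = (mu * (w * (mu.T @ eta))) @ mu.T

# First SVD: build a whitening matrix W with W.T @ M2 @ W = I_k
U, s, _ = np.linalg.svd(M2)
W = U[:, :k] / np.sqrt(s[:k])                   # d x k

# Second decomposition, on a k x k matrix: the whitened slice
# W.T @ M3(eta) @ W has eigenvectors sqrt(w_i) * W.T @ mu_i
T = W.T @ M3_eta @ W
_, V = np.linalg.eigh(T)

# Un-whiten and renormalize columns to recover the topic-word distributions
B = np.linalg.pinv(W.T) @ V                     # columns proportional to mu_i
B = np.abs(B)                                   # fix eigenvector sign ambiguity
B /= B.sum(axis=0)
```

After running, the columns of B match the columns of mu up to permutation, which reflects the inherent label ambiguity of any mixture model. The computationally heavy step is a single SVD of the d × d matrix M2; everything afterwards operates on k × k matrices, which is the source of the scalability claim.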
The final publication is available at Springer via http://dx.doi.org/10.1007/s00453-014-9909-1.
topic models, mixture models, method of moments, latent Dirichlet allocation
Anandkumar, A., Foster, D. P., Hsu, D., Kakade, S., & Liu, Y. (2015). A Spectral Algorithm for Latent Dirichlet Allocation. Algorithmica, 72 (1), 193-214. http://dx.doi.org/10.1007/s00453-014-9909-1
Date Posted: 27 November 2017
This document has been peer reviewed.