Alignment by Agreement

Ben Taskar, University of Pennsylvania
Percy Liang, University of California, Berkeley
Dan Klein, University of California, Berkeley

Document Type: Conference Paper

Abstract

We present an unsupervised approach to symmetric word alignment in which two simple asymmetric models are trained jointly to maximize a combination of data likelihood and agreement between the models. Compared to the standard practice of intersecting predictions of independently-trained models, joint training provides a 32% reduction in AER. Moreover, a simple and efficient pair of HMM aligners provides a 29% reduction in AER over symmetrized IBM model 4 predictions.
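As a rough illustration of the agreement idea (not code from the paper), the sketch below combines the posterior alignment matrices produced by two directional aligners by element-wise multiplication and keeps only the links on which both models are confident, a simple form of posterior decoding. The matrices, threshold value, and function name are hypothetical; in the paper the two models are additionally trained jointly so that their posteriors agree.

import numpy as np

def combine_posteriors(post_ef, post_fe, threshold=0.5):
    """Combine directional alignment posteriors by element-wise product.

    post_ef[i, j]: posterior that source word i aligns to target word j
                   under the source-to-target model.
    post_fe[i, j]: the same posterior under the target-to-source model,
                   transposed into the same i-by-j orientation.
    Returns the set of (i, j) links whose product posterior exceeds the
    threshold, i.e. links both models jointly believe in.
    """
    product = post_ef * post_fe  # agreement: both models must be confident
    return {(int(i), int(j)) for i, j in zip(*np.nonzero(product > threshold))}

# Tiny made-up example with 2x3 posterior matrices.
p_ef = np.array([[0.90, 0.05, 0.05],
                 [0.10, 0.80, 0.10]])
p_fe = np.array([[0.85, 0.10, 0.05],
                 [0.05, 0.90, 0.05]])
print(combine_posteriors(p_ef, p_fe))  # keeps links (0, 0) and (1, 1)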


Date Posted: 11 July 2012