Departmental Papers (CIS)

Date of this Version

6-2006

Document Type

Conference Paper

Comments

Percy Liang, Ben Taskar, and Dan Klein. 2006. Alignment by agreement. In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics (HLT-NAACL '06). Association for Computational Linguistics, Stroudsburg, PA, USA, 104-111. DOI=10.3115/1220835.1220849 http://dx.doi.org/10.3115/1220835.1220849

© ACM, 2006. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics (2006), http://doi.acm.org/10.3115/1220835.1220849. Email permissions@acm.org

Abstract

We present an unsupervised approach to symmetric word alignment in which two simple asymmetric models are trained jointly to maximize a combination of data likelihood and agreement between the models. Compared to the standard practice of intersecting predictions of independently-trained models, joint training provides a 32% reduction in AER. Moreover, a simple and efficient pair of HMM aligners provides a 29% reduction in AER over symmetrized IBM model 4 predictions.
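For orientation, a hedged sketch of the kind of objective the abstract describes; the notation below is illustrative and not quoted from this record. With two asymmetric aligners, p_1 modeling f given e and p_2 modeling e given f, trained over sentence pairs (e, f) with latent alignments a, a joint objective of the sort described combines each model's data likelihood with a term rewarding agreement of their alignment posteriors:

\max_{\theta_1, \theta_2} \; \sum_{(e,f)} \Big[ \log p_1(f \mid e; \theta_1) + \log p_2(e \mid f; \theta_2) + \log \sum_{a} p_1(a \mid e, f; \theta_1) \, p_2(a \mid e, f; \theta_2) \Big]

The third term is largest when the two models concentrate posterior mass on the same alignments, which is the "agreement" the abstract contrasts with intersecting the predictions of independently trained models.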

Date Posted: 16 July 2012