Departmental Papers (CIS)

Date of this Version

10-1-2001

Document Type

Journal Article

Comments

Postprint version. Published in Computer Speech & Language, Volume 16, Issue 1, January 2002, pages 69-88.
Publisher URL: http://dx.doi.org/10.1006/csla.2001.0184

Abstract

We survey the use of weighted finite-state transducers (WFSTs) in speech recognition. We show that WFSTs provide a common and natural representation for hidden Markov models (HMMs), context-dependency, pronunciation dictionaries, grammars, and alternative recognition outputs. Furthermore, general transducer operations combine these representations flexibly and efficiently. Weighted determinization and minimization algorithms optimize their time and space requirements, and a weight pushing algorithm distributes the weights along the paths of a weighted transducer optimally for speech recognition. As an example, we describe a North American Business News (NAB) recognition system built using these techniques that combines the HMMs, full cross-word triphones, a lexicon of 40,000 words, and a large trigram grammar into a single weighted transducer that is only somewhat larger than the trigram word grammar and that runs NAB in real time on a very simple decoder. In another example, we show that the same techniques can be used to optimize lattices for second-pass recognition. In a third example, we show how general automata operations can be used to assemble lattices from different recognizers to improve recognition performance.
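To make the composition operation in the abstract concrete, below is a minimal Python sketch of weighted transducer composition over the tropical semiring, where weights combine by addition along a path. All names here (Arc, WFST, compose) and the toy phone/word symbols are illustrative assumptions, not taken from the paper or any library; epsilon transitions are omitted, whereas a full implementation such as OpenFst requires an epsilon filter.

# A minimal sketch of weighted transducer composition over the tropical
# semiring (path weights combine by addition). All names are illustrative,
# not from the paper. Epsilon transitions are omitted for brevity.

from collections import namedtuple

Arc = namedtuple("Arc", "src isym osym weight dst")

class WFST:
    def __init__(self, arcs, start, finals):
        self.arcs = arcs      # list of Arc
        self.start = start    # start state
        self.finals = finals  # dict: final state -> final weight

def compose(a, b):
    """Compose a and b: output labels of a match input labels of b.
    Result states are pairs (state of a, state of b); arc weights are
    the tropical product (sum) of the component arc weights."""
    start = (a.start, b.start)
    arcs, finals, seen, stack = [], {}, {start}, [start]
    while stack:
        p, q = stack.pop()
        if p in a.finals and q in b.finals:
            finals[(p, q)] = a.finals[p] + b.finals[q]
        for x in a.arcs:
            if x.src != p:
                continue
            for y in b.arcs:
                if y.src == q and x.osym == y.isym:
                    dst = (x.dst, y.dst)
                    arcs.append(Arc((p, q), x.isym, y.osym,
                                    x.weight + y.weight, dst))
                    if dst not in seen:
                        seen.add(dst)
                        stack.append(dst)
    return WFST(arcs, start, finals)

# Toy example: a pronunciation arc maps phone 'ey' to word 'a', and a
# one-word grammar assigns that word a cost. Composition yields a single
# phones-to-weighted-words transducer.
lex = WFST([Arc(0, "ey", "a", 0.5, 1)], 0, {1: 0.0})
gram = WFST([Arc(0, "a", "a", 1.2, 1)], 0, {1: 0.0})
for arc in compose(lex, gram).arcs:
    print(arc)  # Arc(src=(0, 0), isym='ey', osym='a', weight=1.7, dst=(1, 1))

The pair construction mirrors how the paper builds a single recognition transducer by composing the component models (HMMs, context-dependency, lexicon, grammar); determinization, minimization, and weight pushing would then be applied to the composed result.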

Keywords

weighted finite-state transducers, WFST, speech recognition

Date Posted: 27 September 2004

This document has been peer reviewed.