Encoding structured output values
Many of the Natural Language Processing tasks that we would like to model with machine learning techniques generate structured output values, such as trees, lists, or groupings. These structured output problems can be modeled by decomposing them into a set of simpler sub-problems, with well-defined and well-constrained interdependencies between sub-problems. However, the effectiveness of this approach depends to a large degree on exactly how the problem is decomposed into sub-problems, and on how those sub-problems are divided into equivalence classes.

The notion of output encoding can be used to examine the effects of problem decomposition on learnability for specific tasks. These effects fall into two general classes: local effects and global effects. Local effects, which influence the difficulty of learning individual sub-problems, depend primarily on the coherence of the classes defined by individual output tags. Global effects, which determine the model's ability to learn long-distance dependencies, depend on the information content of the output tags.

Using a canonical encoding as a reference point, we can define additional encodings as reversible transformations from canonically encoded structures to a new set of encoded structures. This allows us to define a space of potential encodings (and, by extension, a space of potential problem decompositions). Using search methods, we can then analyze and improve upon existing problem decompositions.

In this dissertation, I apply automatic and semi-automatic methods to the problem of finding optimal problem decompositions, in the context of five specific systems (three sequence prediction systems and two semantic role labeling systems). Additionally, I show how linear and log-linear voting can be used to combine structured prediction models that use different problem decompositions, and evaluate the effectiveness of the combined systems.
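The idea of a reversible transformation between output encodings can be illustrated with chunk tagging, a standard sequence prediction setting. The sketch below is not taken from the dissertation: the IOB2 and BILOU tag schemes here merely stand in for a "canonical" encoding and one transformed encoding, showing that the mapping is invertible, so no structural information is lost even though the two encodings decompose the problem into different sub-problems.

```python
# Illustrative sketch (assumed example, not the dissertation's encodings):
# two reversible chunk encodings, IOB2 (B-X, I-X, O) and BILOU
# (B/I/L/U/O). BILOU splits IOB2's tag classes further, changing the
# problem decomposition while preserving the underlying chunk structure.

def iob2_to_bilou(tags):
    """Map IOB2 tags to BILOU tags, one chunk structure to another."""
    out = []
    for i, tag in enumerate(tags):
        nxt = tags[i + 1] if i + 1 < len(tags) else "O"
        if tag == "O":
            out.append("O")
        elif tag.startswith("B-"):
            label = tag[2:]
            # U = single-token chunk, B = start of a multi-token chunk
            out.append(("B-" if nxt == "I-" + label else "U-") + label)
        else:  # I-X
            label = tag[2:]
            # L = last token of a chunk, I = interior token
            out.append(("I-" if nxt == "I-" + label else "L-") + label)
    return out

def bilou_to_iob2(tags):
    """Inverse transformation: merge BILOU's extra classes back down."""
    out = []
    for tag in tags:
        if tag == "O":
            out.append("O")
        elif tag[0] in "BU":   # chunk-initial tags collapse to B-X
            out.append("B-" + tag[2:])
        else:                  # I and L collapse to I-X
            out.append("I-" + tag[2:])
    return out

iob2 = ["B-NP", "I-NP", "O", "B-NP", "B-VP", "I-VP", "I-VP"]
bilou = iob2_to_bilou(iob2)
# bilou == ["B-NP", "L-NP", "O", "U-NP", "B-VP", "I-VP", "L-VP"]
assert bilou_to_iob2(bilou) == iob2  # round trip recovers the original
```

Because the transformation is invertible, any tagger trained under either scheme predicts the same space of chunk structures; what changes is how that space is carved into per-token classification sub-problems, which is exactly the axis of variation the abstract describes.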
Loper, Edward, "Encoding structured output values" (2008). Dissertations available from ProQuest. AAI3346159.