Incidental Decomposition for Complex Reasoning
Subject
decomposition
incidental supervision
language model
reasoning
Abstract
The primary impetus for developing Natural Language Processing (NLP) systems is facilitating effective human-computer interaction through natural language. This requires emulating human cognitive processes that involve complex reasoning built on decomposed understanding. Unlike humans, who continually learn from daily experience, decompose their decision-making processes, and form new associations, machine learning systems often struggle to do the same because they lack supervision signals with the granularity of the reasoning processes and explanations that humans naturally form. One straightforward solution would be to ask humans to annotate decomposed reasoning processes, but doing so would be prohibitively expensive and inevitably incomplete. As an alternative, I propose incidental decomposition: decomposed signals that can be automatically acquired from existing resources and that contain finer-grained, more structural detail than end-task labels. As I demonstrate through a series of works in this thesis, incidental decomposition can more effectively emulate human cognition and improve model performance on complex reasoning. I first introduce exploratory work that applies incidental decomposition at inference time for fine-grained entity typing. Building on these inference-time methods, I present two use cases of incidental decomposition as a supervision signal for temporal reasoning and argue for its effectiveness in in-domain applications. I then detail methods that make incidental decomposition generalizable and domain-agnostic. Finally, I discuss the long-term significance of incidental decomposition in the era of large language models.