ESSAYS ON LEARNING IN ECONOMIC THEORY

Degree type
Doctor of Philosophy (PhD)
Graduate group
Economics
Discipline
Economics
Subject
Berk-Nash equilibrium
misspecified learning
model switching
overconfidence
self-confirming equilibrium
Copyright date
2023
Author
Ba, Cuimin
Abstract

This dissertation studies the consequences and foundations of learning with misspecified models.

Chapter 1 studies the long-term interaction between two overconfident agents who choose how much effort to exert while learning about their environment. To justify their worse-than-expected performance, overconfident agents underestimate either a common fundamental, such as the underlying quality of their project, or their counterpart's ability. We show that in many settings, agents create informational externalities for each other. When informational externalities are positive, the agents' learning processes are mutually reinforcing: one agent best responding to his own overconfidence leads the other agent to a more distorted belief and more extreme actions, generating a positive feedback loop. The opposite pattern, mutually limiting learning, arises when informational externalities are negative. We also show that in our multi-agent environment, overconfidence can lead to a Pareto improvement in welfare. Finally, we prove that under certain conditions, agents' beliefs and effort choices converge to a Berk-Nash equilibrium.

Chapter 2 studies which misspecified models are likely to persist when individuals also entertain alternative models. Consider an agent who uses her model to learn the relationship between action choices and outcomes. The agent exhibits sticky model switching, captured by a threshold rule: she switches to an alternative model only when it fits the data she observes sufficiently better than her current one. The main result characterizes whether a model persists in terms of two features that are straightforward to derive from the primitives of the learning environment: the model's asymptotic accuracy in predicting the equilibrium pattern of observed outcomes, and the tightness of the prior around this equilibrium. I show that misspecified models can be robust in that they persist against a wide range of competing models, including the correct model, despite individuals observing an infinite amount of data. Moreover, simple misspecified models with entrenched priors can be even more robust than correctly specified models. I use this characterization to provide a learning foundation for the persistence of systemic biases in two applications.
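To make the Chapter 2 mechanism concrete, here is a minimal simulation sketch. Everything in it is an illustrative assumption rather than material from the dissertation: a Gaussian outcome y = mu(e) + noise, a quadratic effort cost, a misspecified model A whose intercept is chosen so that its prediction is exactly accurate at its own best-response action, and a correctly specified model B. The agent switches models only when the cumulative log-likelihood ratio in favor of B exceeds a stickiness threshold.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative environment (not from the dissertation):
    # outcome y = mu(e) + N(0, 1) noise, payoff u = y - e**2 / 2.
    def true_mean(e):      # true technology: marginal return to effort is 1
        return e

    def mean_A(e):         # misspecified model A: marginal return 2
        return 2 * e - 2   # intercept chosen so A is accurate at its own action

    def mean_B(e):         # correctly specified model B
        return e

    def best_response(mean_fn):
        # argmax over e of mean_fn(e) - e**2 / 2, on a grid for transparency
        grid = np.linspace(0.0, 4.0, 401)
        return grid[np.argmax(mean_fn(grid) - grid ** 2 / 2)]

    def log_lik(y, mu):    # Gaussian log-likelihood up to a constant, sd = 1
        return -0.5 * (y - mu) ** 2

    threshold = 5.0        # stickiness: switch only if B fits better by this much
    llr = 0.0              # cumulative log-likelihood ratio, B minus A
    model = "A"
    for t in range(10_000):
        e = best_response(mean_A if model == "A" else mean_B)
        y = true_mean(e) + rng.standard_normal()
        llr += log_lik(y, mean_B(e)) - log_lik(y, mean_A(e))
        if model == "A" and llr > threshold:
            model = "B"

    print(model, round(llr, 2))   # stays at model A with llr = 0.0

In this sketch, model A's best response is e = 2, where both A and B predict y ~ N(2, 1), so the likelihood ratio accumulates no drift and the misspecified model persists indefinitely, even against the correct model. Changing A's intercept to -1 (so its predicted mean is 2*e - 1) breaks this asymptotic accuracy, the ratio drifts upward, and the agent eventually switches.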

Advisor
Mailath, George
Bohren, Aislinn
Date of degree
2023