Hauser, Daniel N.

  • Publication
    Learning From Strategically Controlled Information
    (2017-01-01) Hauser, Daniel N.
    In the first chapter, "Promoting a Reputation for Quality," I model a firm that manages its reputation for selling high quality products by investing in the quality of the product and by controlling the information consumers observe. As in \citet{BMTV}, quality is persistent and evolves stochastically over time. Consumers do not observe product quality or the firm's actions directly; instead, they form beliefs about the quality of the firm's product based on the information they observe. I focus on two cases: the good news case, where the firm can promote its product by releasing positive information, and the bad news case, where the firm can censor negative information. In each case, I characterize Markov perfect equilibria. In the good news case, promotion and investment are complements: the firm has incentives to invest because it can then promote its product. The firm does not invest in quality or promote at high reputations, invests and promotes at low reputations, and promotes but does not invest at intermediate reputations. This intermediate region reduces the firm's incentives to invest in quality, relative to what would happen if information were exogenous. But reputation effects are persistent: the firm will always eventually have incentives to invest in quality and renew its reputation. In contrast, in the bad news case, censorship and investment are substitutes. The firm can either invest in quality to make bad news less likely or censor the bad news directly. Unless censorship is sufficiently expensive, reputation effects break down and the firm never invests in the quality of its product.

    In the second chapter, "Bounded Rationality and Learning: A Framework and A Robustness Result" (joint with Aislinn Bohren), we investigate how consumers learn from the actions of others. We consider what happens in a social learning environment when agents have potentially misspecified models of the world.
    Agents may misinterpret information they see about the world, and may also misinterpret how others view the world. We develop a set of tools that allow us to analyze asymptotic learning outcomes in the presence of model misspecification. This framework allows us to consider agents with a variety of biases, including level-k models, confirmation bias, partisan bias, and models where agents over- or under-weight the information contained in their private signals.
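    The last kind of bias, over- or under-weighting private signals, can be illustrated with a one-step Bayesian update. This is only a minimal sketch, not the chapter's model: the binary state, the Gaussian signal structure, and the `perceived_sigma` parameter are illustrative assumptions introduced here.

    ```python
    import math

    def posterior(prior, signal, perceived_sigma):
        """Posterior belief that the state is +1 after one private signal.

        The state is +1 or -1, and the signal equals the state plus
        Normal(0, sigma) noise. The agent updates as if the noise had
        standard deviation perceived_sigma: a misspecified agent with
        perceived_sigma below the true sigma over-weights her signal,
        while one with perceived_sigma above it under-weights.
        """
        # Gaussian likelihood ratio f(signal | +1) / f(signal | -1)
        lr = math.exp(2.0 * signal / perceived_sigma ** 2)
        odds = prior / (1.0 - prior) * lr
        return odds / (1.0 + odds)

    # The same signal moves an over-weighting agent's belief much further.
    signal = 1.0  # true noise standard deviation is 1.0 in this example
    correct = posterior(0.5, signal, perceived_sigma=1.0)
    overweight = posterior(0.5, signal, perceived_sigma=0.5)
    ```

    With a flat prior and a signal of 1.0, the correctly specified agent's posterior is about 0.88, while the over-weighting agent's exceeds 0.999. Compounding such distorted updates across a sequence of agents, who may also misread how their predecessors update, is the kind of dynamic the chapter's asymptotic framework is built to analyze.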