This paper investigates whether people optimally value tools that reduce attention costs. We call these tools bandwidth enhancements (BEs) and characterize how demand for BEs varies with the pecuniary incentives to be attentive, under the null hypothesis of correct perceptions and optimal choice. We examine whether the optimality conditions are satisfied in three experiments. The first is a field experiment (n = 1373) with an online education platform, in which we randomize incentives to complete course modules and incentives to utilize a plan-making tool to complete the modules. In the second experiment (n = 2306), participants must complete a survey in the future. We randomize survey-completion incentives and how long participants must wait to complete the survey, and we elicit willingness to pay for reminders. The third experiment (n = 1465) involves a psychometric task in which participants must identify whether there are more correct or incorrect mathematical equations in an image. We vary incentives for accuracy, elicit willingness to pay to reduce task difficulty, and examine the impact of learning and feedback. In all experiments, demand for reducing attention costs increases as incentives for accurate task completion increase. However, in all experiments, and across all conditions, our tests imply that this increase in demand is too small relative to the null of correct perceptions. These results suggest that people may be uncertain or systematically biased about their attention cost functions, and that experience and feedback do not necessarily eliminate bias.
Keywords: attention, attention cost, optimality conditions
All findings, interpretations, and conclusions of this paper represent the views of the authors and not those of the Wharton School or the Pension Research Council. © 2022 Pension Research Council of the Wharton School of the University of Pennsylvania. All rights reserved.
We thank Andrew Caplin, Mark Dean, Xavier Gabaix, Stephen O'Connell, Devin Pope, three grant reviewers at the Russell Sage Foundation, and seminar and conference participants for helpful comments and advice. We thank Alexander Hirsch, Stephanie Nam, Laila Voss, and Caleb Wroblewski for excellent research assistance. We gratefully acknowledge Mike Walmsley and CodeAvengers.com for their support with the education experiment. We gratefully acknowledge research funding from the Russell Sage Foundation, Swarthmore College, the Boettner Center, the Wharton School, the Wharton Behavioral Lab, and the Alfred P. Sloan Foundation.
Date Posted: 06 April 2022