Assessing Credibility In Subjective Probability Judgment

Degree type
Doctor of Philosophy (PhD)
Graduate group
Psychology
Discipline
Psychology
Social and Behavioral Sciences
Subject
Credibility
Forecasting
Judgment and Decision Making
Subjective Probability Judgment
Copyright date
2019
Abstract

Subjective probability judgments (SPJs) are an essential component of decision making under uncertainty. Yet, research shows that SPJs are vulnerable to a variety of errors and biases. From a practical perspective, this exposes decision makers to risk: if SPJs are (reasonably) valid, then expectations and choices will be rational; if they are not, then expectations may be erroneous and choices suboptimal. However, existing methods for evaluating SPJs depend on information that is typically not available to decision makers (e.g., ground truth; correspondence criteria). To address this issue, I develop a method for evaluating SPJs based on a construct I call credibility. At the conceptual level, credibility describes the relationship between an individual’s SPJs and the most defensible beliefs that one could hold, given all available information. Thus, coefficients describing credibility (i.e., “credibility estimates”) ought to reflect an individual’s tendencies towards error and bias in judgment. To determine whether empirical models of credibility can capture this information, this dissertation examines the reliability, validity, and utility of credibility estimates derived from a model that I call the linear credibility framework. In Chapter 1, I introduce the linear credibility framework and demonstrate its potential for validity and utility in a proof-of-concept simulation. In Chapter 2, I apply the linear credibility framework to SPJs from three empirical sources and examine the reliability and validity of credibility estimates as predictors of judgmental accuracy (among other measures of “good” judgment). In Chapter 3, I use credibility estimates from the same three sources to recalibrate and improve SPJs (i.e., increase accuracy) out-of-sample. In Chapter 4, I discuss the robustness of empirical models of credibility and present two studies in which I use exploratory research methods to (a) tailor the linear credibility framework to the data at hand; and (b) boost performance. Across nine studies, I conclude that the linear credibility framework is a robust (albeit imperfect) model of credibility that can provide reliable, valid, and useful estimates of credibility. Because the linear credibility framework is an intentionally weak model, I argue that these results represent a lower bound on the performance of empirical models of credibility more generally.

Advisor
Jonathan Baron
Date of degree
2019-01-01