Gold, Joshua I

Publications
  • Publication
    Effect of Geometric Complexity on Intuitive Model Selection
    (2021-10-01) Piasini, Eugenio; Balasubramanian, Vijay; Gold, Joshua I
Occam’s razor is the principle stating that, all else being equal, simpler explanations for a set of observations are to be preferred to more complex ones. This idea can be made precise in the context of statistical inference, where the same quantitative notion of complexity of a statistical model emerges naturally from different approaches based on Bayesian model selection and information theory. The broad applicability of this mathematical formulation suggests a normative model of decision-making under uncertainty: complex explanations should be penalized according to this common measure of complexity. However, little is known about whether and how humans intuitively quantify the relative complexity of competing interpretations of noisy data. Here we measure the sensitivity of naive human subjects to statistical model complexity. Our data show that human subjects bias their decisions in favor of simple explanations based not only on the dimensionality of the alternatives (number of model parameters), but also on finer-grained aspects of their geometry. In particular, as predicted by the theory, models intuitively judged as more complex are not only those with more parameters, but also those with larger volume and prominent curvature or boundaries. Our results imply that principled notions of statistical model complexity have direct quantitative relevance to human decision-making.
  • Publication
    How Occam’s Razor Guides Human Inference
    (2022-11-21) Piasini, Eugenio; Liu, Shuze; Chaudhari, Pratik; Balasubramanian, Vijay; Gold, Joshua I
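For readers unfamiliar with the geometric notion of complexity that both abstracts invoke, the standard expansion from the Bayesian/minimum-description-length literature (a background sketch, not text quoted from these papers) for a model M with k parameters θ fit to N observations is:

```latex
-\log p(D \mid M) \;\approx\;
  \underbrace{-\log p(D \mid \hat{\theta})}_{\text{goodness of fit}}
  \;+\; \underbrace{\frac{k}{2}\log\frac{N}{2\pi}}_{\text{dimensionality}}
  \;+\; \underbrace{\log \int \! d\theta \,\sqrt{\det g(\theta)}}_{\text{model volume}}
  \;+\; O(1/N)
```

where g(θ) is the Fisher information metric and the higher-order corrections carry the curvature and boundary contributions; these are the dimensionality, volume, curvature, and boundary features of model geometry that the abstracts describe.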
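To make concrete how Bayesian model selection automatically penalizes complexity, as the abstracts above describe, here is a minimal illustrative sketch (toy data and priors chosen for illustration, not taken from the papers): a zero-parameter model versus a one-parameter model for the same data, compared by marginal likelihood.

```python
import numpy as np

def log_lik(mu, data):
    """Gaussian log-likelihood of the data with mean mu and unit variance."""
    return np.sum(-0.5 * np.log(2 * np.pi) - 0.5 * (data - mu) ** 2)

# Toy data: small fluctuations around zero (illustrative only)
data = np.array([0.3, -0.2, 0.1, 0.5, -0.4, 0.2, -0.1, 0.0, 0.4, -0.3])

# Simple model M0: mean fixed at 0, no free parameters,
# so its evidence is just the likelihood.
log_ev_simple = log_lik(0.0, data)

# Complex model M1: mean mu is free, uniform prior on [-3, 3].
# Its evidence averages the likelihood over the prior (Riemann sum);
# spreading prior mass over values of mu the data rule out is the
# automatic Occam penalty for the extra parameter.
mus = np.linspace(-3.0, 3.0, 2001)
dx = mus[1] - mus[0]
lik = np.exp(np.array([log_lik(mu, data) for mu in mus]))
log_ev_complex = np.log(lik.sum() * dx / 6.0)  # uniform prior density = 1/6

bayes_factor = np.exp(log_ev_simple - log_ev_complex)
print(f"Bayes factor (simple vs. complex): {bayes_factor:.2f}")
```

Even though M1 fits slightly better at its best-fitting mean, the marginal likelihood favors M0 by a Bayes factor of roughly 7, illustrating the simplicity preference that the experiments probe in human subjects.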
    Occam’s razor is the principle stating that, all else being equal, simpler explanations for a set of observations are preferred over more complex ones. This idea is central to multiple formal theories of statistical model selection and is posited to play a role in human perception and decision-making, but a general, quantitative account of the specific nature and impact of complexity on human decision-making is still missing. Here we use preregistered experiments to show that, when faced with uncertain evidence, human subjects bias their decisions in favor of simpler explanations in a way that can be quantified precisely using the framework of Bayesian model selection. Specifically, these biases, which were also exhibited by artificial neural networks trained to optimize performance on comparable tasks, reflect an aversion to complex explanations (statistical models of data) that depends on specific geometrical features of those models, namely their dimensionality, boundaries, volume, and curvature. Moreover, the simplicity bias persists for human, but not artificial, subjects even for tasks for which the bias is maladaptive and can lower overall performance. Taken together, our results imply that principled notions of statistical model complexity have direct, quantitative relevance to human and machine decision-making and establish a new understanding of the computational foundations, and behavioral benefits, of our predilection for inferring simplicity in the latent properties of our complex world.