Publication: Discovering Reduced-Order Dynamical Models From Data (2022-04-27)
Qian, William; Chaudhari, Pratik A

This work explores theoretical and computational principles for data-driven discovery of reduced-order models of physical phenomena. We begin by describing the theoretical underpinnings of multi-parameter models through the lens of information geometry. We then explore the behavior of paradigmatic models in statistical physics, including the diffusion equation and the Ising model. In particular, we explore how coarse-graining a system affects the local and global geometry of a “model manifold” which is the set of all models that could be fit using data from the system. We emphasize connections of this idea to ideas in machine learning. Finally, we employ coarse-graining techniques to discover partial differential equations from data. We extend the approach to modeling ensembles of microscopic observables, and attempt to learn the macroscopic dynamics underlying such systems. We conclude each analysis with a computational exploration of how the geometry of learned model manifolds changes as the observed data undergoes different levels of coarse-graining.

Publication: How Occam’s Razor Guides Human Inference (2022-11-21)
Piasini, Eugenio; Liu, Shuze; Chaudhari, Pratik; Balasubramanian, Vijay; Gold, Joshua I

Occam’s razor is the principle stating that, all else being equal, simpler explanations for a set of observations are preferred over more complex ones. This idea is central to multiple formal theories of statistical model selection and is posited to play a role in human perception and decision-making, but a general, quantitative account of the specific nature and impact of complexity on human decision-making is still missing.
Here we use preregistered experiments to show that, when faced with uncertain evidence, human subjects bias their decisions in favor of simpler explanations in a way that can be quantified precisely using the framework of Bayesian model selection. Specifically, these biases, which were also exhibited by artificial neural networks trained to optimize performance on comparable tasks, reflect an aversion to complex explanations (statistical models of data) that depends on specific geometrical features of those models, namely their dimensionality, boundaries, volume, and curvature. Moreover, the simplicity bias persists for human, but not artificial, subjects even for tasks for which the bias is maladaptive and can lower overall performance. Taken together, our results imply that principled notions of statistical model complexity have direct, quantitative relevance to human and machine decision-making and establish a new understanding of the computational foundations, and behavioral benefits, of our predilection for inferring simplicity in the latent properties of our complex world.
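The Bayesian model-selection framework the abstract invokes can be illustrated with a toy sketch. The data, prior, and models below are illustrative choices, not taken from the paper: a zero-parameter model (mean fixed at 0) is compared against a one-parameter model (free mean with a broad uniform prior) on data that both models fit equally well at their best. The marginal likelihood automatically penalizes the flexible model, which is the "Occam factor" underlying the complexity terms (dimensionality, volume, etc.) mentioned above.

```python
import numpy as np

# Synthetic data consistent with mean zero (noise std assumed known = 1).
x = np.linspace(-0.5, 0.5, 10)

def log_lik(mu, x):
    # Gaussian log-likelihood with unit variance.
    return -0.5 * np.sum((x - mu) ** 2) - 0.5 * len(x) * np.log(2 * np.pi)

# Simple model M0: mu fixed at 0, no free parameters.
# Its evidence is just its likelihood.
log_ev_simple = log_lik(0.0, x)

# Complex model M1: mu free, uniform prior on [-5, 5] (density 1/10).
# Its evidence marginalizes the likelihood over the prior.
mus = np.linspace(-5.0, 5.0, 2001)
dmu = mus[1] - mus[0]
lik = np.exp(np.array([log_lik(m, x) for m in mus]))
log_ev_complex = np.log(lik.sum() * dmu / 10.0)

# Both models achieve the same best fit (mu = 0), yet the simpler model
# has higher evidence: the complex model spreads prior mass over mean
# values the data rule out, and pays for that flexibility.
print(log_ev_simple - log_ev_complex)  # positive: Bayes favors M0
```

A subject (human or artificial) whose choices track these log-evidences, rather than raw goodness of fit, will exhibit exactly the kind of quantifiable simplicity bias the study measures.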