PREDICTING EVERYDAY HUMAN JUDGMENT FROM NATURAL LANGUAGE

Degree type
Doctor of Philosophy (PhD)
Graduate group
Psychology
Discipline
Psychology
Copyright date
2022
Author
Zou, Wanling
Abstract

Constructing a good knowledge representation is an essential step in building computational models that predict and explain human behavior, and it has long been a focal interest of many research fields. Recent advances in computer science have made it feasible to derive representations of natural objects that are richer and more robust than those developed by traditional psychometric approaches. In 21 studies involving numerical, category, and social judgments, we test the applicability and adequacy of knowledge representations trained with the Word2Vec algorithm (Mikolov et al., 2013) on Google News articles for predicting everyday human judgments of naturalistic objects. In Chapter 1, we predict human numerical estimation, identify the sources of error in human estimates, and uncover the psychological underpinnings of human judgment. In Chapter 2, we use the Word2Vec knowledge representations as inputs to the Generalized Context Model (Nosofsky, 1984) to predict human learning performance under different learning environments. In Chapter 3, we combine theories of social inference with recommendation algorithms to predict people's predictions about other people. Together, these studies show that Word2Vec knowledge representations closely approximate human knowledge and can serve as inputs to a variety of models that predict human judgments of naturalistic objects and concepts. By integrating techniques from computational linguistics with models from cognitive science, the three chapters scale up the application of psychological theories to real-world domains and inform behavioral interventions for better judgments and decisions.
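The Chapter 2 approach, feeding word embeddings into the Generalized Context Model, can be sketched as follows. This is a minimal illustration, not the dissertation's actual pipeline: the four-dimensional vectors stand in for 300-dimensional Word2Vec embeddings, and the words, category assignments, and sensitivity parameter `c` are hypothetical toy values.

```python
import numpy as np

# Toy 4-dimensional "embeddings" standing in for 300-dimensional
# Word2Vec vectors (hypothetical values, for illustration only).
embeddings = {
    "apple":  np.array([0.9, 0.1, 0.0, 0.2]),
    "banana": np.array([0.8, 0.2, 0.1, 0.1]),
    "hammer": np.array([0.1, 0.9, 0.8, 0.0]),
    "wrench": np.array([0.0, 0.8, 0.9, 0.1]),
}

def gcm_category_prob(probe, exemplars_a, exemplars_b, c=2.0):
    """Generalized Context Model: probability that `probe` belongs to
    category A given stored exemplars of categories A and B.
    Similarity decays exponentially with distance in embedding space."""
    sim = lambda x, y: np.exp(-c * np.linalg.norm(x - y))
    s_a = sum(sim(probe, e) for e in exemplars_a)
    s_b = sum(sim(probe, e) for e in exemplars_b)
    return s_a / (s_a + s_b)

# Classify "banana" against a fruit exemplar ("apple") and a
# contrast category of tools ("hammer", "wrench").
p_fruit = gcm_category_prob(
    embeddings["banana"],
    [embeddings["apple"]],
    [embeddings["hammer"], embeddings["wrench"]],
)
print(f"P(fruit | banana) = {p_fruit:.2f}")
```

Because "banana" lies near "apple" in this toy space, the model assigns it a high probability of belonging to the fruit category, which is the basic mechanism by which embedding-based similarity can drive predicted category judgments.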

Advisor
Bhatia, Sudeep
Date of degree
2022