Artificial language learning experiments provide a unique opportunity to observe learning under controlled conditions. We cannot, however, observe what learning strategy participants use; we can only carefully design the language and observe the response. This poses an inference problem that I name "the poverty of the experiment." I use computational learning models to address this inference problem, drawing on data from an artificial grammar learning study (Saffran 2001) in which the authors conclude that participants learned hierarchical structure from distributional cues. Simulations show that learning hierarchical structure is not required to pass the tests administered in those experiments, and that a heuristic learner is the best fit for the observed human performance. Artificial language learning experiments cannot in themselves provide evidence for a particular learning strategy; they must be paired with appropriate modeling work to confirm that an implementation of a proposed learning strategy actually produces the expected results.
"You Can't Get There from Here: On Interpreting Learning Experiments," University of Pennsylvania Working Papers in Linguistics: Vol. 19, Iss. 1, Article 12.
Available at: http://repository.upenn.edu/pwpl/vol19/iss1/12