Seminar

Predictive and Interpretable: using classic cognitive models and artificial neural networks to understand human learning and decision-making

Practical information
11 January 2024
2pm
Place

ENS, salle Langevin, 29 rue d'Ulm, 75005 Paris

LNC2

Quantitative models of behavior are a fundamental tool in cognitive science. Typically, models are hand-crafted to implement specific cognitive mechanisms. Such "classic" models are interpretable by design, but may provide a poor fit to experimental data. Artificial neural networks (ANNs), by contrast, can fit arbitrary datasets, at the cost of opaque mechanisms. I will present research in the classic tradition that sheds light on the development of learning during childhood and the teen years, as well as studies on hierarchical learning and abstraction. I will then touch on limitations of the classic approach and introduce a new hybrid approach that combines the predictive power of ANNs with the interpretability of classic models. We start with classic RL models and replace their components one by one with ANNs. We find that hybrid models can provide a fit similar to that of fully general ANNs, while retaining the interpretability of classic cognitive models: they reveal reward-based learning mechanisms in humans that differ from classic RL in striking ways. They also reveal mechanisms not contained in classic models, including separate reward-blind mechanisms, and the specific memory contents relevant to reward-based and reward-blind mechanisms.

Poster