André G. Mendonça
Researcher at Champalimaud Neuroscience Programme, Champalimaud Centre for the Unknown
Nature Communications, 2020-06-02
In standard models of perceptual decision-making, noisy sensory evidence is considered the primary source of choice errors, and the accumulation of evidence needed to overcome this noise gives rise to speed-accuracy tradeoffs. Here, we investigated how the history of recent choices and their outcomes interacts with these processes, using a combination of theory and experiment. We found that the speed and accuracy of rats performing olfactory decision tasks were best explained by a Bayesian model that combines reinforcement-based learning with accumulation of uncertain sensory evidence. This model predicted the specific pattern of trial-history effects found in the data. The results suggest that learning is a critical factor contributing to speed-accuracy tradeoffs in decision-making, and that task-history effects are not simply biases but rather the signatures of an optimal learning strategy.
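The speed-accuracy tradeoff described above can be illustrated with a minimal drift-diffusion simulation: evidence accumulates noisily until it hits a decision bound, and raising the bound buys accuracy at the cost of reaction time. This is a generic sketch of the standard model class the abstract refers to, not the paper's own (Bayesian, learning-augmented) model; all parameter values here are arbitrary choices for illustration.

```python
import random

def ddm_trial(drift, threshold, rng, noise=1.0, dt=0.01):
    """Simulate one drift-diffusion trial; returns (choice, reaction_time).

    Evidence x starts at 0 and accumulates drift plus Gaussian noise
    until it crosses +threshold (choice +1) or -threshold (choice -1).
    """
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return (1 if x > 0 else -1), t

def run(threshold, n_trials=2000, drift=1.0, seed=0):
    """Return (accuracy, mean reaction time) over n_trials simulated trials.

    With positive drift, choice +1 is 'correct'.
    """
    rng = random.Random(seed)
    n_correct, total_rt = 0, 0.0
    for _ in range(n_trials):
        choice, rt = ddm_trial(drift, threshold, rng)
        n_correct += (choice == 1)
        total_rt += rt
    return n_correct / n_trials, total_rt / n_trials

# A higher decision bound trades speed for accuracy:
acc_lo, rt_lo = run(threshold=0.5)   # fast but error-prone
acc_hi, rt_hi = run(threshold=1.5)   # slower but more accurate
```

In this toy setting, the `threshold=1.5` condition yields both higher accuracy and longer mean reaction times than `threshold=0.5`, which is the tradeoff the accumulation account predicts.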
Proceedings of the National Academy of Sciences, 2019-11-15
Diffusion decision models (DDMs) are immensely successful models for decision-making under uncertainty and time pressure. In the context of perceptual decision-making, these models typically start with two input units, organized in a neuron-antineuron pair. In contrast, in the brain, sensory inputs are encoded through the activity of large neuronal populations. Moreover, while DDMs are wired by hand, the nervous system must learn the weights of the network through trial and error. There is currently no normative theory of learning in DDMs and therefore no theory of how decision makers could learn to make optimal decisions in this context. Here, we derive the first such rule for learning a near-optimal linear combination of DDM inputs based on trial-by-trial feedback. The rule is Bayesian in the sense that it learns not only the mean of the weights but also the uncertainty around this mean in the form of a covariance matrix. In this rule, the rate of learning is proportional (resp. inversely proportional) to confidence for incorrect (resp. correct) decisions. Furthermore, we show that, in volatile environments, the rule predicts a bias towards repeating the same choice after correct decisions, with a bias strength that is modulated by the previous choice's difficulty. Finally, we extend our learning rule to cases for which one of the choices is more likely a priori, which provides new insights into how such biases modulate the mechanisms leading to optimal decisions in diffusion models.
Significance Statement: Popular models for the tradeoff between speed and accuracy of everyday decisions usually assume fixed, low-dimensional sensory inputs. In contrast, in the brain, these inputs are distributed across larger populations of neurons, and their interpretation needs to be learned from feedback. We ask how such learning could occur and demonstrate that efficient learning is significantly modulated by decision confidence. This modulation predicts a particular dependency pattern between consecutive choices, and provides new insight into how a priori biases for particular choices modulate the mechanisms leading to efficient decisions in these models.
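The qualitative asymmetry described in this abstract — learn more from a confident error, less from a confident correct choice — can be sketched with a simple confidence-scaled delta rule. This is an illustrative simplification, not the paper's derived Bayesian rule (which also tracks a covariance matrix over the weights); the function name, the `base_lr` parameter, and the linear scalings by `confidence` and `1 - confidence` are all assumptions made for this sketch.

```python
def confidence_modulated_update(w, x, outcome_correct, confidence, base_lr=0.1):
    """Toy weight update whose learning rate depends on decision confidence.

    w: list of current weights; x: input (error-gradient direction) for this trial.
    confidence: scalar in [0, 1] for the choice just made.
    A confident error is a large surprise -> large update; a confident
    correct choice carries little new information -> small update.
    (Illustrative sketch only, not the paper's full Bayesian rule.)
    """
    if outcome_correct:
        lr = base_lr * (1.0 - confidence)  # confident + correct: little to learn
    else:
        lr = base_lr * confidence          # confident + wrong: learn a lot
    return [wi + lr * xi for wi, xi in zip(w, x)]

# Confident error moves the weights far more than a confident correct choice:
w0 = [0.5, -0.2]
x = [1.0, 1.0]
w_err = confidence_modulated_update(w0, x, outcome_correct=False, confidence=0.9)
w_ok = confidence_modulated_update(w0, x, outcome_correct=True, confidence=0.9)
```

Under this sketch, the same high-confidence trial produces a much larger weight change when the feedback says "wrong" than when it says "correct", which is the confidence modulation the abstract highlights.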