Theoretical and Computational Neuroscience
Author: Josefina Catoni | Email: jcatoni@sinc.unl.edu.ar
Josefina Catoni 1, Domonkos Martos 2, Enzo Ferrante 1,3, Diego H. Milone 1, Ferenc Csikor 2, Balázs Meszéna 2, Gergő Orbán 2, Rodrigo Echeveste 1
1 Research Institute for Signals, Systems and Computational Intelligence, sinc(i), CONICET-UNL, Santa Fe, Argentina
2 Computational Systems Neuroscience Lab, Department of Computational Sciences, HUN-REN Wigner Research Centre for Physics, Budapest, Hungary
3 Department of Computer Science, University of Buenos Aires, Buenos Aires, Argentina
Deep learning methods are increasingly instrumental as modeling tools in computational neuroscience, employing optimality principles to build bridges between neural responses and perception or behavior. However, developing models that adequately represent uncertainty is challenging for deep learning methods, which often suffer from calibration problems. This is particularly problematic when modeling cortical circuits in terms of Bayesian inference, where responses must capture more than single point estimates such as the posterior mean or the maximum a posteriori estimate. In this work we systematically studied how uncertainty is represented in the latent space of variational auto-encoders (VAEs), both in a perceptual task based on natural images and in two other canonical computer vision tasks, finding poor alignment between reported uncertainty and the informativeness or ambiguity of the images. We then showed how a novel approach, which we call explaining-away variational auto-encoders (EA-VAEs), fixes these issues, producing meaningful reports of uncertainty in a variety of scenarios, including interpolation, image corruption, and even out-of-distribution detection. We show that EA-VAEs may prove useful both as models of perception in computational neuroscience and as inference tools in computer vision.
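For readers less familiar with how uncertainty is read out from a VAE, the minimal sketch below illustrates the standard setup assumed in the abstract: a Gaussian recognition model whose per-image posterior variance serves as the uncertainty report. This is a generic illustration, not the EA-VAE architecture itself, and all names (SimpleVAEEncoder, latent_dim, uncertainty_report) are hypothetical.

```python
# Minimal, illustrative sketch (not the EA-VAE method): a standard VAE encoder
# with a Gaussian recognition model q(z|x) = N(mu(x), diag(sigma^2(x))).
# The per-image posterior variance sigma^2(x) is the quantity whose alignment
# with image informativeness/ambiguity is examined in the abstract.
import torch
import torch.nn as nn

class SimpleVAEEncoder(nn.Module):
    def __init__(self, in_dim=784, hidden_dim=256, latent_dim=16):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.mu_head = nn.Linear(hidden_dim, latent_dim)      # posterior mean
        self.logvar_head = nn.Linear(hidden_dim, latent_dim)  # posterior log-variance

    def forward(self, x):
        h = self.backbone(x.flatten(start_dim=1))
        return self.mu_head(h), self.logvar_head(h)

def uncertainty_report(encoder, x):
    """Summarize per-image uncertainty as the mean posterior variance."""
    with torch.no_grad():
        _, logvar = encoder(x)
        return logvar.exp().mean(dim=1)  # one scalar per image

# Usage (encoder is untrained here, so the numbers are purely illustrative):
# a well-calibrated model should report larger uncertainty for corrupted or
# ambiguous images than for clean, informative ones.
encoder = SimpleVAEEncoder()
clean = torch.rand(8, 784)
corrupted = clean + 0.5 * torch.randn_like(clean)
print(uncertainty_report(encoder, clean))
print(uncertainty_report(encoder, corrupted))
```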