GNT external seminar series

Latent Variable Models of Sensory Representations

Practical information
23 May 2024

ENS, Salle U209, 29 rue d'Ulm, 75005 Paris



Ironically, brains and neuroscientists face a similar challenge: extracting and explaining structure in high-dimensional, unlabelled and noisy observations. Unsupervised probabilistic approaches are therefore valuable both for analysing increasingly large datasets of neural recordings and as models of perception and learning. In particular, they offer principled ways to reason about the structure of the world, they are robust to noise, and they make it possible to incorporate prior knowledge about the observations.

In this talk, I present two such methods developed during my doctoral studies. First, I describe an extension of tensor decomposition methods that finds a low-rank factorization of neural spike counts arranged in multidimensional arrays. Applied to neural recordings from a visual-vestibular sensory integration task, the model segregates the influences of temporal dynamics and experimental condition on population activity. Then, I introduce a new class of latent variable models for representational learning that bypasses the need to parametrise observation generation: recognition-parametrised models. I conclude with a proof of concept on publicly available neural recordings for studying the organization of visual features in inferior temporal cortex.
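For readers unfamiliar with the tensor methods mentioned in the abstract, the core idea of a low-rank factorization of a multidimensional spike-count array can be illustrated with standard CP/PARAFAC decomposition fitted by alternating least squares. The sketch below is a minimal numpy implementation, not the speaker's extended model; the array dimensions (trials x neurons x time bins) and all names are illustrative assumptions.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product of two factor matrices."""
    return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

def cp_als(T, rank, n_iter=200, seed=0):
    """Rank-`rank` CP/PARAFAC decomposition of a 3-way array via
    alternating least squares (a standard textbook algorithm)."""
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((dim, rank)) for dim in T.shape]
    for _ in range(n_iter):
        for mode in range(3):
            # Khatri-Rao of the other two factors, in ascending mode order,
            # which matches numpy's C-order flattening in `unfold`.
            others = [factors[m] for m in range(3) if m != mode]
            kr = khatri_rao(others[0], others[1])
            # Solve unfold(T, mode).T ~= kr @ factors[mode].T in least squares.
            factors[mode] = np.linalg.lstsq(kr, unfold(T, mode).T,
                                            rcond=None)[0].T
    return factors

# Synthetic "trials x neurons x time bins" array with known rank-2 structure
# (purely illustrative data, not recordings from the talk).
rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((d, 2)) for d in (8, 10, 12))
T = np.einsum('ir,jr,kr->ijk', A, B, C)

Ah, Bh, Ch = cp_als(T, rank=2)
T_hat = np.einsum('ir,jr,kr->ijk', Ah, Bh, Ch)
rel_err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
```

On noiseless low-rank data, the alternating updates recover the tensor to high accuracy; the per-mode factor matrices then play the role of trial, neuron, and temporal loadings in the decomposition described in the talk.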