Seminar

Invariant and hierarchical computation in human auditory cortex

Practical information
27 September 2018
2:00 pm
Location

Salle des Actes, ENS, 45 rue d'Ulm

LSP

Just by listening, humans infer a host of useful information about events in the world. Much is known about peripheral auditory processing, but auditory cortex remains poorly understood, particularly in computational terms. In this talk, I will describe my work exploring the computational properties of cortical responses through the lens of invariance.

I will first discuss how we developed an improved model of cortical responses by optimizing a hierarchical convolutional neural network to perform multiple real-world invariant recognition tasks. Despite not being trained to fit behavioral or neural data, this task-optimized network replicated human auditory behavior, predicted neural responses, and revealed a cortical processing hierarchy, with distinct network layers mimicking responses in distinct parts of auditory cortex.
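
To make the idea of a task-optimized hierarchical network concrete, here is a minimal sketch in PyTorch: a small convolutional network with shared early layers and two task-specific branches, trained only on recognition tasks rather than fit to neural data. The specific layer sizes, the two tasks (word and genre recognition), and the cochleagram-like input format are assumptions for illustration, not the actual architecture presented in the talk.

```python
# Illustrative sketch only: a small branched convolutional network trained
# jointly on two auditory recognition tasks. Layer sizes, tasks, and the
# cochleagram-like input are assumptions, not the model from the talk.
import torch
import torch.nn as nn

class BranchedAudioCNN(nn.Module):
    def __init__(self, n_words=500, n_genres=40):
        super().__init__()
        # Shared early stages (analogous to earlier cortical processing).
        self.shared = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Task-specific branches (analogous to later, non-primary stages).
        self.word_head = nn.Sequential(
            nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_words),
        )
        self.genre_head = nn.Sequential(
            nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_genres),
        )

    def forward(self, cochleagram):
        shared = self.shared(cochleagram)
        return self.word_head(shared), self.genre_head(shared)

# One optimization step on random stand-in data: the network only ever sees
# the recognition tasks, never behavioral or neural data.
model = BranchedAudioCNN()
x = torch.randn(8, 1, 128, 128)            # batch of fake cochleagrams
word_labels = torch.randint(0, 500, (8,))
genre_labels = torch.randint(0, 40, (8,))
word_logits, genre_logits = model(x)
loss = nn.functional.cross_entropy(word_logits, word_labels) + \
       nn.functional.cross_entropy(genre_logits, genre_labels)
loss.backward()
```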

In part motivated by analyses probing the network’s representations, I will then describe our work studying the neural basis of a central challenge in everyday listening: hearing sources of interest embedded in real-world background noises (e.g., a bustling coffee shop, crickets chirping). We use fMRI to compare voxel responses to natural sounds with and without real-world background noise, finding that non-primary auditory cortex is substantially more noise-robust than primary areas. These results demonstrate a representational consequence of auditory hierarchical organization.
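
As a toy illustration of the kind of voxel-wise comparison described above, the sketch below correlates each voxel's responses to the same natural sounds presented in quiet and embedded in background noise; a value near 1 marks a noise-robust voxel. The random stand-in data and this particular correlation metric are assumptions for illustration, not necessarily the analysis used in the study.

```python
# Toy illustration of one way to quantify voxel-wise noise robustness:
# correlate each voxel's responses across sounds, with vs. without noise.
# Data are random stand-ins; the metric is an assumption for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_sounds, n_voxels = 30, 1000
resp_clean = rng.normal(size=(n_sounds, n_voxels))   # responses to sounds in quiet
resp_noisy = resp_clean + rng.normal(size=(n_sounds, n_voxels))  # with background noise

def voxelwise_noise_robustness(clean, noisy):
    """Pearson correlation across sounds, computed separately for each voxel.

    Values near 1 mean the voxel responds almost identically with and without
    background noise (noise-robust); lower values mean the noise substantially
    alters the voxel's response pattern.
    """
    clean_z = (clean - clean.mean(axis=0)) / clean.std(axis=0)
    noisy_z = (noisy - noisy.mean(axis=0)) / noisy.std(axis=0)
    return (clean_z * noisy_z).mean(axis=0)

robustness = voxelwise_noise_robustness(resp_clean, resp_noisy)
print(robustness.shape)   # one robustness value per voxel
print(robustness[:5])
```

Mapping such values onto primary versus non-primary regions is one way to visualize the gradient of noise robustness described above.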

Together, this work suggests a multi-stage hierarchy of auditory cortical computation and begins to characterize the properties of these computations.