ENS, Jaurès building, U205 room, 29 rue d'Ulm, 75005 Paris
Josh McDermott (MIT): "Computational auditory scene analysis as causal inference"
A central computational challenge of everyday hearing is the need to separate the distinct causes of sound in the world. The most commonly discussed version of this problem occurs with concurrent sound sources and is often termed the ‘cocktail party problem’. However, analogous problems are posed by reverberation, in which the sound from a source interacts with the environment (via reflections) on its way to the ears, and by sound-generating object interactions, in which the physical properties of multiple objects jointly determine the sound. Dating back to Helmholtz, perceptual judgments have been considered the result of unconscious inference, in which our perceptual systems determine the most likely causes of sensory stimuli in terms of structures and events in the world. Despite the conceptual appeal of this view, perceptual inference has historically been difficult to instantiate in working computational systems for all but the simplest perceptual judgments. In this talk I will revisit the notion of scene analysis as inference, leveraging recent computational developments that make inference newly feasible, and exploring neglected classes of everyday scene analysis problems alongside classical auditory scene analysis.
Jennifer Bizley (UCL): "Invariance and Noise Tolerance in Auditory Cortex"
The ability to recognize sounds in noise is a key part of hearing. Yet we know little about the necessity of regions such as auditory cortex for hearing in noise, or about how cortical processing of sounds is adversely affected by noise. Here we used reversible cortical inactivation and extracellular electrophysiology in ferrets performing a vowel discrimination task to identify and understand the causal contribution of auditory cortex to hearing in noise. Cortical inactivation by cooling impaired task performance in noisy but not clean conditions, while responses of auditory cortical neurons were less informative about vowel identity in noise. Simulations mimicking cortical inactivation indicated that the effects of inactivation were related to the loss of information about sounds represented across neural populations. The addition of noise to target sounds drove spiking activity in auditory cortex and recruited additional neural populations that were linked to degraded behavioral performance. To suppress noise-related activity, we used continuous exposure to background noise to adapt the auditory system and recover behavioral performance in both ferrets and humans. Inactivation by cooling revealed that the benefits of continuous exposure were not cortically dependent. Together, our results highlight the importance of auditory cortex for sound discrimination in noise and the underlying mechanisms through which noise-related activity and adaptation shape hearing.