Jaurès, 29 rue d'Ulm
Just by listening, humans can determine who is talking to them, whether a window in their house is open or shut, or what their child dropped on the floor in the next room. This ability to derive information from sound is enabled by a cascade of neuronal processing stages that transform the sound waveform entering the ear into cortical representations that are presumed to make behaviorally important sound properties explicit. Although much is known about the peripheral processing of sound, the auditory cortex is less understood, particularly in computational terms, and particularly in humans. This talk will describe our recent efforts to develop and test models of auditory cortical computation. I will describe our attempts to improve on existing models using deep neural networks trained to recognize speech and music, the development of new methods to test these models, and the use of these models to delineate function within auditory cortex.
To meet Josh McDermott, please contact Yves Boubenec: firstname.lastname@example.org