GNT seminar series

Biologically inspired alternatives to backpropagation through time for learning in recurrent neural nets

Guillaume Bellec (Wolfgang Maass' lab)
Practical information
18 April 2019

ENS, room 235B, 24 rue Lhomond, 75005 Paris


How recurrently connected networks of spiking neurons in the
brain acquire powerful information-processing capabilities through
learning has remained a mystery. This lack of understanding is linked
to a lack of learning algorithms for recurrent networks of spiking
neurons (RSNNs) that are both functionally powerful and can be
implemented by known biological mechanisms. The gold standard for
learning in recurrent neural networks in machine learning is
back-propagation through time (BPTT), which implements stochastic
gradient descent with respect to a given loss function. But BPTT is
unrealistic from a biological perspective, since it requires
transmitting error signals backwards in time and in space. We show
that merging locally available information online during a
computation with suitable top-down learning signals in real time
yields highly capable approximations of BPTT. For tasks where
information about errors arises only late during a network
computation, we enrich the locally available information with
feedforward eligibility traces of synapses that are easily computed
online. The resulting new generation of learning algorithms for
recurrent neural networks provides a new understanding of network
learning in the brain that can be tested experimentally.
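The forward-in-time update scheme the abstract describes can be sketched roughly as follows. This is a minimal toy illustration only, not the speaker's actual algorithm: the leaky (non-spiking) unit model, the trace that low-pass filters presynaptic activity, and the random stand-in for the top-down learning signal are all assumptions made for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_rec = 3, 4
alpha = 0.9                          # leak factor of the recurrent units (assumed)
W_in = rng.normal(scale=0.1, size=(n_rec, n_in))

# One eligibility trace per input synapse, updated strictly forward in
# time -- no backward pass through the time steps is ever needed.
elig = np.zeros((n_rec, n_in))
grad = np.zeros((n_rec, n_in))

h = np.zeros(n_rec)
for t in range(20):
    x = rng.normal(size=n_in)        # input at time step t
    h = alpha * h + W_in @ x         # leaky recurrent state (toy, non-spiking)
    # Trace of each synapse: low-pass filtered presynaptic activity,
    # computable online from locally available quantities.
    elig = alpha * elig + x[None, :]
    # Top-down learning signal; here a random stand-in for the error
    # signal broadcast to the network in the abstract's scheme.
    L = rng.normal(size=n_rec)
    # Merge the learning signal with the local traces to accumulate
    # an online approximation of the loss gradient.
    grad += L[:, None] * elig

W_in -= 0.01 * grad                  # one gradient-descent step
print(grad.shape)                    # (4, 3)
```

The point of the sketch is structural: every quantity is updated at time t from quantities available at time t, so the weight update needs no transmission of error signals backwards in time.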