It is generally assumed that the brain’s computational capacities derive mostly from the structure of neural circuits—how it is wired—and from process(es) that rewire it in response to experience. The computationally relevant properties ascribed to the neuron itself have not changed in more than a century: It is a leaky integrator with a threshold on its output (Sherrington, 1906). The concepts at the core of molecular biology were undreamed of in Sherrington’s philosophy. They have transformed biological thinking in the last half century. But they play little role in theorizing about how nervous tissue computes. The possibility that the neuron is a full-blown computing machine in its own right, able to store acquired information and to perform complex computations on it, has barely been bruited. I urge us to consider it.
My reasons are: 1) The hypothesis that acquired information is stored in altered synapses is a conceptual dead end. In more than a century, no one has explained even in principle how altered synapses can carry information forward in time in a computationally accessible form. 2) It is easy to suggest several different models for how molecules known to exist inside cells can carry acquired information in a computationally accessible form. 3) The logic gates out of which all computation may be built are known to be implemented at the molecular level inside cells. 4) Implementing memory and computation at the molecular level increases the speed (operations/s), energy efficiency (operations/J), and spatial efficiency (bits/m³) of computation and memory by many orders of magnitude. 5) Recent experimental findings strongly suggest that (at least some) memory resides inside the neuron.
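The universality claim behind point 3 is that all Boolean computation can be composed from a single gate. A minimal sketch (in Python; the function names are mine, for illustration only) of building NOT, AND, and OR from NAND:

```python
# Any Boolean function can be composed from NAND alone, which is why
# finding one universal gate implemented molecularly suffices in principle.
# Function names are illustrative, not drawn from the text.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a: bool) -> bool:
    # NAND of a signal with itself inverts it.
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    # AND is an inverted NAND.
    return not_(nand(a, b))

def or_(a: bool, b: bool) -> bool:
    # De Morgan: a OR b = NOT(NOT a AND NOT b).
    return nand(not_(a), not_(b))

# Exhaustive check against Python's built-in operators:
for a in (False, True):
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
    assert not_(a) == (not a)
```

The same composition argument carries over to any physical substrate, molecular or electronic, that realizes one universal gate reliably.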