Tempotron


The Tempotron is a supervised synaptic learning algorithm which is applied when the information is encoded in spatiotemporal spiking patterns. It is an advancement of the perceptron, which does not incorporate a spike-timing framework.
There is a general consensus that spike timing plays a crucial role in the development of synaptic efficacy for many different kinds of neurons. Therefore, a large variety of spike-timing-dependent plasticity (STDP) rules has been developed, one of which is the tempotron.

Algorithm

Assuming a leaky integrate-and-fire model, the membrane potential $V(t)$ of the neuron can be described by

$$V(t) = \sum_i \omega_i \sum_{t_i} K(t - t_i) + V_{\mathrm{rest}},$$

where $t_i$ denotes the spike times of the $i$-th afferent synapse with synaptic efficacy $\omega_i$, and $V_{\mathrm{rest}}$ is the resting potential. The kernel $K(t - t_i)$ describes the postsynaptic potential (PSP) elicited by each incoming spike:

$$K(t - t_i) = \begin{cases} V_0 \left( \exp\!\left[-\dfrac{t - t_i}{\tau}\right] - \exp\!\left[-\dfrac{t - t_i}{\tau_s}\right] \right) & t \geq t_i, \\ 0 & t < t_i, \end{cases}$$

with the parameters $\tau$ and $\tau_s$ denoting the decay time constants of the membrane integration and of the synaptic currents, respectively. The factor $V_0$ is used for the normalization of the PSP kernels. When the potential crosses the firing threshold $V_{\mathrm{th}}$, it is reset to its resting value by shunting all incoming spikes.
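To make the dynamics concrete, the following is a minimal Python sketch of the PSP kernel and the resulting membrane potential. The parameter values ($\tau$ = 15 ms, $\tau_s = \tau/4$), the choice to normalize the kernel to a peak of 1, and the function names psp_kernel and membrane_potential are illustrative assumptions, not values or names prescribed by this article.

```python
import numpy as np

TAU = 15.0         # membrane integration time constant in ms (assumed value)
TAU_S = TAU / 4.0  # synaptic current time constant in ms (assumed value)

# Time at which the un-normalized kernel peaks, and the factor V_0 that
# scales the peak to 1 (one common normalization choice).
_T_PEAK = (TAU * TAU_S / (TAU - TAU_S)) * np.log(TAU / TAU_S)
V_0 = 1.0 / (np.exp(-_T_PEAK / TAU) - np.exp(-_T_PEAK / TAU_S))

def psp_kernel(dt):
    """K(dt): postsynaptic potential elicited by a spike that arrived dt ms ago."""
    dt = np.asarray(dt, dtype=float)
    pos = np.clip(dt, 0.0, None)  # avoid overflow for negative dt
    return np.where(dt >= 0.0,
                    V_0 * (np.exp(-pos / TAU) - np.exp(-pos / TAU_S)),
                    0.0)

def membrane_potential(t, weights, spike_times, v_rest=0.0):
    """V(t) = sum_i w_i * sum_{t_i} K(t - t_i) + V_rest.

    weights:     synaptic efficacies w_i, one per afferent.
    spike_times: list of arrays; spike_times[i] holds the spike times of afferent i.
    """
    v = v_rest
    for w_i, t_i in zip(weights, spike_times):
        v += w_i * psp_kernel(t - np.asarray(t_i, dtype=float)).sum()
    return v
```

This sketch evaluates the subthreshold potential only; the reset by shunting after an output spike is omitted.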
Next, a binary classification of the input patterns is needed: patterns of the (+) class should make the neuron fire at least one spike, whereas patterns of the (−) class should leave it silent. In the beginning, the neuron does not know which pattern belongs to which class and has to learn this iteratively, similar to the perceptron. The tempotron learns its task by adapting the synaptic efficacies $\omega_i$. If a (+) pattern is presented and the postsynaptic neuron did not spike, all synaptic efficacies are increased by $\Delta\omega_i$, whereas a (−) pattern followed by a postsynaptic response leads to a decrease of the synaptic efficacies by $\Delta\omega_i$, with

$$\Delta\omega_i = \lambda \sum_{t_i < t_{\max}} K(t_{\max} - t_i).$$

Here $t_{\max}$ denotes the time at which the postsynaptic potential $V(t)$ reaches its maximal value, and $\lambda$ is a learning-rate constant that sets the size of the update.
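Under the same assumptions, a single learning step could look like the sketch below. It reuses the hypothetical psp_kernel and membrane_potential functions from the previous sketch; the learning rate lr, the threshold value, and the time grid used to locate $t_{\max}$ are illustrative choices, and the maximum of the unshunted potential is used as an approximation of $t_{\max}$.

```python
import numpy as np

def tempotron_update(weights, spike_times, is_plus_pattern,
                     v_thresh=1.0, lr=1e-3,
                     t_grid=np.arange(0.0, 500.0, 0.1)):
    """One error-correcting tempotron step for a single input pattern.

    is_plus_pattern: True for a (+) pattern (the neuron should fire),
                     False for a (-) pattern (it should stay silent).
    Returns an updated copy of the weight vector.
    """
    # Evaluate V(t) on a time grid; for simplicity the post-spike shunting
    # is ignored when locating the maximum of the potential.
    v = np.array([membrane_potential(t, weights, spike_times) for t in t_grid])
    fired = bool(np.any(v >= v_thresh))

    # Correctly classified patterns leave the efficacies unchanged.
    if fired == is_plus_pattern:
        return np.array(weights, dtype=float)

    # Error case: find t_max and change each efficacy by
    # lr * sum_{t_i < t_max} K(t_max - t_i), increasing on a missed (+)
    # pattern and decreasing on a false alarm for a (-) pattern.
    t_max = t_grid[np.argmax(v)]
    sign = 1.0 if is_plus_pattern else -1.0
    new_weights = np.array(weights, dtype=float)
    for i, t_i in enumerate(spike_times):
        t_i = np.asarray(t_i, dtype=float)
        new_weights[i] += sign * lr * psp_kernel(t_max - t_i[t_i < t_max]).sum()
    return new_weights
```

Repeating such an update over a set of labeled input patterns, presented in random order, corresponds to the iterative learning described above.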
It should be mentioned that the tempotron learning rule is a special case of a rule from an older paper which dealt with continuous inputs.