Spiking neural network


Spiking neural networks (SNNs) are artificial neural networks that more closely mimic natural neural networks. In addition to neuronal and synaptic state, SNNs incorporate the concept of time into their operating model. The idea is that neurons in the SNN do not fire at each propagation cycle, but rather fire only when a membrane potential – an intrinsic quality of the neuron related to its membrane electrical charge – reaches a specific value. When a neuron fires, it generates a signal that travels to other neurons which, in turn, increase or decrease their potentials in accordance with this signal.
In the context of spiking neural networks, the current activation level is normally considered to be the neuron's state, with incoming spikes pushing this value higher until the neuron either fires or the value decays. Various coding methods exist for interpreting the outgoing spike train as a real-valued number, relying on either the frequency of spikes or the interval between spikes to encode information.
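To make these dynamics concrete, the following minimal Python sketch simulates a leaky integrate-and-fire (LIF) neuron, one of the simplest spiking models: the membrane potential integrates input current, leaks back toward rest, and emits a spike whenever it crosses a threshold. All parameter values are illustrative choices, not values taken from this article.

    import numpy as np

    def simulate_lif(input_current, dt=1e-3, tau=0.02,
                     v_rest=0.0, v_thresh=1.0, v_reset=0.0):
        """Minimal leaky integrate-and-fire neuron (illustrative constants)."""
        v = v_rest
        spike_times = []
        for step, i_in in enumerate(input_current):
            # Leaky integration: dv/dt = (-(v - v_rest) + i_in) / tau
            v += dt * (-(v - v_rest) + i_in) / tau
            if v >= v_thresh:                  # threshold reached: fire
                spike_times.append(step * dt)  # record the spike time
                v = v_reset                    # reset the membrane potential
        return spike_times

    # A constant supra-threshold input produces a regular spike train.
    print(simulate_lif(np.full(1000, 1.5))[:5])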

History

Artificial neural networks are usually fully connected, receiving input from every neuron in the previous layer and signalling every neuron in the subsequent layer. Although these networks have achieved breakthroughs in many fields, they are biologically inaccurate and do not mimic the mechanism by which neurons in a living brain operate.
The biologically-inspired Hodgkin–Huxley model of a spiking neuron was proposed in 1952. This model describes how action potentials are initiated and propagated. Communication between neurons, which requires the exchange of chemical neurotransmitters in the synaptic gap, is described in various models, such as the integrate-and-fire model, FitzHugh–Nagumo model, and Hindmarsh–Rose model.
In July 2019, at the DARPA Electronics Resurgence Initiative summit, Intel unveiled an 8-million-neuron neuromorphic system comprising 64 Loihi research chips.

Underpinnings

From an information-theoretic perspective, the problem is to explain how information is encoded and decoded by a series of pulse trains, i.e. action potentials. Thus, a fundamental question of neuroscience is whether neurons communicate by a rate code or a temporal code. Temporal coding suggests that a single spiking neuron can replace hundreds of hidden units in a sigmoidal neural net.
A spiking neural network considers temporal information. The idea is that not all neurons are activated in every iteration of propagation, but only those whose membrane potential reaches a certain value. When a neuron is activated, it produces a signal that is passed to connected neurons, raising or lowering their membrane potential.
In a spiking neural network, the neuron's current state is defined as its level of activation. An input pulse causes the state value to rise for a period of time and then gradually decline. Encoding schemes have been constructed to interpret these output pulse sequences as a number, taking into account both pulse frequency and pulse interval. A neural network model based on the precise timing of pulse generation can thus be constructed; by using the exact time of pulse occurrence, such a spike-coded network can exploit more information and offer stronger computing power.
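As a rough illustration of the two families of decoding schemes, the sketch below reads a real value out of a list of spike times either by counting spikes (a rate code) or from the latency of the first spike (a simple temporal code). The conventions here are assumptions for illustration, not canonical definitions.

    def rate_decode(spike_times, window):
        # Rate code: the value is the firing frequency over the window (Hz).
        return len(spike_times) / window

    def latency_decode(spike_times, window):
        # Latency code: an earlier first spike means a larger value;
        # a silent neuron decodes to 0 (an assumed convention).
        if not spike_times:
            return 0.0
        return 1.0 - min(spike_times) / window

    spikes = [0.013, 0.027, 0.041, 0.055]        # spike times in seconds
    print(rate_decode(spikes, window=0.1))       # 40.0 spikes/s
    print(latency_decode(spikes, window=0.1))    # 0.87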
Pulse-coupled neural networks (PCNNs) are often confused with SNNs. A PCNN can be seen as a kind of SNN.
The SNN approach uses a binary output instead of the continuous output of traditional ANNs. Further, pulse trains are not easily interpretable, but they increase the network's ability to process spatiotemporal data. Space refers to the fact that neurons connect only to nearby neurons, so that they can process input blocks separately. Time refers to the fact that pulse trains occur over time, so that information lost in the binary coding can be retrieved from the timing. This avoids the additional complexity of a recurrent neural network. It turns out that spiking neurons are more powerful computational units than traditional artificial neurons.
SNNs are theoretically more powerful than second-generation networks; however, training issues and hardware requirements limit their use. Although unsupervised biological learning methods are available, such as Hebbian learning and spike-timing-dependent plasticity (STDP), no effective supervised training method for SNNs provides better performance than second-generation networks. Spike-based activation is not differentiable, making it hard to develop gradient-descent-based training methods that perform error backpropagation, though a few recent algorithms such as NormAD and multilayer NormAD have demonstrated good training performance through suitable approximation of the gradient of spike-based activation.
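A common workaround for the non-differentiability of the spike is to keep the hard threshold in the forward pass but substitute a smooth surrogate derivative in the backward pass. The sketch below shows this generic surrogate-gradient idea (it is not the NormAD algorithm itself); the sigmoid-shaped surrogate is one assumed choice among several in use.

    import numpy as np

    def spike_forward(v, v_thresh=1.0):
        # Forward pass: hard threshold, 1.0 if the neuron fires, else 0.0.
        return (v >= v_thresh).astype(float)

    def spike_backward_surrogate(v, v_thresh=1.0, beta=10.0):
        # Backward pass: the step function's true gradient is zero almost
        # everywhere, so we use the derivative of a sigmoid centered at
        # the threshold as a smooth stand-in.
        s = 1.0 / (1.0 + np.exp(-beta * (v - v_thresh)))
        return beta * s * (1.0 - s)

    v = np.array([0.2, 0.9, 1.1, 2.0])
    print(spike_forward(v))             # [0. 0. 1. 1.]
    print(spike_backward_surrogate(v))  # smooth, largest near the threshold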
SNNs have much larger computational costs for simulating realistic neural models than traditional ANNs.

Applications

SNNs can in principle apply to the same applications as traditional ANNs. In addition, SNNs can model the central nervous system of biological organisms, such as an insect seeking food without prior knowledge of the environment. Due to their relative realism, they can be used to study the operation of biological neural circuits. Starting with a hypothesis about the topology of a biological neuronal circuit and its function, recordings of this circuit can be compared to the output of the corresponding SNN, evaluating the plausibility of the hypothesis. However, the lack of effective training mechanisms for SNNs hampers their use in some applications, including computer vision tasks.
As of 2019, SNNs lag behind ANNs in terms of accuracy, but the gap is decreasing and has vanished on some tasks.

Software

A diverse range of application software can simulate SNNs. This software can be classified according to its uses:

SNN simulation

These simulate complex neural models with a high level of detail and accuracy. Large networks usually require lengthy processing. Candidates include Brian, GENESIS, NEST, and NEURON.
Future neuromorphic architectures will comprise billions of nanosynapses, a goal that requires a clear understanding of the physical mechanisms responsible for plasticity. Experimental systems based on ferroelectric tunnel junctions have been used to show that STDP can be harnessed from heterogeneous polarization switching. Through combined scanning probe imaging, electrical transport and atomic-scale molecular dynamics, conductance variations can be modelled by nucleation-dominated reversal of domains. Simulations show that arrays of ferroelectric nanosynapses can autonomously learn to recognize patterns in a predictable way, opening the path towards unsupervised learning.
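For reference, pair-based STDP is commonly written as an exponentially decaying weight change whose sign depends on the order of the pre- and postsynaptic spikes. The sketch below implements that textbook rule with illustrative constants; it is not a model of the ferroelectric devices themselves.

    import math

    def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=0.02, tau_minus=0.02):
        # Pair-based STDP: potentiate when pre precedes post, depress
        # otherwise, with exponentially decaying magnitude.
        dt = t_post - t_pre
        if dt > 0:
            return a_plus * math.exp(-dt / tau_plus)
        return -a_minus * math.exp(dt / tau_minus)

    print(stdp_dw(0.010, 0.015))  # pre before post -> positive change
    print(stdp_dw(0.015, 0.010))  # post before pre -> negative change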
The classification capabilities of spiking networks trained with unsupervised learning methods have been tested on common benchmark datasets, such as the Iris, Wisconsin Breast Cancer, and Statlog Landsat datasets. Various approaches to information encoding and network design have been used, for example a two-layer feedforward network for data clustering and classification. Based on an idea proposed by Hopfield, the authors implemented models of local receptive fields combining the properties of radial basis functions and spiking neurons to convert input signals having a floating-point representation into a spiking representation.
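One way to picture this receptive-field conversion is population coding with Gaussian tuning curves: each real-valued input excites several neurons, and a neuron spikes earlier the closer the input is to its preferred center. The sketch below follows that scheme; the layout and parameters are illustrative assumptions rather than the cited authors' exact model.

    import numpy as np

    def receptive_field_encode(x, n_neurons=10, x_min=0.0, x_max=1.0,
                               t_max=0.1):
        # Gaussian receptive fields spread evenly over the input range.
        centers = np.linspace(x_min, x_max, n_neurons)
        width = (x_max - x_min) / (n_neurons - 1)
        activation = np.exp(-0.5 * ((x - centers) / width) ** 2)
        # Latency code: activation 1 -> spike at t = 0; activation 0 -> t_max.
        return (1.0 - activation) * t_max

    # Encode the value 0.37 into ten spike latencies (seconds).
    print(np.round(receptive_field_encode(0.37), 4))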