Long short-term memory
Long short-term memory (LSTM) is an artificial recurrent neural network (RNN) architecture used in the field of deep learning. Unlike standard feedforward neural networks, LSTM has feedback connections. It can process not only single data points, but also entire sequences of data. For example, LSTM is applicable to tasks such as unsegmented, connected handwriting recognition, speech recognition, and anomaly detection in network traffic or intrusion detection systems (IDSs).
A common LSTM unit is composed of a cell, an input gate, an output gate and a forget gate. The cell remembers values over arbitrary time intervals and the three gates regulate the flow of information into and out of the cell.
LSTM networks are well-suited to classifying, processing and making predictions based on time series data, since there can be lags of unknown duration between important events in a time series. LSTMs were developed to deal with the vanishing gradient problem that can be encountered when training traditional RNNs. Relative insensitivity to gap length is an advantage of LSTM over RNNs, hidden Markov models and other sequence learning methods in numerous applications.
History
1997: LSTM was proposed by Sepp Hochreiter and Jürgen Schmidhuber. By introducing Constant Error Carousel (CEC) units, LSTM deals with the vanishing gradient problem. The initial version of the LSTM block included cells, input gates and output gates.
1999: Felix Gers, his advisor Jürgen Schmidhuber, and Fred Cummins introduced the forget gate into the LSTM architecture, enabling the LSTM to reset its own state.
2000: Gers, Schmidhuber and Cummins added peephole connections (connections from the cell to the gates) into the architecture. Additionally, the output activation function was omitted.
2009: An LSTM-based model won the ICDAR connected handwriting recognition competition. Three such models were submitted by a team led by Alex Graves. One was the most accurate model in the competition and another was the fastest.
2013: LSTM networks were a major component of a network that achieved a record 17.7% phoneme error rate on the classic TIMIT natural speech dataset.
2014: Kyunghyun Cho et al. put forward a simplified variant called the gated recurrent unit (GRU).
2015: Google started using an LSTM for speech recognition on Google Voice. According to the official blog post, the new model cut transcription errors by 49%.
2016: Google started using an LSTM to suggest messages in the Allo conversation app. In the same year, Google released the Google Neural Machine Translation system for Google Translate which used LSTMs to reduce translation errors by 60%.
Apple announced at its Worldwide Developers Conference that it would start using the LSTM for QuickType in the iPhone and for Siri.
Amazon released Polly, which generates the voices behind Alexa, using a bidirectional LSTM for the text-to-speech technology.
2017: Facebook performed some 4.5 billion automatic translations every day using long short-term memory networks.
Researchers from Michigan State University, IBM Research, and Cornell University published a study at the Knowledge Discovery and Data Mining (KDD) conference. Their study describes a novel neural network that performs better on certain data sets than the widely used long short-term memory neural network.
Microsoft reported reaching 94.9% recognition accuracy on the Switchboard corpus, incorporating a vocabulary of 165,000 words. The approach used "dialog session-based long-short-term memory".
2019: Researchers from the University of Waterloo proposed a related RNN architecture which represents continuous windows of time. It was derived using the Legendre polynomials and outperforms the LSTM on some memory-related benchmarks.
An LSTM model climbed to third place in the Large Text Compression Benchmark.
Idea
In theory, classic RNNs can keep track of arbitrarily long-term dependencies in the input sequences. The problem with vanilla RNNs is computational in nature: when training a vanilla RNN using back-propagation, the gradients which are back-propagated can "vanish" or "explode", because of the computations involved in the process, which use finite-precision numbers. RNNs using LSTM units partially solve the vanishing gradient problem, because LSTM units allow gradients to also flow unchanged. However, LSTM networks can still suffer from the exploding gradient problem.
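As a toy illustration of this effect (a NumPy sketch with an assumed random Jacobian, not drawn from any specific paper), repeatedly back-propagating a gradient through a recurrent Jacobian with spectral radius below 1 drives its norm toward zero, while a spectral radius above 1 makes it explode:

import numpy as np

rng = np.random.default_rng(0)

def backprop_norms(spectral_radius, steps=100, dim=8):
    # Random recurrent Jacobian, rescaled to the desired spectral radius.
    J = rng.standard_normal((dim, dim))
    J *= spectral_radius / max(abs(np.linalg.eigvals(J)))
    grad = np.ones(dim)
    norms = []
    for _ in range(steps):
        grad = J.T @ grad          # one step of backpropagation through time
        norms.append(np.linalg.norm(grad))
    return norms

print(backprop_norms(0.9)[-1])   # tiny: the gradient has vanished
print(backprop_norms(1.1)[-1])   # huge: the gradient has exploded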
Architecture
There are several architectures of LSTM units. A common architecture is composed of a cell and three "regulators", usually called gates, that control the flow of information inside the LSTM unit: an input gate, an output gate and a forget gate. Some variations of the LSTM unit do not have one or more of these gates or may have other gates. For example, gated recurrent units (GRUs) do not have an output gate.
Intuitively, the cell is responsible for keeping track of the dependencies between the elements in the input sequence. The input gate controls the extent to which a new value flows into the cell, the forget gate controls the extent to which a value remains in the cell, and the output gate controls the extent to which the value in the cell is used to compute the output activation of the LSTM unit. The activation function of the LSTM gates is often the logistic sigmoid function.
There are connections into and out of the LSTM gates, a few of which are recurrent. The weights of these connections, which need to be learned during training, determine how the gates operate.
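As a minimal usage sketch (assuming the PyTorch library; the layer sizes and sequence shapes below are arbitrary choices, not taken from the text), a gated LSTM layer can be run over a batch of sequences, with the gate computations handled internally by the library:

import torch
import torch.nn as nn

# A single-layer LSTM: 16 input features per time step, 32 hidden units.
lstm = nn.LSTM(input_size=16, hidden_size=32, num_layers=1, batch_first=True)

x = torch.randn(4, 50, 16)          # batch of 4 sequences, 50 time steps each
output, (h_n, c_n) = lstm(x)        # output: (4, 50, 32); h_n, c_n: (1, 4, 32)

print(output.shape, h_n.shape, c_n.shape)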
Variants
In the equations below, the lowercase variables represent vectors. Matrices $W_q$ and $U_q$ contain, respectively, the weights of the input and recurrent connections, where the subscript $q$ can either be the input gate $i$, the output gate $o$, the forget gate $f$ or the memory cell $c$, depending on the activation being calculated. In this section, we are thus using a "vector notation". So, for example, $c_t \in \mathbb{R}^{h}$ is not just one cell of one LSTM unit, but contains $h$ of the LSTM unit's cells.
LSTM with a forget gate
The compact forms of the equations for the forward pass of an LSTM unit with a forget gate are:
\begin{align}
f_t &= \sigma_g(W_{f} x_t + U_{f} h_{t-1} + b_f) \\
i_t &= \sigma_g(W_{i} x_t + U_{i} h_{t-1} + b_i) \\
o_t &= \sigma_g(W_{o} x_t + U_{o} h_{t-1} + b_o) \\
\tilde{c}_t &= \sigma_c(W_{c} x_t + U_{c} h_{t-1} + b_c) \\
c_t &= f_t \circ c_{t-1} + i_t \circ \tilde{c}_t \\
h_t &= o_t \circ \sigma_h(c_t)
\end{align}
where the initial values are $c_0 = 0$ and $h_0 = 0$, and the operator $\circ$ denotes the Hadamard (element-wise) product. The subscript $t$ indexes the time step.
Variables
- $x_t \in \mathbb{R}^{d}$: input vector to the LSTM unit
- $f_t \in (0,1)^{h}$: forget gate's activation vector
- $i_t \in (0,1)^{h}$: input/update gate's activation vector
- $o_t \in (0,1)^{h}$: output gate's activation vector
- $h_t \in (-1,1)^{h}$: hidden state vector, also known as the output vector of the LSTM unit
- $\tilde{c}_t \in (-1,1)^{h}$: cell input activation vector
- $c_t \in \mathbb{R}^{h}$: cell state vector
- $W \in \mathbb{R}^{h \times d}$, $U \in \mathbb{R}^{h \times h}$ and $b \in \mathbb{R}^{h}$: weight matrices and bias vector parameters which need to be learned during training, where the superscripts $d$ and $h$ refer to the number of input features and the number of hidden units, respectively
Activation functions
- $\sigma_g$: sigmoid function.
- $\sigma_c$: hyperbolic tangent function.
- $\sigma_h$: hyperbolic tangent function or, as the peephole LSTM paper suggests, $\sigma_h(x) = x$.
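Written out directly, the forward pass equations above translate into a short NumPy sketch (illustrative only; the function name lstm_forward, the dictionary-based parameter layout and the random initialization are assumptions made for this example):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_forward(x_seq, W, U, b, h0, c0):
    """Run an LSTM with a forget gate over a sequence of input vectors.

    W, U, b are dicts keyed by 'f', 'i', 'o', 'c' (gates and cell input)."""
    h, c = h0, c0
    outputs = []
    for x in x_seq:
        f = sigmoid(W['f'] @ x + U['f'] @ h + b['f'])        # forget gate
        i = sigmoid(W['i'] @ x + U['i'] @ h + b['i'])        # input gate
        o = sigmoid(W['o'] @ x + U['o'] @ h + b['o'])        # output gate
        c_tilde = np.tanh(W['c'] @ x + U['c'] @ h + b['c'])  # cell input
        c = f * c + i * c_tilde                              # new cell state
        h = o * np.tanh(c)                                   # new hidden state
        outputs.append(h)
    return outputs, h, c

d, hdim = 4, 3
rng = np.random.default_rng(0)
W = {q: rng.standard_normal((hdim, d)) for q in 'fioc'}
U = {q: rng.standard_normal((hdim, hdim)) for q in 'fioc'}
b = {q: np.zeros(hdim) for q in 'fioc'}
x_seq = [rng.standard_normal(d) for _ in range(5)]
outputs, h, c = lstm_forward(x_seq, W, U, b, np.zeros(hdim), np.zeros(hdim))
print(h)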
Peephole LSTM
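Peephole LSTM lets the gates "look at" the cell state: the gate activations use the previous cell state $c_{t-1}$ in place of the hidden state $h_{t-1}$. A commonly cited form, written in the notation above (formulations differ slightly between papers, so treat this as a sketch rather than the definitive variant), is:
\begin{align}
f_t &= \sigma_g(W_{f} x_t + U_{f} c_{t-1} + b_f) \\
i_t &= \sigma_g(W_{i} x_t + U_{i} c_{t-1} + b_i) \\
o_t &= \sigma_g(W_{o} x_t + U_{o} c_{t-1} + b_o) \\
c_t &= f_t \circ c_{t-1} + i_t \circ \sigma_c(W_{c} x_t + b_c) \\
h_t &= o_t \circ \sigma_h(c_t)
\end{align}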
Peephole convolutional LSTM
Peephole convolutional LSTM. The $*$ denotes the convolution operator.
\begin{align}
f_t &= \sigma_g(W_{f} * x_t + U_{f} * h_{t-1} + V_{f} \circ c_{t-1} + b_f) \\
i_t &= \sigma_g(W_{i} * x_t + U_{i} * h_{t-1} + V_{i} \circ c_{t-1} + b_i) \\
c_t &= f_t \circ c_{t-1} + i_t \circ \sigma_c(W_{c} * x_t + U_{c} * h_{t-1} + b_c) \\
o_t &= \sigma_g(W_{o} * x_t + U_{o} * h_{t-1} + V_{o} \circ c_{t} + b_o) \\
h_t &= o_t \circ \sigma_h(c_t)
\end{align}
Training
An RNN using LSTM units can be trained in a supervised fashion on a set of training sequences, using an optimization algorithm such as gradient descent combined with backpropagation through time to compute the gradients needed during the optimization process, in order to change each weight of the LSTM network in proportion to the derivative of the error with respect to the corresponding weight.
A problem with using gradient descent for standard RNNs is that error gradients vanish exponentially quickly with the size of the time lag between important events. This is because $\lim_{n \to \infty} W^n = 0$ if the spectral radius of $W$ is smaller than 1.
However, with LSTM units, when error values are back-propagated from the output layer, the error remains in the LSTM unit's cell. This "error carousel" continuously feeds error back to each of the LSTM unit's gates, until they learn to cut off the value.
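A minimal supervised training loop illustrating this (a sketch assuming PyTorch; the model, data and hyperparameters are placeholders, not taken from the text) computes gradients by backpropagation through time and applies a gradient-descent update:

import torch
import torch.nn as nn

class SequenceRegressor(nn.Module):
    def __init__(self, n_features=8, n_hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, n_hidden, batch_first=True)
        self.head = nn.Linear(n_hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)          # hidden states for every time step
        return self.head(out[:, -1])   # predict from the final hidden state

model = SequenceRegressor()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# Toy data: 64 sequences of length 20 with 8 features, one target value each.
x = torch.randn(64, 20, 8)
y = torch.randn(64, 1)

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                    # backpropagation through time
    optimizer.step()                   # gradient descent update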
CTC score function
Many applications use stacks of LSTM RNNs and train them by connectionist temporal classification (CTC) to find an RNN weight matrix that maximizes the probability of the label sequences in a training set, given the corresponding input sequences. CTC achieves both alignment and recognition.
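As a hedged sketch of such a setup (assuming PyTorch's nn.CTCLoss; the feature sizes, label alphabet and lengths are placeholders), an LSTM emits per-time-step label probabilities and CTC scores them against unsegmented target sequences:

import torch
import torch.nn as nn

n_features, n_hidden, n_labels = 13, 64, 28    # 27 symbols + 1 CTC blank
lstm = nn.LSTM(n_features, n_hidden)
proj = nn.Linear(n_hidden, n_labels)
ctc = nn.CTCLoss(blank=0)

x = torch.randn(100, 4, n_features)            # (time, batch, features)
out, _ = lstm(x)
log_probs = proj(out).log_softmax(dim=-1)      # (time, batch, labels)

targets = torch.randint(1, n_labels, (4, 12))  # unaligned label sequences
input_lengths = torch.full((4,), 100, dtype=torch.long)
target_lengths = torch.full((4,), 12, dtype=torch.long)

loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()                                # gradients for all LSTM weights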
Alternatives
Sometimes, it can be advantageous to train an LSTM by neuroevolution or by policy gradient methods, especially when there is no "teacher".
Success
There have been several success stories of training RNNs with LSTM units in a non-supervised fashion.
In 2018, Bill Gates called it a “huge milestone in advancing artificial intelligence” when bots developed by OpenAI were able to beat humans in the game of Dota 2. OpenAI Five consists of five independent but coordinated neural networks. Each network is trained by a policy gradient method without a supervising teacher and contains a single-layer, 1024-unit long short-term memory network that sees the current game state and emits actions through several possible action heads.
In 2018, OpenAI also trained a similar LSTM by policy gradients to control a human-like robot hand that manipulates physical objects with unprecedented dexterity.
In 2019, DeepMind's program AlphaStar used a deep LSTM core to excel at the complex video game Starcraft II. This was viewed as significant progress towards Artificial General Intelligence.
Applications
Applications of LSTM include:
- Robot control
- Time series prediction
- Speech recognition
- Rhythm learning
- Music composition
- Grammar learning
- Handwriting recognition
- Human action recognition
- Sign language translation
- Protein homology detection
- Predicting subcellular localization of proteins
- Time series anomaly detection
- Several prediction tasks in the area of business process management
- Prediction in medical care pathways
- Semantic parsing
- Object co-segmentation
- Airport passenger management
- Short-term traffic forecast