Long short-term memory (LSTM) is an artificial recurrent neural network (RNN) architecture used in the field of deep learning.
Unlike standard feedforward neural networks, LSTM has feedback connections. It can process not only single data points (such as images) but also entire sequences of data (such as speech or video).
For example, LSTM is applicable to tasks such as unsegmented, connected handwriting recognition, speech recognition, and anomaly detection in network traffic or IDSs (intrusion detection systems).
In this paper, the authors provide an ablation study of the main LSTM components (input gate, output gate, forget gate, peephole connections, activation functions) and show what each of these modules contributes to the final accuracy of the LSTM. A must-read paper for anyone trying to dive deeper into the LSTM architecture.
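To make the gate terminology concrete, here is a minimal sketch of a single LSTM time step in NumPy, showing the input, forget, and output gates named above. The weight layout (a single matrix packing the four gate blocks in i, f, o, g order) and all variable names are assumptions for illustration; this is the standard LSTM without peephole connections.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step.

    W: (4H, D) input weights, U: (4H, H) recurrent weights, b: (4H,) biases,
    packed in i, f, o, g order (an assumed layout, for illustration only).
    """
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0:H])        # input gate: how much new information enters the cell
    f = sigmoid(z[H:2*H])      # forget gate: how much of the old cell state is kept
    o = sigmoid(z[2*H:3*H])    # output gate: how much of the cell state is exposed
    g = np.tanh(z[3*H:4*H])    # candidate cell update (block input)
    c = f * c_prev + i * g     # new cell state: gated blend of old state and update
    h = o * np.tanh(c)         # new hidden state
    return h, c

# Toy usage: run a length-5 random sequence through one cell.
rng = np.random.default_rng(0)
D, H = 3, 4
W = rng.normal(size=(4 * H, D))
U = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for x in rng.normal(size=(5, D)):
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # (4,)
```

The cell-state recurrence `c = f * c_prev + i * g` is what distinguishes the LSTM from a plain RNN: because the forget gate scales the previous state multiplicatively rather than passing it through a squashing nonlinearity, gradients can flow across many time steps. Peephole connections, when used, additionally feed `c_prev` (or `c`) into the gate computations.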