A recurrent neural network (RNN) is a class of artificial neural network in which connections between units form a directed cycle, enabling the network to exhibit dynamic temporal behaviour. RNNs can use their internal memory to process arbitrary sequences of inputs: the loops in the network allow information to persist, so an RNN can be viewed as multiple copies of the same network, each passing a message to its successor. An RNN considers all preceding inputs when determining an output value. In a standard feed-forward network, an input x passes through a hidden layer and is transformed into an output y; in an RNN, the output y depends not only on the current input x but also on every input that preceded it.
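This recurrence can be made concrete with a minimal sketch of a vanilla RNN forward pass in NumPy. The function and variable names (`rnn_step`, `W_xh`, `W_hh`) are illustrative choices, not from any particular library; the key point is that the same weights are reused at every time step and the hidden state carries information forward.

```python
import numpy as np

def rnn_step(x, h_prev, W_xh, W_hh, b_h):
    """One step of a vanilla RNN: the new hidden state depends on
    the current input x and the previous hidden state h_prev."""
    return np.tanh(W_xh @ x + W_hh @ h_prev + b_h)

rng = np.random.default_rng(0)
input_dim, hidden_dim = 3, 4
W_xh = rng.standard_normal((hidden_dim, input_dim)) * 0.1   # input-to-hidden weights
W_hh = rng.standard_normal((hidden_dim, hidden_dim)) * 0.1  # hidden-to-hidden (recurrent) weights
b_h = np.zeros(hidden_dim)

h = np.zeros(hidden_dim)  # initial hidden state: no history yet
sequence = [rng.standard_normal(input_dim) for _ in range(5)]
for x in sequence:        # the same weights are applied at every step
    h = rnn_step(x, h, W_xh, W_hh, b_h)
```

After the loop, `h` summarizes the whole sequence, which is what lets the output at the final step take every preceding input into account.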
Lately, RNNs have enjoyed great success in speech recognition, language modelling, translation, image captioning, and many other fields. A major reason for this success is the Long Short-Term Memory (LSTM) network, a special kind of RNN that performs substantially better than the standard architecture. Earlier RNNs were unable to learn long-term dependencies; the LSTM was introduced to overcome this fundamental problem.
A Long Short-Term Memory network can capture long-term dependencies and works remarkably well on a large variety of problems. It prevents backpropagated errors from vanishing or exploding, giving it the ability to learn "very deep" tasks over long sequences. Like all RNNs, LSTMs have a chain-like structure, but the repeating module has a different internal design: instead of a single layer, it contains four interacting layers. LSTMs have brought about a marked change in machine learning and artificial intelligence, and they are now within reach of billions of users because major companies use them as fundamental components of new products. Apple uses LSTMs for the QuickType function and for Siri, Amazon uses them for Alexa, and Google uses them for speech recognition on smartphones, in the smart assistant Allo, and in Google Translate. TensorFlow and other libraries such as Theano, Torch, and PyBrain provide tools that let users design such models with ease.
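The four interacting layers of the repeating module can be sketched in a minimal NumPy LSTM step. The names (`lstm_step`, `f`, `i`, `g`, `o`) and the stacked-weight layout are assumptions made for illustration, not the API of any library mentioned above; the additive cell-state update `c = f * c_prev + i * g` is what helps keep backpropagated errors from vanishing.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step. W maps [h_prev; x] to the four stacked layers:
    forget gate f, input gate i, candidate g, and output gate o."""
    H = h_prev.size
    z = W @ np.concatenate([h_prev, x]) + b
    f = sigmoid(z[0:H])        # forget gate: what to keep from the old cell state
    i = sigmoid(z[H:2*H])      # input gate: how much of the candidate to write
    g = np.tanh(z[2*H:3*H])    # candidate cell update
    o = sigmoid(z[3*H:4*H])    # output gate: what part of the cell to expose
    c = f * c_prev + i * g     # additive update preserves long-range gradients
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(1)
H, D = 4, 3                    # hidden size and input size (arbitrary)
W = rng.standard_normal((4 * H, H + D)) * 0.1
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for x in [rng.standard_normal(D) for _ in range(5)]:
    h, c = lstm_step(x, h, c, W, b)
```

In practice one would use the ready-made LSTM cells these libraries provide rather than hand-rolling the step, but the sketch shows why the repeating module is described as four interacting layers rather than one.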
Recurrent Neural Networks