The GRU was the LSTM's leaner, faster cousin. It did away with the extra "cell state" and merged the gates, making it quicker to train while keeping the memory sharp.
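The difference can be sketched directly in PyTorch; the layer sizes below (input size 10, hidden size 20) are illustrative assumptions, not values from the story:

```python
import torch
import torch.nn as nn

# Illustrative sizes -- any input_size/hidden_size would show the same contrast.
gru = nn.GRU(input_size=10, hidden_size=20, batch_first=True)
lstm = nn.LSTM(input_size=10, hidden_size=20, batch_first=True)

x = torch.randn(3, 5, 10)  # (batch, seq_len, features)

# The GRU carries a single hidden state...
out_g, h_g = gru(x)
# ...while the LSTM carries a hidden state AND a separate cell state.
out_l, (h_l, c_l) = lstm(x)

# Fewer gates also means fewer parameters per layer:
n_gru = sum(p.numel() for p in gru.parameters())
n_lstm = sum(p.numel() for p in lstm.parameters())
print(n_gru, n_lstm)  # the GRU count is the smaller of the two
```

Dropping the cell state and one gate is exactly what makes the GRU cheaper per step while still letting it decide what to remember and what to overwrite.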
He sat at his terminal and summoned the nn.RNN module. Unlike the Feed-Forward giants of the past, this model had a hidden state: a tiny notebook where it scribbled down secrets from the previous timestep to pass them to the next.

The Loop of Memory
Leo fed the RNN a sequence of words. At each step, the RNN would:

1. Take the input (the new word).
2. Read its hidden state (its memory of the past).
3. Combine them into a new understanding.
4. Pass that updated memory to its future self.
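The four steps above can be written out explicitly with nn.RNNCell, which processes one timestep at a time; the sizes and the random "word" vectors are assumptions for the sketch:

```python
import torch
import torch.nn as nn

# One recurrent step at a time; sizes are illustrative.
cell = nn.RNNCell(input_size=8, hidden_size=16)

words = torch.randn(5, 1, 8)   # a sequence of 5 "word" vectors (seq, batch, features)
h = torch.zeros(1, 16)         # the notebook starts blank

for x_t in words:
    # 1. take the input, 2. read the hidden state,
    # 3. combine them into a new understanding:
    h = cell(x_t, h)
    # 4. the updated h is passed to the next iteration -- the RNN's future self.

print(h.shape)  # torch.Size([1, 16])
```

Calling `nn.RNN` on the whole sequence at once performs this same loop internally and returns both the per-step outputs and the final hidden state.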