Recurrent Neural Networks, Explained and Visualized from the Ground Up
<p>Recurrent Neural Networks (RNNs) are neural networks that operate sequentially, processing an input one element at a time while carrying information forward in a hidden state. Although they’re not as popular as they were even a few years ago, they represent an important development in the progression of deep learning and are a natural extension of feedforward networks.</p>
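<p>As a quick preview of the core idea, here is a minimal sketch of a single recurrent step in NumPy (which the post itself does not use). The same function is applied at every position of the sequence, and the hidden state it returns is fed back in at the next position; that feedback loop is what “operating sequentially” means. All names here (<code>rnn_step</code>, <code>W_xh</code>, <code>W_hh</code>, <code>b_h</code>) are illustrative, not taken from the post.</p>
<pre><code>import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    # The new hidden state mixes the current input with the previous
    # hidden state; reusing h_prev is what carries context forward.
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

# Toy dimensions and randomly initialized weights for illustration.
rng = np.random.default_rng(0)
input_dim, hidden_dim, seq_len = 4, 8, 5
W_xh = 0.1 * rng.normal(size=(input_dim, hidden_dim))
W_hh = 0.1 * rng.normal(size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)

# Process a sequence: the same weights are applied at every step.
h = np.zeros(hidden_dim)  # initial hidden state
for x_t in rng.normal(size=(seq_len, input_dim)):
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)
print(h.shape)  # (8,) -- a summary of the whole sequence so far
</code></pre>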
<p>In this post, we’ll cover the following:</p>
<ul>
<li>The step from feedforward to recurrent networks</li>
<li>Multilayer recurrent networks</li>
<li>Long short-term memory networks (LSTMs)</li>
<li>Sequential output (‘text output’)</li>
<li>Bidirectionality</li>
<li>Autoregressive generation</li>
<li>An application to machine translation (a high-level understanding of Google Translate’s 2016 model architecture)</li>
</ul>
<p>The aim of this post is not only to explain how RNNs work (plenty of posts do that), but to explore their design choices and high-level intuitive logic with the aid of illustrations. I hope the article provides some unique value, not only for your grasp of this particular technical topic but also, more generally, for your appreciation of the flexibility of deep learning design.</p>
<p><a href="https://towardsdatascience.com/recurrent-neural-networks-explained-and-visualized-from-the-ground-up-51c023f2b6fe">Original post on Towards Data Science</a></p>