Demystifying Neural Networks: A Simple Explanation Using Linear Algebra and Geometry

Neural networks have become ubiquitous in our lives, but their inner workings remain baffling even to many practitioners. In this post, I'll explain how Feedforward Neural Networks (https://en.wikipedia.org/wiki/Feedforward_neural_network) conceptually work using just basic linear algebra and geometry.

At their core, these networks learn to partition the space of labelled inputs into regions associated with specific output classes. They do this by applying two key operations: linear transformations and non-linear activations (distortions).

Let's walk through a simplified example. Say we have 3 classes of labelled data, and each input has 2 features (X1, X2). We can visualize each input as a point in 2D space. Our goal is to divide this space into 3 regions so that any new 2D point falls into the region of its correct class and is classified accordingly.
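To make those two operations concrete, here is a minimal NumPy sketch under some illustrative assumptions: a single hidden layer of 4 units, a ReLU activation, and random (untrained) weights. It applies a linear transformation, a non-linear distortion, and a second linear transformation to map 2D points onto 3 class scores, then labels a small grid of points to show how the plane gets carved into regions.

```python
import numpy as np

# A tiny feedforward network: 2 input features -> 4 hidden units -> 3 class scores.
# The layer sizes and weights are illustrative placeholders, not a trained model.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # linear transformation into the hidden space
b1 = rng.normal(size=4)
W2 = rng.normal(size=(4, 3))   # linear transformation onto the 3 class scores
b2 = rng.normal(size=3)

def forward(X):
    """Forward pass: linear map, non-linear distortion (ReLU), linear map."""
    hidden = np.maximum(0.0, X @ W1 + b1)   # the activation bends the space
    scores = hidden @ W2 + b2               # one score per class for each point
    return scores

def classify(X):
    """Assign each 2D point to the class with the highest score."""
    return np.argmax(forward(X), axis=1)

# Classify a grid of 2D points; the predicted labels carve the plane into regions.
xs, ys = np.meshgrid(np.linspace(-3, 3, 5), np.linspace(-3, 3, 5))
grid = np.column_stack([xs.ravel(), ys.ravel()])
print(classify(grid).reshape(xs.shape))
```

With random weights the regions are arbitrary; in a trained network the weights are learned from the labelled data so that these regions line up with the true classes.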