What Does It Really Mean for an Algorithm to Learn?
<p>When you first encounter machine learning, you often rush through algorithm after algorithm, technique after technique, equation after equation. Only afterwards can you step back and reflect on the general trends running through the knowledge you have acquired.</p>
<p>What it means to ‘learn’ is a very abstract concept. The goal of this article is to provide two general interpretations of what it means for a machine to learn. These two interpretations are, as we will see, two sides of the same coin, and they appear ubiquitously across machine learning.</p>
<p>Even if you are experienced in machine learning, you may gain something from temporarily stepping away from specific mechanics and considering the concept of learning at an abstract level.</p>
<p>There are broadly two key interpretations of learning in machine learning, which we will term <strong><em>loss-directed parameter update</em></strong> and <strong><em>manifold mapping</em></strong>. As we will see, they have substantive connections to psychology and philosophy of mind.</p>
<h1>Loss-Directed Parameter Update</h1>
<p>Some of the machine learning algorithms previously discussed adopt a <strong><em>tabula-rasa</em></strong> approach: they begin from a ‘blank slate’ random guess and iteratively improve upon it. This paradigm seems intuitive to us: when we’re trying to acquire a new skill, like learning to ride a bike or to simplify algebraic expressions, we make many mistakes and simply get better ‘with practice’. From an algorithmic perspective, however, we need to explicitly recognize the presence of two entities: a <strong><em>state</em></strong> and a <strong><em>loss</em></strong>.</p>
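<p>As a concrete illustration, here is a minimal sketch of a loss-directed parameter update (the toy data, variable names, and learning rate are assumptions for illustration, not from the article): a single parameter serves as the state, it starts as a random ‘blank slate’ guess, and gradient descent on a squared-error loss nudges it toward a better value at each step.</p>
<pre><code>import numpy as np

# Minimal sketch: learn w so that w * x approximates y.
# The data-generating rule (true slope of 3.0) is a toy assumption.
rng = np.random.default_rng(0)
x = rng.normal(size=100)                         # inputs
y = 3.0 * x + rng.normal(scale=0.1, size=100)    # targets from a hidden rule

w = rng.normal()          # the state: a random initial guess
learning_rate = 0.1

for step in range(100):
    predictions = w * x
    loss = np.mean((predictions - y) ** 2)           # the loss: how wrong the current state is
    gradient = np.mean(2 * (predictions - y) * x)    # direction in which the loss increases
    w -= learning_rate * gradient                    # update the state to reduce the loss

print(f"learned w = {w:.3f}, final loss = {loss:.4f}")</code></pre>
<p>The loop makes the two entities explicit: <em>w</em> is the state being updated, and the mean squared error is the loss that directs each update.</p>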
<p><a href="https://towardsdatascience.com/what-does-it-really-mean-for-an-algorithm-to-learn-1f3e5e8d7884">Website</a></p>