There are no emergent abilities in LLMs

The 2022 paper "Emergent Abilities of Large Language Models" [1] has been the source of much speculation, and much hysteria, about the future of AI [2], [3]. The paper's central claim is that large language models (LLMs) display "emergent abilities" as they scale to larger and larger parameter counts (hundreds of billions of parameters are now common model sizes).

In the paper, the authors define emergence as follows:

"We consider an ability to be emergent if it is not present in smaller models but is present in larger models. Thus, emergent abilities cannot be predicted simply by extrapolating the performance of smaller models."

In this article, I'm going to demonstrate why these claims are flawed, and by the end I'm confident you'll agree that unexpected abilities do not suddenly "emerge" in large language models. The code to reproduce my plots can be found in this Colab notebook.

The Source of the Claim

The authors base their claims of "unpredictable" and "emergent" abilities on the scaling plots in Figure 2 of the paper: task performance sits near random chance for smaller models, then jumps sharply once a model crosses a certain scale. In this article, I'll scrutinize Figure 1 (shown below), which is analogous, and virtually identical in shape, to Figure 2. A sketch of what such a curve looks like follows the link below.

Read More: https://betterprogramming.pub/there-are-no-emergent-abilities-in-llms-2bb42e17ce7e
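To make the flat-then-jump shape of these curves concrete, here is a minimal, purely illustrative sketch in Python. The parameter counts, the 25% chance floor, the 1e10-parameter threshold, and the accuracies are synthetic numbers chosen only to mimic the shape of the paper's figures; they are not the paper's data and not the results from my Colab notebook.

```python
# Illustrative sketch only: synthetic numbers chosen to mimic the *shape* of the
# paper's scaling curves (flat near random chance, then a sharp rise).
# None of these values come from the paper or from any real model.
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical model sizes in parameters (log-spaced from 10M to 500B).
params = np.logspace(7, np.log10(5e11), 20)

# Synthetic accuracy: near a 25% random-chance floor for small models,
# rising toward ~60% once an assumed 1e10-parameter threshold is crossed.
rng = np.random.default_rng(0)
chance = 0.25
accuracy = np.where(
    params < 1e10,
    chance + rng.normal(0, 0.01, params.size),         # flat, noisy floor
    0.60 - 0.35 * np.exp(-(np.log10(params) - 10.0)),  # rapid rise after 1e10
)

# Naive extrapolation from the small-model points: a straight-line fit in
# log-parameter space badly under-predicts the large-model accuracy, which is
# the sense in which the curve looks "unpredictable" from smaller models.
small = params < 1e10
fit = np.polyfit(np.log10(params[small]), accuracy[small], 1)

plt.figure(figsize=(5, 4))
plt.semilogx(params, accuracy, "o-", label="synthetic task accuracy")
plt.semilogx(params, np.polyval(fit, np.log10(params)), ":", color="tab:red",
             label="extrapolation from small models")
plt.axhline(chance, linestyle="--", color="gray", label="random chance")
plt.xlabel("Model parameters (log scale)")
plt.ylabel("Accuracy")
plt.title("Shape of an 'emergent' scaling curve (synthetic data)")
plt.legend()
plt.tight_layout()
plt.show()
```

On a log-scaled x-axis this produces the familiar picture: a flat run of near-chance points, a straight extrapolation line that stays flat, and an observed curve that climbs away from both once the (assumed) scale threshold is passed. Keep that shape in mind; it is exactly the kind of plot the paper presents as evidence of emergence.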
Tags: LLMs emergent