Platypus: Quick, Cheap, and Powerful LLM
<p>A family of finetuned and merged models that reached the top positions of the Open LLM Leaderboard. How did they do it?</p>
<h1>How can you reduce the cost of your model?</h1>
<p><img alt="Platypus: Quick, Cheap, and Powerful LLM" src="https://miro.medium.com/v2/resize:fit:630/0*ZAHsc8rXEV3jbnps" style="height:467px; width:700px" /></p>
<p>Photo by <a href="https://unsplash.com/@alexandermils?utm_source=medium&utm_medium=referral" rel="noopener ugc nofollow" target="_blank">Alexander Mils</a> on <a href="https://unsplash.com/?utm_source=medium&utm_medium=referral" rel="noopener ugc nofollow" target="_blank">Unsplash</a></p>
<p>In recent years, the number of model parameters has exploded (<a href="https://ai.google/discover/palm2/" rel="noopener ugc nofollow" target="_blank">540 B with PaLM</a>). The question that keeps being asked is whether so many parameters are really necessary.</p>
<p><a href="https://openai.com/research/scaling-laws-for-neural-language-models" rel="noopener ugc nofollow" target="_blank">According to OpenAI</a>, as models grow, there is an increase in performance. In addition, there is the appearance of emergent properties (properties that cannot be observed except at a certain scale).</p>
<p>This view has been challenged: what seems to matter most is more data, so scaling is limited by the number of tokens needed<a href="https://arxiv.org/abs/2203.15556" rel="noopener ugc nofollow" target="_blank"> to train a model optimally</a>. Moreover, these emergent properties may not even exist.</p>
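<p>To make the data-versus-parameters trade-off concrete, here is a sketch of the parametric loss form used in the Chinchilla paper linked above, where N is the number of parameters, D the number of training tokens, and C the compute budget; the constants E, A, B, &alpha;, &beta; are values fitted in that paper and are not reproduced here:</p>
<pre>
% Chinchilla-style parametric loss (constants are fitted in the paper)
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}

% Minimizing L under a fixed compute budget C \approx 6 N D gives roughly
N_{opt} \propto C^{1/2}, \qquad D_{opt} \propto C^{1/2}

% i.e. roughly 20 training tokens per parameter, far more data per
% parameter than models such as PaLM or GPT-3 were trained with.
</pre>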
<p><a href="https://levelup.gitconnected.com/platypus-quick-cheap-and-powerful-llm-404b86af8755">Read More</a></p>