Everything You Need To Know About Meta’s Code-Llama!
<p><a href="https://arxiv.org/pdf/2308.12950.pdf" rel="noopener ugc nofollow" target="_blank">Code Llama</a> is a family of LLMs based on Llama 2 and dedicated to coding tasks. It comes with a set of improvements and differences over previous coding LLMs.</p>
<p><img alt="" src="https://miro.medium.com/v2/resize:fit:700/1*7v02Yhh6Y_xSqS3wU1hSfQ.png" style="height:700px; width:700px" /></p>
<p>Coding Llama picture generated by <a href="https://replicate.com/stability-ai/sdxl?" rel="noopener ugc nofollow" target="_blank">Replicate</a></p>
<h1>Introducing The Family Members</h1>
<p><img alt="" src="https://miro.medium.com/v2/resize:fit:700/1*0wXBmrJYzHnTvIupJL_TeQ.png" style="height:192px; width:700px" /></p>
<p>Code Llama Specialization Pipeline</p>
<h2><strong>Code Llama</strong></h2>
<p>These are the foundation models for code generation. They come in three sizes: 7B, 13B, and 34B parameters. The 7B and 13B models are trained with an infilling objective, making them suitable for use inside IDEs, where the model must fill in the middle of an existing file.</p>
<p>All of these models are initialized from Llama 2 weights and trained on 500B tokens of code-heavy data. They also go through a long-context training stage.</p>
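<p>For the infilling use case, here is a minimal sketch of how the base models can be prompted to fill in the middle of a file. It assumes the Hugging Face codellama/CodeLlama-7b-hf checkpoint and the <FILL_ME> placeholder convention documented for those checkpoints: the tokenizer splits the text around the placeholder into prefix and suffix, and the model generates the missing middle.</p>
<pre>
# Minimal sketch: infilling with a Code Llama base model via Hugging Face
# transformers (assumes the codellama/CodeLlama-7b-hf checkpoint and its
# <FILL_ME> convention; only the 7B and 13B base models support infilling).
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "codellama/CodeLlama-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Everything before <FILL_ME> is the prefix, everything after is the suffix.
prompt = '''def remove_non_ascii(s: str) -> str:
    """ <FILL_ME>
    return result
'''

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens, i.e. the infilled middle part.
new_tokens = output[0][inputs["input_ids"].shape[1]:]
filled = tokenizer.decode(new_tokens, skip_special_tokens=True)
print(prompt.replace("<FILL_ME>", filled))
</pre>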
<h2>Code Llama — Python</h2>
<p>Specialized in Python, these models also come in 7B, 13B, and 34B sizes. They are designed to study the differences between a model specialized in a single programming language and a more general coding model. The Python family builds on top of the Code Llama models by further training on 100B tokens of Python-heavy data. They are trained without the infilling objective but can still handle long contexts.</p>
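<p>By way of illustration, a plain left-to-right completion with the Python-specialized variant could look like the sketch below. It assumes the Hugging Face codellama/CodeLlama-7b-Python-hf checkpoint; since the Python models are trained without infilling, there is no placeholder, only ordinary completion of a prompt.</p>
<pre>
# Minimal sketch: standard code completion with the Python-specialized model
# (assumes the codellama/CodeLlama-7b-Python-hf checkpoint; these variants do
# not support infilling, so the prompt is completed left to right).
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "codellama/CodeLlama-7b-Python-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "# Return the n-th Fibonacci number\ndef fib(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)

print(tokenizer.decode(output[0], skip_special_tokens=True))
</pre>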