Fine-tuning Llama 2 for news category prediction: A step-by-step comprehensive guide to…
<p>In this blog, I will guide you through the process of fine-tuning Meta’s <strong>Llama 2 7B</strong> model to categorize news articles across 18 categories, using a news classification instruction dataset that I previously created with <strong>GPT-3.5</strong>. If you’re interested in how I generated that dataset and the motivation behind this mini-project, see my earlier <a href="https://medium.com/@kshitiz.sahay26/how-i-created-an-instruction-dataset-using-gpt-3-5-to-fine-tune-llama-2-for-news-classification-ed02fe41c81f" rel="noopener">blog</a> or <a href="https://colab.research.google.com/drive/16rZ8DlvQp5YJED1ECUNbLKbu2YWLcLST?usp=sharing" rel="noopener ugc nofollow" target="_blank">notebook</a>, where I discuss the details.</p>
<p>The purpose of this guide is to provide a comprehensive, step-by-step tutorial for fine-tuning any LLM (large language model). Unlike many available tutorials, I’ll explain each step in detail, covering all the classes, functions, and parameters used.</p>
<p>This guide will be divided into two parts:</p>
<p><strong>Part 1: Setting Up and Preparing for Fine-Tuning [This blog]</strong> (a code sketch of these steps follows the list)</p>
<ol>
<li>Installing and loading the required modules</li>
<li>Requesting access approval for Meta’s Llama 2 family of models</li>
<li>Setting up Hugging Face CLI and user authentication</li>
<li>Loading a pre-trained model and its associated tokenizer</li>
<li>Loading the training dataset</li>
<li>Preprocessing the training dataset for model fine-tuning</li>
</ol>
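<p>To make these steps concrete, here is a minimal sketch of the Part 1 workflow using the Hugging Face transformers and datasets libraries. The dataset file name and the 4-bit quantization settings are illustrative assumptions for this outline, not necessarily the exact values used later in the guide:</p>
<pre><code># pip install -q transformers datasets accelerate bitsandbytes
# Requires prior access approval from Meta and `huggingface-cli login`
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "meta-llama/Llama-2-7b-hf"  # gated repository on the Hugging Face Hub

# Load the base model in 4-bit so it fits on a single T4 GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Llama 2 ships without a pad token

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)

# Hypothetical file name: substitute the instruction dataset from the earlier blog
dataset = load_dataset("json", data_files="news_classification_instructions.json",
                       split="train")
</code></pre>
<p>Loading the weights in 4-bit is what keeps the 7B model within a single T4’s memory, and it is the foundation for the QLoRA fine-tuning in Part 2.</p>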
<p><strong>Part 2: Fine-Tuning and Open-Sourcing</strong> (previewed in the sketch after the list)</p>
<ol>
<li>Configuring QLoRA, a PEFT (Parameter-Efficient Fine-Tuning) method, for memory-efficient fine-tuning</li>
<li>Fine-tuning the pre-trained model</li>
<li>Saving the fine-tuned model and its associated tokenizer</li>
<li>Pushing the fine-tuned model to the Hugging Face Hub for public usage</li>
</ol>
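<p>As a preview of Part 2, here is a minimal sketch of the QLoRA setup, training run, and Hub upload using the peft and trl libraries. The LoRA hyperparameters, dataset text field, and repository name are illustrative assumptions, and the SFTTrainer arguments shown follow the trl releases current when this guide was written (newer versions have renamed some of them):</p>
<pre><code># pip install -q peft trl
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

# QLoRA: train small low-rank adapters on top of the frozen 4-bit base model
peft_config = LoraConfig(
    r=16,                                 # rank of the low-rank update matrices
    lora_alpha=32,                        # scaling applied to the adapter output
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="llama-2-7b-news-classifier",  # hypothetical name
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    num_train_epochs=1,
    fp16=True,
)

trainer = SFTTrainer(
    model=model,                 # the 4-bit base model loaded in Part 1
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",   # assumed column holding the formatted instructions
    tokenizer=tokenizer,
    args=training_args,
)
trainer.train()

# Save the adapter and tokenizer locally, then publish to the Hub
trainer.model.save_pretrained("llama-2-7b-news-classifier")
tokenizer.save_pretrained("llama-2-7b-news-classifier")
trainer.model.push_to_hub("your-username/llama-2-7b-news-classifier")
</code></pre>
<p>Note that only the small adapter weights are trained and pushed, not the full 7B parameters, which is what makes this workflow feasible on a free Colab GPU.</p>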
<p><em>Note that running this on a CPU is practically infeasible. If running on Google Colab, go to Runtime > Change runtime type, set Hardware accelerator to GPU, set GPU type to T4, and set Runtime shape to High-RAM.</em></p>
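<p>Once the runtime is configured, a quick PyTorch check confirms that a GPU is actually available before you load the model:</p>
<pre><code>import torch

# Confirm the Colab runtime actually exposes a GPU before loading the 7B model
assert torch.cuda.is_available(), "No GPU detected; switch the runtime type to a T4 GPU"
print(torch.cuda.get_device_name(0))  # e.g. "Tesla T4"
</code></pre>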
<p><a href="https://medium.com/@kshitiz.sahay26/fine-tuning-llama-2-for-news-category-prediction-a-step-by-step-comprehensive-guide-to-deeccf3e3a88">Click Here</a></p>