In this blog, I will guide you through fine-tuning Meta’s Llama 2 7B model for news article categorization across 18 categories. I will use a news classification instruction dataset that I previously created with GPT-3.5. If you’re interested in how I generated that dataset and the motivation behind this mini-project, see my earlier blog or notebook, where I discuss the details.
The purpose of this notebook is to provide a comprehensive, step-by-step tutorial for fine-tuning any large language model (LLM). Unlike many tutorials available, I’ll explain each step in detail, covering every class, function, and parameter used.
This guide will be divided into two parts:
Part 1: Setting up and Preparing for Fine-Tuning [This blog]
- Installing and loading the required modules
- Steps to get approval for Meta’s Llama 2 family of models
- Setting up Hugging Face CLI and user authentication
- Loading a pre-trained model and its associated tokenizer
- Loading the training dataset
- Preprocessing the training dataset for model fine-tuning
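To give a flavor of where Part 1 ends up, here is a minimal sketch of the preprocessing step: turning one labeled news example into a Llama 2 chat-style instruction prompt. The field names and the exact prompt wording are illustrative assumptions, not necessarily the ones used in my dataset.

```python
# Sketch: format one labeled news example into a Llama 2 chat-style
# training prompt. The instruction wording and field names here are
# illustrative assumptions, not the exact ones from my dataset.

def format_example(article: str, category: str) -> str:
    instruction = (
        "Categorize the news article given in the input into one of the "
        "18 categories."
    )
    # Llama 2 chat models expect the [INST] ... [/INST] wrapper; during
    # training, the target label follows the closing [/INST] tag.
    return f"<s>[INST] {instruction}\n\n{article} [/INST] {category} </s>"

prompt = format_example("Stocks rallied after the Fed's announcement.", "Business")
print(prompt)
```

The real preprocessing step applies this kind of template to every row of the dataset before tokenization.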
Part 2: Fine-Tuning and Open-Sourcing
- Configuring PEFT (Parameter Efficient Fine-Tuning) method QLoRA for efficient fine-tuning
- Fine-tuning the pre-trained model
- Saving the fine-tuned model and its associated tokenizer
- Pushing the fine-tuned model to the Hugging Face Hub for public usage
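As a preview of Part 2: QLoRA combines 4-bit quantization of the base model with trainable LoRA adapters. A minimal configuration sketch using `transformers` and `peft` might look like the following; the numeric values (rank, alpha, dropout) and target modules are illustrative assumptions, not tuned recommendations.

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit quantization settings: QLoRA loads the frozen base model in NF4.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # quantize base weights to 4 bits
    bnb_4bit_quant_type="nf4",             # NormalFloat4 data type
    bnb_4bit_compute_dtype=torch.float16,  # do matmuls in fp16
    bnb_4bit_use_double_quant=True,        # also quantize quantization constants
)

# LoRA adapter settings; r, alpha, and dropout here are illustrative.
lora_config = LoraConfig(
    r=16,                                  # adapter rank
    lora_alpha=32,                         # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections in Llama 2
    bias="none",
    task_type="CAUSAL_LM",
)
```

Part 2 walks through each of these parameters and how they are passed to the model and trainer.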
Note that running this on a CPU is impractically slow. If you are running on Google Colab, go to Runtime > Change runtime type, set Hardware accelerator to GPU, set GPU type to T4, and set Runtime shape to High-RAM.