Tag: PEFT

Parameter-Efficient Fine-Tuning (PEFT) for LLMs: A Comprehensive Introduction

Large Language Models (LLMs) are, as the name suggests, large. These models usually have anywhere from 7 to 70 billion parameters. Loading a 70-billion-parameter model in full precision would require 280 GB of GPU memory! To train that model you would update billions of parameters over millions or billions of do...
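The 280 GB figure can be reproduced with a quick back-of-the-envelope calculation, assuming "full precision" means fp32 (4 bytes per parameter); the helper name below is illustrative, not from any library:

```python
# Rough GPU-memory estimate for holding model weights in memory.
# Assumes fp32 = 4 bytes per parameter; halve for fp16/bf16.
def weight_memory_gb(num_params: float, bytes_per_param: int = 4) -> float:
    """Memory needed just to store the weights, in gigabytes."""
    return num_params * bytes_per_param / 1e9

print(weight_memory_gb(70e9))     # 70B params in fp32 -> 280.0 GB
print(weight_memory_gb(70e9, 2))  # fp16/bf16 halves it -> 140.0 GB
```

Note this covers only the weights; optimizer states and gradients during training can multiply the footprint several times over, which is exactly the problem PEFT methods address.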

Fine Tuning LLM: Parameter Efficient Fine Tuning (PEFT) — LoRA & QLoRA — Part 2

In this blog, we will implement the idea behind Parameter-Efficient Fine-Tuning (PEFT) and explore LoRA and QLoRA, two of the most important PEFT methods. We will also explore Weights & Biases for capturing the training metrics. We will be fine-tuning a small Salesforce co...
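The core idea behind LoRA can be sketched in a few lines of plain NumPy (a toy illustration, not the `peft` library): keep the pretrained weight W frozen and learn only a low-rank update B·A, so a layer with d_in × d_out weights needs just r·(d_in + d_out) trainable parameters:

```python
# Minimal LoRA sketch: frozen weight W plus a trainable low-rank update B @ A.
import numpy as np

d_in, d_out, r = 512, 512, 8            # r is the LoRA rank, r << d_in
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-init
                                        # so the model is unchanged at start

x = rng.normal(size=(d_in,))
y = W @ x + B @ (A @ x)                 # LoRA forward pass

# Trainable parameters per layer: 262,144 (full) vs 8,192 (LoRA, rank 8).
print(W.size, A.size + B.size)
```

Because B starts at zero, the adapted model initially behaves exactly like the base model; training then nudges only A and B, which is what makes the method parameter-efficient.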

Dive Into LoRA Adapters

Large Language Models (LLMs) have taken the world by storm. Over the last year, we have witnessed a massive leap in what they can do, going from quite narrow and restricted applications to engaging in fluent, multi-turn conversations. Isn’t it amazing how these models have shifted from e...