Tag: Finetuning

RAG vs Finetuning — Which Is the Best Tool to Boost Your LLM Application?

Prologue: As the wave of interest in Large Language Models (LLMs) surges, many developers and organisations are busy building applications that harness their power. However, when a pre-trained LLM doesn’t perform as expected or hoped out of the box, the question of how to improve the performa...

A Beginner’s Guide to LLM Fine-Tuning

The growing interest in Large Language Models (LLMs) has led to a surge in tools and wrappers designed to streamline their training process. Popular options include FastChat from LMSYS (used to train Vicuna) and Hugging Face’s transformers/trl libraries (used i...
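For a sense of what such wrappers streamline, here is a minimal supervised fine-tuning sketch using trl's SFTTrainer. The base model (gpt2), dataset (imdb) and step count are stand-ins chosen purely for illustration, not the setup described in the article.

```python
# Minimal supervised fine-tuning sketch with Hugging Face's trl library.
# Model, dataset and training length are placeholders, not the article's setup.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("imdb", split="train[:1%]")  # tiny text dataset as a stand-in

trainer = SFTTrainer(
    model="gpt2",                                   # loaded from the Hub by name
    train_dataset=dataset,                          # trl reads the "text" column by default
    args=SFTConfig(output_dir="sft-demo", max_steps=10),
)
trainer.train()
```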

Fine Tuning LLM: Parameter Efficient Fine Tuning (PEFT) — LoRA & QLoRA — Part 2

In this blog, we will look at the idea behind Parameter Efficient Fine Tuning (PEFT) and implement LoRA and QLoRA, two of the most important PEFT methods. We will also use “Weights and Biases” to capture the training metrics. We will be fine-tuning a small Salesforce co...
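As a quick orientation before clicking through, here is a minimal sketch of attaching LoRA adapters with the peft library. The base model (gpt2), the target module name and the hyperparameters are illustrative assumptions, not the exact settings used in the article.

```python
# A minimal sketch of attaching LoRA adapters with Hugging Face's peft library.
# The base model ("gpt2") and the hyperparameters below are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("gpt2")

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                         # rank of the low-rank update matrices
    lora_alpha=16,               # scaling factor applied to the LoRA update
    lora_dropout=0.05,
    target_modules=["c_attn"],   # GPT-2's fused attention projection; names vary by architecture
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # confirms only a small fraction of weights will be trained
```

For QLoRA, the same adapters are attached on top of a 4-bit quantized base model (for example via bitsandbytes), which further reduces the memory needed for fine-tuning.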