Parameter-Efficient Fine-Tuning (PEFT) for LLMs: A Comprehensive Introduction

<p>Large Language Models (LLMs) live up to their name: they typically have anywhere from 7 to 70 billion parameters. Loading a 70-billion-parameter model in full precision requires roughly 280 GB of GPU memory (a quick back-of-the-envelope calculation is sketched at the end of this section). Training such a model means updating billions of parameters over millions or billions of documents, and the computation required to do so is substantial. The self-supervised training of these models is expensive, <a href="https://www.wired.com/story/openai-ceo-sam-altman-the-age-of-giant-ai-models-is-already-over/" rel="noopener ugc nofollow" target="_blank">costing companies up to $100 million</a>.</p>
<p>For the rest of us, there is significant interest in adapting these models to our own data. With comparatively limited datasets and computing power, how do we build models that improve on what the major players offer, at a fraction of the cost?</p>
<p>This is where the research field of Parameter-Efficient Fine-Tuning (PEFT) comes into play. Through various techniques, which we will soon explore in detail, we can augment small sections of these models so they are better suited to the tasks we aim to complete (a minimal example follows below).</p>
<p>After reading this article, you will have a conceptual grasp of each PEFT technique available in Hugging Face and be able to distinguish them from one another. One of the most helpful overviews I found before writing this article was a <a href="https://www.reddit.com/r/MachineLearning/comments/14pkibg/d_is_there_a_difference_between_ptuning_and/jqkdam8/?utm_source=share&amp;utm_medium=web3x&amp;utm_name=web3xcss&amp;utm_term=1&amp;utm_content=share_button" rel="noopener ugc nofollow" target="_blank">Reddit comment</a>. There is also an <a href="https://lightning.ai/pages/community/article/understanding-llama-adapters/" rel="noopener ugc nofollow" target="_blank">exceptional article</a> from lightning.ai (the creators of PyTorch Lightning).</p>
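<p>To make the scale concrete, here is the back-of-the-envelope arithmetic behind the 280 GB figure above. The numbers are rough estimates, not measurements, and real memory usage depends on precision, optimizer choice, activations, and parallelism strategy:</p>
<pre><code>
# Rough GPU memory estimate for a 70B-parameter model (illustrative only).

PARAMS = 70e9       # 70 billion parameters
BYTES_FP32 = 4      # full precision (float32) = 4 bytes per parameter

weights_gb = PARAMS * BYTES_FP32 / 1e9
print(f"Weights alone (FP32): {weights_gb:.0f} GB")   # ~280 GB

# Full fine-tuning with an Adam-style optimizer roughly adds gradients
# (4 bytes) plus two optimizer states (8 bytes) per parameter on top
# of the weights themselves.
full_finetune_gb = PARAMS * (4 + 4 + 8) / 1e9
print(f"Rough full fine-tuning footprint: {full_finetune_gb:.0f} GB")  # ~1120 GB
</code></pre>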
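<p>As a preview of what "augmenting small sections" of a model looks like in practice, the sketch below wraps a base model with a LoRA adapter using Hugging Face's <code>peft</code> library. The base model name and hyperparameters are placeholders chosen for illustration, not a recommendation:</p>
<pre><code>
# Minimal sketch: attach a LoRA adapter to a pretrained causal LM with
# the Hugging Face `peft` library. Model name and settings are examples.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor for the adapter output
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

peft_model = get_peft_model(base_model, config)
peft_model.print_trainable_parameters()  # adapter weights are a tiny fraction of the total
</code></pre>
<p>Only the small adapter matrices are marked trainable; the original model weights stay frozen, which is what makes the approach parameter-efficient.</p>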