In this post and the next, I will walk you through the fine-tuning process for a Large Language Model (LLM) such as a Generative Pre-trained Transformer (GPT). There are two prominent fine-tuning methods: Prefix-tuning and LoRA (Low-Rank Adaptation of Large Language Models). This post explains Prefix-tuning; the next post, “Fine-tuning a GPT — LoRA,” covers LoRA. In both posts, I include a code example and walk through the code line by line. In the LoRA article, I will also discuss why fine-tuning an LLM is so GPU-intensive.