Model Adaptation & Fine-Tuning + Optimal Generative AI Model Deployment: Part #2
<h1>Continued from Part #1…</h1>
<p>A big challenge, and the bridge into this part of the blog, is that for vast LLMs such as our BLOOM model, which we are adapting to train and teach students across different subjects, the evaluation metrics from Part #1 are not enough on their own. In this part, we will apply one of the most efficient training techniques to our model, parameter-efficient fine-tuning (PEFT), then set benchmarks for the model and establish a model improvement pipeline. We will then dive into model deployment and how to make it highly performant and cost-optimal using AWS Inferentia2 and AWS Trainium with multi-model endpoints on none other than Amazon SageMaker.</p>
<h1>Parameter-Efficient Fine-Tuning Our BLOOM Model</h1>
<p>If you remember from Part #1, a model’s memory footprint depends not only on the data you store in it but also on the following:</p>
<p><img alt="" src="https://miro.medium.com/v2/resize:fit:700/1*pwYm4-z4o361JRRzORlcjQ.png" style="height:468px; width:700px" /></p>
<p>The footprint includes the model parameters themselves, which grow as the model scales, plus the gradients, optimizer states, and other training-time state such as weights and activations. Since we do not want to instruction fine-tune our model, an efficient way to preserve both performance and memory <strong><em>is to train our BLOOM model with parameter-efficient fine-tuning techniques instead of tuning the entire model</em></strong>.</p>
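<p>As a rough back-of-the-envelope sketch (assuming fp32 precision and the Adam optimizer, and ignoring activations), we can see how quickly full fine-tuning memory adds up; the parameter count below is for the bloom-7b1 checkpoint and is used purely for illustration:</p>
<pre>
# Rough estimate of training-time memory for a dense LLM.
# Assumes fp32 (4 bytes) weights and gradients, plus Adam, which keeps
# two extra fp32 moment tensors per parameter. Activations are ignored.

def training_memory_gb(num_params: float) -> float:
    bytes_per_param = 4   # weights (fp32)
    bytes_per_param += 4  # gradients (fp32)
    bytes_per_param += 8  # Adam first and second moments (fp32 each)
    return num_params * bytes_per_param / 1e9

# bloom-7b1 has roughly 7.1 billion parameters.
print(f"Full fine-tune of bloom-7b1: ~{training_memory_gb(7.1e9):.0f} GB")
# prints ~114 GB before activations, already far beyond a single GPU
</pre>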
<p>→ Here, we keep all of the original model weights frozen and add a small set of trainable weights, which are the only parameters updated during training. The result is an updated model whose fine-tuned, task-specific weights amount to no more than a couple of MBs. Instead of writing instructions for every task, we add a new trainable set of parameters and train the model on those. For example (a code sketch follows the list below):</p>
<ol>
<li>We can add a trainable set of weights for language understanding, along with examples that teach basic vocabulary.</li>
<li>In the same way, we can add examples and data from history and mathematics datasets, along with examples of effective ways to teach. The model is not bound by those examples: it can use reinforcement learning to build on what works best for each user, optimizing for the user’s learning progression and interactivity as it develops.</li>
</ol>
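<p>To make this concrete, here is a minimal sketch of one popular PEFT technique, LoRA, using Hugging Face’s <strong>peft</strong> library. The checkpoint name and the LoRA hyperparameters below are illustrative assumptions, not values prescribed by this post:</p>
<pre>
# Minimal LoRA sketch with Hugging Face `transformers` + `peft`.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

model_name = "bigscience/bloom-7b1"  # assumed checkpoint; any BLOOM size works
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Freeze the base model and attach small, trainable low-rank adapters.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                  # rank of the low-rank update matrices
    lora_alpha=32,        # scaling applied to the adapter output
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # BLOOM's fused attention projection
)
model = get_peft_model(model, lora_config)

# Only the adapter weights train; the billions of base weights stay frozen,
# so the saved adapter for each task is only a few MB.
model.print_trainable_parameters()
</pre>
<p>In practice you would train one adapter per task (vocabulary, history, mathematics) and save each with <strong>model.save_pretrained(...)</strong>, which stores only the adapter weights, so you can swap adapters over the same frozen base model at serving time.</p>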