First Test of LlaMa2: Meta’s Leap in the World of Open-Source LLMs

<p><a href="https://investor.fb.com/home/default.aspx" rel="noopener ugc nofollow" target="_blank">Meta&rsquo;s</a>&nbsp;launch of&nbsp;<a href="https://ai.meta.com/llama/" rel="noopener ugc nofollow" target="_blank">LlaMa2</a>, an advanced open-source&nbsp;<a href="https://machinelearningmastery.com/what-are-large-language-models/" rel="noopener ugc nofollow" target="_blank">LLM</a>&nbsp;(Large Language Model), has excited AI enthusiasts for several reasons. Despite Meta&rsquo;s origins as a social media company, the release of LlaMa2 firmly positions it as an AI powerhouse competing directly with OpenAI, with real potential to shape the future of LLMs.</p> <h1>An Overview of the Test Set</h1> <p>The assessment consisted of three test sets with different context sizes &mdash; 100, 500, and 1000 tokens. Each set contained 10 questions, each based on a sentence from a product description. For the 100-token set, product labels served as the context, and the model was tasked with drawing a conclusion from that context. The model&rsquo;s responses were then compared against a validation set; only exact matches counted as correct.</p> <p>For the 500- and 1000-token sets, the model was instructed to extract values for a list of attributes from the context. An answer was accepted only if every attribute matched the validation set exactly.</p> <p><a href="https://medium.datadriveninvestor.com/testing-llama2-metas-leap-in-the-world-of-open-source-llms-74d72c9fe515"><strong>Learn More</strong></a></p>
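The scoring rule described above &mdash; exact matches only, and for the larger sets every attribute must be extracted correctly &mdash; could be sketched roughly as follows. The attribute names and values here are illustrative placeholders, not data from the actual test sets:

```python
def exact_match(prediction: str, reference: str) -> bool:
    """A response counts only if it matches the validation answer exactly
    (surrounding whitespace trimmed)."""
    return prediction.strip() == reference.strip()


def score_attributes(predicted: dict, reference: dict) -> bool:
    """For the 500- and 1000-token sets: the answer is accepted only if
    every attribute in the validation set was extracted exactly."""
    return all(
        key in predicted and exact_match(predicted[key], reference[key])
        for key in reference
    )


# Hypothetical example: one product with two extracted attributes
reference = {"color": "black", "weight": "1.2 kg"}
predicted = {"color": "black", "weight": "1.2 kg"}
print(score_attributes(predicted, reference))  # True

# A single missing or mismatched attribute invalidates the whole answer
partial = {"color": "black"}
print(score_attributes(partial, reference))  # False
```

This all-or-nothing criterion is strict by design: a response that gets most attributes right still scores zero, which keeps the comparison against the validation set unambiguous.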