LLM Economics: ChatGPT vs Open-Source

TLDR: For lower usage, in the range of thousands of requests per day, ChatGPT works out cheaper than open-source LLMs deployed to AWS. For millions of requests per day, open-source models deployed on AWS work out cheaper. (As of writing this article on April 24th, 2023.)

Large Language Models are taking the world by storm. Transformers were introduced in 2017, followed by breakthrough models like BERT, GPT, and BART, with hundreds of millions of parameters and capable of performing multiple language tasks such as sentiment analysis, Q&A, and classification.

A couple of years ago, researchers from OpenAI and Google published multiple papers showing that large language models with more than tens of billions of parameters start to show emergent capabilities: they seemingly understand complex aspects of language and produce almost human-like responses.

Read the full article: https://towardsdatascience.com/llm-economics-chatgpt-vs-open-source-dfc29f69fec1
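To make the TLDR concrete, here is a minimal back-of-the-envelope cost sketch, not taken from the article. All numbers are assumptions for illustration: an API price per 1K tokens, an average token count per request, an hourly rate for a single always-on GPU instance on AWS, and a rough per-instance throughput ceiling. The point is the shape of the curves, not the exact figures: API cost grows linearly with usage, while self-hosting is a largely fixed cost that only steps up as instances are added.

```python
import math

# Illustrative assumptions only; actual prices and throughput will differ.
CHATGPT_PRICE_PER_1K_TOKENS = 0.002      # USD per 1K tokens (assumed API price)
TOKENS_PER_REQUEST = 1_000               # assumed prompt + completion tokens per request
AWS_GPU_INSTANCE_HOURLY = 4.10           # USD/hour for an on-demand GPU instance (assumed)
MAX_REQUESTS_PER_INSTANCE_PER_DAY = 500_000  # assumed serving capacity of one instance


def chatgpt_daily_cost(requests_per_day: int) -> float:
    """API cost scales linearly with the number of requests."""
    return requests_per_day * (TOKENS_PER_REQUEST / 1_000) * CHATGPT_PRICE_PER_1K_TOKENS


def self_hosted_daily_cost(requests_per_day: int) -> float:
    """Self-hosting is a mostly fixed cost; instances are added as load grows."""
    instances = max(1, math.ceil(requests_per_day / MAX_REQUESTS_PER_INSTANCE_PER_DAY))
    return instances * AWS_GPU_INSTANCE_HOURLY * 24


if __name__ == "__main__":
    for rpd in (1_000, 100_000, 1_000_000, 10_000_000):
        print(f"{rpd:>12,} req/day  "
              f"ChatGPT: ${chatgpt_daily_cost(rpd):>12,.2f}  "
              f"self-hosted: ${self_hosted_daily_cost(rpd):>12,.2f}")
```

With these assumed numbers, the API is far cheaper at a few thousand requests per day (dollars versus roughly a hundred dollars of idle GPU time), while at millions of requests per day the fixed-cost self-hosted setup pulls ahead, which is the crossover the TLDR describes.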
Tags: LLM Economics