Generative AI — LLMOps Architecture Patterns
<p>Deploying Large Language Models (LLMs) in the Enterprise</p>
<p>The recent announcement that OpenAI is considering launching an App Store for AI apps (<a href="https://www.reuters.com/technology/openai-plans-app-store-ai-software-information-2023-06-20/" rel="noopener ugc nofollow" target="_blank">link</a>) is quite interesting. Other major players, e.g., Google, AWS, and Hugging Face, can be expected to follow suit. While the motive is clear, namely to become the preferred platform for Generative AI (GenAI) / Large Language Model (LLM) adoption, there is also a risk that an enterprise app published on the platform will overshadow the underlying platform itself.</p>
<blockquote>
<p>It remains to be seen if the AI App Store will turn out to be as much of a game changer as the Apple App Store was for the iPhone / mobile devices — interesting times ahead!</p>
</blockquote>
<p>It also got me thinking about the LLM deployment options available to an enterprise today. I can think of at least four deployment architectures:</p>
<ol>
<li>Black-box LLM APIs</li>
<li>Enterprise Apps in LLM App Store</li>
<li>Enterprise LLMOps — LLM fine-tuning</li>
<li>Multi-agent LLM Orchestration</li>
</ol>
<p><img alt="" src="https://miro.medium.com/v2/resize:fit:700/1*qsDpstcc_scf_LDlCcpH-A.png" style="height:1172px; width:700px" /></p>
<p>Fig: LLMOps deployment architecture patterns</p>
<p>Some observations regarding the four deployment patterns:</p>
<h2>1. Black-box LLM APIs</h2>
<p>This is your classic ChatGPT [1] example, where we have black-box access to an LLM API/UI. Similar LLM APIs can be considered for other core Natural Language Processing (NLP) tasks, e.g., Knowledge Retrieval, Summarization, Auto-Correct, Translation, and Natural Language Generation (NLG).</p>
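<p>In practice, black-box access typically amounts to an HTTPS call against a hosted endpoint. The sketch below is a minimal illustration in Python, assuming the OpenAI chat-completions endpoint and an <code>OPENAI_API_KEY</code> environment variable; the model name, prompt wording, and temperature are illustrative choices, not prescriptions:</p>

```python
import json
import os
import urllib.request

# Public OpenAI chat-completions endpoint (one example of a black-box LLM API).
API_URL = "https://api.openai.com/v1/chat/completions"


def build_summarization_request(text: str, model: str = "gpt-3.5-turbo") -> dict:
    """Build a chat-completion payload asking the model to summarize `text`."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a concise summarizer."},
            {"role": "user", "content": f"Summarize the following text:\n\n{text}"},
        ],
        "temperature": 0.2,  # low temperature for more deterministic output
    }


def call_llm(payload: dict) -> str:
    """POST the payload to the black-box API and return the reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    payload = build_summarization_request(
        "LLMOps covers the practices needed to deploy LLMs in the enterprise."
    )
    print(call_llm(payload))
```

<p>The same request shape works for the other NLP tasks mentioned above — only the system/user prompts change, which is precisely what makes this pattern low-effort (and what limits control over the underlying model).</p>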