Applying LLMs to Enterprise Data: Concepts, Concerns, and Hot-Takes

Ask GPT-4 to prove that there are infinitely many prime numbers, while rhyming, and it delivers. But ask it how your team performed vs. plan last quarter, and it fails miserably. This illustrates a fundamental challenge of large language models (“LLMs”): they have a good grasp of general, public knowledge (like prime number theory), but are entirely unaware of proprietary, non-public information (how your team did last quarter).[1] And proprietary information is critical to the vast majority of enterprise workflows. A model that understands the public internet is cute, but of little use in its raw form to most organizations.

Over the past year, I’ve had the privilege of working with a number of organizations applying LLMs to enterprise use cases. This post details the key concepts and concerns that anyone embarking on such a journey should know, as well as a few hot-takes on how I think LLMs will evolve and the implications for ML product strategy. It’s intended for product managers, designers, engineers, and other readers with limited or no knowledge of how LLMs work “under the hood,” but some interest in learning the concepts without going into technical details.

Read the full article on Towards Data Science: https://towardsdatascience.com/applying-llms-to-enterprise-data-concepts-concerns-and-hot-takes-e19ded4bde88
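To make the opening example concrete, here is a minimal sketch of the knowledge gap, assuming the openai Python SDK (v1.x) and an OPENAI_API_KEY set in the environment; the model name and the quarterly figures are placeholders for illustration, not anything from the post.

```python
# Minimal sketch: public vs. proprietary knowledge (assumes openai>=1.0 and OPENAI_API_KEY).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(question: str) -> str:
    """Send a single-turn question to the model and return its reply."""
    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content


# Public knowledge: answerable from training data alone.
print(ask("Prove that there are infinitely many prime numbers, in rhyming verse."))

# Proprietary knowledge: these figures were never in the training data, so the model can only guess.
print(ask("How did my team perform vs. plan last quarter?"))

# The gap closes only when the missing context is supplied at query time (hypothetical numbers).
context = "Q3 plan: $2.0M revenue. Q3 actual: $1.6M revenue."
print(ask(f"Given this data: {context}\nHow did my team perform vs. plan last quarter?"))
```

The last call hints at the basic remedy: proprietary context has to be brought to the model at query time rather than expected to live in its weights.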