Applying LLMs to Enterprise Data: Concepts, Concerns, and Hot-Takes
<p>Ask GPT-4 to prove there are infinite prime numbers — while rhyming — and it delivers. But ask it how your team performed vs plan last quarter, and it will fail miserably. This illustrates a fundamental challenge of large language models (“LLMs”): they have a good grasp of general, public knowledge (like prime number theory), but are entirely unaware of proprietary, non-public information (how your team did last quarter).[1] And proprietary information is critical to the vast majority of enterprise workflows. A model that understands the public internet is cute, but of little use in its raw form to most organizations.</p>
<p>Over the past year, I’ve had the privilege of working with a number of organizations applying LLMs to enterprise use cases. This post details key concepts and concerns that anyone embarking on such a journey should know, as well as a few hot-takes on how I think LLMs will evolve and the implications for ML product strategy. It’s intended for product managers, designers, engineers, and other readers with limited or no knowledge of how LLMs work “under the hood”, but an interest in understanding the concepts without going into technical details.</p>
<p><a href="https://towardsdatascience.com/applying-llms-to-enterprise-data-concepts-concerns-and-hot-takes-e19ded4bde88">Read the full article on Towards Data Science</a></p>