<h1>Steering LLMs with Prompt Engineering</h1>
<p>Large Language Models (LLMs) have captured our attention and imagination in the six months since ChatGPT was announced. However, an LLM’s behavior is often stochastic, which makes it difficult to integrate into a business application with well-defined limits. In this article, we will explore some ways to make LLMs more predictable and controllable through prompt engineering.</p>
<blockquote>
<p><em>“So, what is prompt engineering?”</em></p>
</blockquote>
<p>In a sense, if you have used ChatGPT, you have already engaged in prompt engineering. When we ask GPT-3.5/4 (the LLM behind ChatGPT… if you’re reading this in 2023) a question, and then often provide it with additional follow-up information, we are essentially prompting the LLM to produce a downstream answer that we find useful. The great thing about this process is that it comes naturally to us, like holding a back-and-forth conversation with another human, except with an AI/LLM instead. In short, prompt engineering is the practice of crafting the appropriate prompts to elicit the desired response from the LLM.</p>
<p>But what if we want to do this programmatically? What if we could design our prompts beforehand and then use them to steer or control the LLM’s response?</p>
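<p>As a minimal sketch of what that could look like, the snippet below defines a reusable prompt template and fills it in before calling the model. It assumes the <code>openai</code> Python package (the pre-v1 <code>ChatCompletion</code> interface) and an <code>OPENAI_API_KEY</code> environment variable; the template text and the <code>summarize_review</code> helper are illustrative, not from the original article.</p>
<pre><code>import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# A prompt designed beforehand: fixed instructions with a slot for user input.
# Constraining the output format is one way to make the response predictable.
TEMPLATE = (
    "You are a customer-support assistant. Summarize the review below "
    "in exactly one sentence, then label its sentiment as POSITIVE, "
    "NEGATIVE, or MIXED on a second line.\n\n"
    "Review: {review}"
)


def summarize_review(review: str) -> str:
    """Fill the template and send it to the model (illustrative helper)."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": TEMPLATE.format(review=review)}],
        temperature=0,  # lower temperature reduces the stochasticity noted above
    )
    return response["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(summarize_review("Shipping was slow, but the product itself is great."))
</code></pre>
<p>Because the instructions and output format are fixed in the template, only the review text varies between calls, which makes the model’s responses far easier to test and constrain.</p>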
<p><a href="https://betterprogramming.pub/steering-llms-with-prompt-engineering-dbaf77b4c7a1"><strong>Learn More</strong></a></p>