The Art of Prompt Design: Use Clear Syntax

<p>This is the first installment of a series on how to use <code><a href="https://github.com/microsoft/guidance" rel="noopener ugc nofollow" target="_blank">guidance</a></code> to control large language models (LLMs), written jointly with <a href="https://medium.com/@marcotcr" rel="noopener">Marco Tulio Ribeiro</a>. We'll start from the basics and work our way up to more advanced topics.</p>

<p>In this post, we'll show that <strong>clear syntax</strong> enables you to communicate your intent to the LLM, and also ensures that outputs are easy to parse (like JSON that is guaranteed to be valid). For the sake of clarity and reproducibility, we'll start with an open-source StableLM model without fine-tuning, then show how the same ideas apply to fine-tuned models like ChatGPT / GPT-4. All the code below is available in a notebook for you to reproduce if you like.</p>

<h1><strong>Clear syntax helps with parsing the output</strong></h1>

<p>The first and most obvious benefit of using clear syntax is that it makes it easier to parse the output of the LLM. Even if the LLM generates a correct answer, it may be difficult to programmatically extract the desired information from its output. For example, consider the following <code>guidance</code> prompt (where <code>{{gen 'answer'}}</code> is a <code>guidance</code> command to generate text from the LLM).</p>
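<p>To illustrate the parsing benefit without depending on any particular model, here is a minimal sketch in plain Python. The <code>llm_output</code> string and the <code>&lt;answer&gt;</code> tag convention are assumptions for illustration; the point is that when the prompt forces the model to wrap its answer in explicit delimiters, extraction becomes a trivial, reliable match rather than guesswork over free-form text.</p>

```python
import re

# Hypothetical raw LLM completion. Because the prompt demanded the answer be
# wrapped in explicit <answer>...</answer> tags, the surrounding chatter
# ("Sure! Here are some names:") no longer matters for extraction.
llm_output = "Sure! Here are some names:\n<answer>Maximus</answer>"

def extract_answer(text: str) -> str:
    """Pull the answer out of an explicitly delimited region of the output."""
    match = re.search(r"<answer>(.*?)</answer>", text, re.DOTALL)
    if match is None:
        raise ValueError("No <answer>...</answer> block found in output")
    return match.group(1).strip()

print(extract_answer(llm_output))  # -> Maximus
```

<p>Without such delimiters, you would have to heuristically strip preambles and trailing commentary, which breaks as soon as the model phrases its response differently.</p>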