12 Prompt Engineering Techniques

*I'm currently the [Chief Evangelist](https://www.linkedin.com/in/cobusgreyling) @ [HumanFirst](https://www.humanfirst.ai/). I explore & write about all things at the intersection of AI & language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces & more.*

## Least-To-Most Prompting

Inference is the process of reaching a conclusion based on evidence and reasoning. Reasoning, in turn, can be elicited from an LLM by giving it a few examples of how to reason over evidence.

Hence a prompting strategy named *least-to-most prompting* was developed. The method rests on two steps:

1. Decompose a complex problem into a series of simpler sub-problems.
2. Solve each of these sub-problems in turn.

Solving each sub-problem is made easier by the answers to the sub-problems solved before it.

Least-to-most prompting is therefore a technique that uses a progressive sequence of prompts to build toward a final conclusion, as the sketch below illustrates.
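To make the two-stage strategy concrete, here is a minimal Python sketch (not from the original article). The `llm` callable, the decomposition wording, and the regex that extracts the numbered sub-problems are all assumptions; swap in your own model client and prompt phrasing.

```python
import re
from typing import Callable


def least_to_most(problem: str, llm: Callable[[str], str]) -> str:
    """Solve `problem` with least-to-most prompting, using `llm` to complete prompts."""
    # Stage 1: prompt the model to decompose the problem into simpler sub-problems.
    decomposition = llm(
        "Break the following problem into a numbered list of simpler "
        f"sub-problems, ordered from easiest to hardest:\n\n{problem}"
    )
    # Pull the numbered items ("1. ..." or "1) ...") out as the sub-problems.
    sub_problems = re.findall(r"^\s*\d+[.)]\s*(.+)$", decomposition, flags=re.MULTILINE)

    # Stage 2: solve the sub-problems in order, feeding every previously
    # answered sub-problem back into the next prompt as context.
    solved: list[tuple[str, str]] = []
    answer = ""
    for sub_problem in sub_problems:
        prompt = (
            f"Original problem: {problem}\n\n"
            + "".join(f"Q: {q}\nA: {a}\n\n" for q, a in solved)
            + f"Q: {sub_problem}\nA:"
        )
        answer = llm(prompt).strip()
        solved.append((sub_problem, answer))

    # The answer to the final (hardest) sub-problem answers the original problem.
    return answer
```

Usage would look like `least_to_most("It takes Amy 4 minutes to climb the slide and 1 minute to slide down. How many times can she slide in 15 minutes?", llm=my_client)`, where `my_client` is whatever function you use to send a prompt string to a model and get its text completion back. The key design point is that every later prompt carries the full list of already-solved question/answer pairs, so the hardest sub-problem is answered with all the easier results as context.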