Can We Stop LLMs from Hallucinating?

While Large Language Models (LLMs) have captured nearly everyone's attention, wide-scale deployment of the technology is held back by one rather annoying trait: these models tend to hallucinate. In simple terms, they sometimes just make things up, and worst of all, the output often looks highly convincing.

Hallucinations, frequent or not, bring two major issues. First, LLMs can't be deployed directly in sensitive or brittle fields where a single mistake can be highly costly. Second, hallucinations sow general distrust: users are expected to verify everything an LLM produces, which, at least in part, defeats the purpose of the technology.

Academia also seems to regard hallucinations as a major problem, as dozens of research papers published in 2023 discuss and attempt to solve the issue. I, however, [tend to agree with Yann LeCun](https://www.youtube.com/watch?v=mViTAXCg1xQ&feature=youtu.be), Meta's Chief AI Scientist, that hallucinations are not resolvable at all. Eliminating them would require a complete revamp of the technology.

# Hallucinating false statements

There are two important aspects of any LLM which, I think, make hallucinations unsolvable. Start with the rather obvious technological underpinning: LLMs, like any other machine learning model, are stochastic in nature. In simple terms, they make predictions.

While they're certainly much more advanced than "glorified autocomplete," the underlying technology still relies on statistical predictions over tokens. This is both a strength and a weakness of LLMs.
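To make the point about statistical token prediction concrete, here is a minimal, self-contained sketch of the sampling step at the heart of text generation. The vocabulary, scores, and function name below are invented for illustration; a real LLM produces logits over tens of thousands of tokens at every step, but the principle is the same: the model draws the next token from a probability distribution rather than looking it up in a store of verified facts.

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Sample one token from a softmax distribution over candidate tokens.

    `logits` maps each candidate token to an unnormalized score; higher
    scores are more likely, but never certain, to be chosen.
    """
    # Softmax with temperature: lower temperature sharpens the distribution,
    # higher temperature flattens it (more randomness, more surprising picks).
    scaled = {tok: score / temperature for tok, score in logits.items()}
    max_score = max(scaled.values())  # subtract the max for numerical stability
    exp_scores = {tok: math.exp(s - max_score) for tok, s in scaled.items()}
    total = sum(exp_scores.values())
    probs = {tok: e / total for tok, e in exp_scores.items()}

    # Draw a token according to its probability; even low-probability
    # (and possibly factually wrong) continuations can be selected.
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Toy, made-up scores for the next token after "The capital of Australia is".
# The plausible-but-wrong "Sydney" still carries non-zero probability mass.
logits = {"Canberra": 2.1, "Sydney": 1.7, "Melbourne": 0.9, "a": -1.0}
print(sample_next_token(logits, temperature=0.8))
```

Because anything with non-zero probability can be emitted, a fluent but wrong continuation such as "Sydney" in the toy example remains a possible output on any given run, which is exactly the stochasticity described above.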