Can We Stop LLMs from Hallucinating?
<p>While Large Language Models (LLMs) have captured nearly everyone’s attention, wide-scale deployment of the technology remains limited by one rather annoying trait: these models tend to hallucinate. In simple terms, they sometimes just make things up, and worst of all, the fabrications often look highly convincing.</p>
<p>Hallucinations, frequent or not, bring with them two major issues. First, LLMs can’t be deployed directly in sensitive or high-stakes fields where a single mistake can be extremely costly. Second, hallucinations sow general distrust, as users are expected to verify everything an LLM outputs, which, at least in part, defeats the purpose of the technology.</p>
<p>Academia also seems to consider hallucinations a major problem: dozens of research papers published in 2023 discuss and attempt to solve the issue. I, however, <a href="https://www.youtube.com/watch?v=mViTAXCg1xQ&feature=youtu.be" rel="noopener ugc nofollow" target="_blank">tend to agree with Yann LeCun</a>, Meta’s Chief AI Scientist, that hallucinations are not resolvable at all. Eliminating them would require a complete revamp of the technology.</p>
<h1>Hallucinating false statements</h1>
<p>There are two important aspects of any LLM that, I think, make hallucinations unsolvable. Start with the rather obvious technological underpinning: LLMs, like any other machine learning model, are stochastic in nature. In simple terms, they make predictions.</p>
<p>While they’re certainly much more advanced than “glorified autocomplete,” the underlying technology still makes statistical predictions about tokens. This is both a strength and a weakness of LLMs.</p>
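<p>Under the hood, each generation step is exactly that: the model scores every token in its vocabulary and then samples one from the resulting probability distribution. The minimal sketch below (in Python with NumPy, using made-up logits and a toy four-word vocabulary rather than any real model’s output) shows why that matters: a plausible but wrong continuation can carry nearly as much probability as the correct one, and sampling will sometimes pick it.</p>
<pre><code>import numpy as np

# Hypothetical next-token logits for a prompt like "The capital of Australia is",
# over a tiny illustrative vocabulary. The numbers are invented for this sketch.
vocab = ["Canberra", "Sydney", "Melbourne", "Vienna"]
logits = np.array([2.1, 1.9, 0.8, -1.5])

def sample_next_token(logits, temperature=1.0, rng=None):
    """Turn logits into a probability distribution and draw one token from it."""
    rng = rng or np.random.default_rng()
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs), probs

idx, probs = sample_next_token(logits, temperature=1.0)
for token, p in zip(vocab, probs):
    print(f"{token:&gt;10}: {p:.2f}")
print("sampled:", vocab[idx])
</code></pre>
<p>Nothing in this loop checks facts. The model only picks whichever token is statistically plausible in context, which is exactly how a confident-sounding hallucination gets produced.</p>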