Can We Stop LLMs from Hallucinating?
<p>While Large Language Models (LLMs) have captured nearly everyone’s attention, wide-scale deployment of the technology is held back by a rather annoying trait — these models tend to hallucinate. In simple terms, they sometimes just make things up, and worst of all, the fabrications often look highly convincing.</p>
<p>Hallucinations, frequent or not, bring two major issues. LLMs cannot be deployed directly in sensitive or brittle fields where a single mistake can be highly costly. In addition, hallucinations sow general distrust, as users are expected to verify everything an LLM produces, which at least in part defeats the purpose of the technology.</p>
<p>Academia also seems to regard hallucinations as a major problem: dozens of research papers published in 2023 discuss the issue and attempt to solve it. I, however, <a href="https://www.youtube.com/watch?v=mViTAXCg1xQ&feature=youtu.be" rel="noopener ugc nofollow" target="_blank">would tend to agree with Yann LeCun</a>, Meta’s Chief AI Scientist, that hallucinations are not resolvable at all; eliminating them would require a complete revamp of the technology.</p>
<p><a href="https://towardsdatascience.com/can-we-stop-llms-from-hallucinating-17c4ebd652c6"><strong>Visit Now</strong></a></p>