Large Language Models Don’t “Hallucinate”
<p>You hear it everywhere. Whenever someone discusses the tendency of Large Language Models (LLMs) like ChatGPT to make up facts or present fictional information, they say the LLM is hallucinating. Beyond simply being a term used by the <a href="https://cybernews.com/tech/chatgpts-bard-ai-answers-hallucination/" rel="noopener ugc nofollow" target="_blank">media</a>, it is also used by <a href="https://openai.com/research/instruction-following" rel="noopener ugc nofollow" target="_blank">researchers</a> and <a href="https://twitter.com/search?q=chatgpt+hallucination&src=typed_query" rel="noopener ugc nofollow" target="_blank">laypeople</a> alike to refer to any instance in which an LLM produces text that, in one way or another, does not correspond to reality.</p>
<p>Despite its prevalence, the term is, at best, somewhat deceptive, and at worst, actively counterproductive to thinking about what LLMs are actually doing when they produce text that is deemed problematic or untrue. It seems to me that “hallucination” is a bad term for several reasons. It attributes to LLMs properties they don’t have while ignoring the real dynamics behind the production of made-up information in their outputs.</p>
<p><a href="https://betterprogramming.pub/large-language-models-dont-hallucinate-b9bdfa202edf"><strong>Read More</strong></a></p>