LLMs for philosophers (and philosophy for LLMs)

<p><em>(Edit: since some people are reading this, I figured I&rsquo;d share some of my academic work on the topic &mdash;&nbsp;</em><a href="http://mipmckeever.weebly.com/uploads/9/4/1/3/94130725/paper.pdf" rel="noopener ugc nofollow" target="_blank"><em>this</em></a><em>&nbsp;is a paper about how to think about meaning when it comes to LLMs. The title is missing and the text is anonymized, as I put it through ChatGPT to help with anonymous review, but feel free to share it and give feedback.)</em></p> <p>With the release of ChatGPT, public awareness of the potential of large language models (LLMs) seems to have reached a new high. Interacting with it naturally raises old and deep questions about what we&rsquo;re actually seeing when we ask for a&nbsp;<a href="https://twitter.com/tqbf/status/1598513757805858820" rel="noopener ugc nofollow" target="_blank">biblical verse explaining how to remove a sandwich from a VCR</a>&nbsp;or get it to write code for us. Are we faced with an intelligent machine?</p> <p>Many answer no. The errors LLMs can easily be led into, and their lack of any connection to the world beyond having read the internet, to name two reasons, suggest that &lsquo;intelligence&rsquo; is not the right description.</p> <p><a href="https://mittmattmutt.medium.com/llms-for-philosophers-and-philosophy-for-llms-84a0da73f368"><strong>Website</strong></a></p>