LLMs for philosophers (and philosophy for LLMs)
<p><em>(Edit: since some people are reading this, I figured I’d share some of my academic work on the topic — </em><a href="http://mipmckeever.weebly.com/uploads/9/4/1/3/94130725/paper.pdf" rel="noopener ugc nofollow" target="_blank"><em>this</em></a><em> is about how to think of meaning when it comes to LLMs. The title is missing and it’s anonymous, and I ran it through ChatGPT to help with anonymous review, but feel free to share it, give feedback, etc.)</em></p>
<p>With the release of ChatGPT it feels like public awareness of the potential of large language models (LLMs) has reached a new high. And interacting with it naturally raises old and deep questions about what we’re actually seeing when we ask for a <a href="https://twitter.com/tqbf/status/1598513757805858820" rel="noopener ugc nofollow" target="_blank">biblical verse explaining how to remove a sandwich from a VCR</a> or get it to write code for us. Are we faced with an intelligent machine?</p>
<p>Many answer no. The errors LLMs can easily be led into, and their lack of any connection to the world beyond having read the internet, are two reasons to think ‘intelligence’ is not the right description.</p>