Letting AI Into “The Mind Club”
<p>Given how hot AI has been lately, there’s a recurring debate about whether AI will become conscious. When could we say that it’s … sentient? That it has a mind?</p>
<p>I’m not gonna give you that answer, or even try. There is no widely agreed-upon definition of what it means to be conscious, or of how consciousness emerges in humans. It’s a super interesting question, and definitely important to explore! But it remains unanswered: philosophers and scientists still <em>hotly</em> argue over it. Indeed, if you’ve heard any tech dudes proclaiming that today’s large-language-model chatbots are “conscious” or “sentient” or “alive” or whatevs, you are very likely in the presence of an argument that is, as we say, <em>not even wrong.</em></p>
<p>But! There is a narrower question about AI and the mind, and unlike that previous one, it’s a question we <em>can</em> begin to probe.</p>
<p>To wit: what are the situations in which we humans <em>regard</em> AI as being conscious? When — and why — do we treat machines as if they possessed a mind?</p>
<p>And what precisely are the implications of that?</p>
<p>Yeah, this is a bit of a dodge of the original question, I realize. It’s the same dodge Alan Turing used when he formulated his “Imitation Game”, i.e., the Turing Test. As Turing argued, if a machine can fool you into thinking it’s human, then we might as well regard it as, in his words, a thinking machine.</p>
<p><strong><a href="https://clivethompson.medium.com/letting-ai-into-the-mind-club-48f44d12ea98">Website</a></strong></p>