Machine Learning’s Public Perception Problem

<p>I was listening to a podcast recently with an assortment of intelligent, thoughtful laypeople (whose names I will not share, to be polite) talking about how AI can be used in healthcare. I had misgivings already, because they were using the term &ldquo;AI&rdquo;, which I find frequently means everything and nothing at the same time. But I listened on, and they discussed ideas for how you could incorporate AI tools (really just machine learning) into medical practice. These tools included suggesting diagnoses based on symptoms and adjusting medication dosages based on patient vitals and conditions, which seemed promising and practical.</p>

<p>However, in the next moment I was a bit shocked, because one speaker (a medical doctor) said (I paraphrase) &ldquo;it seems like AI has gotten worse at math&rdquo;. This stayed with me not only through the rest of the podcast but throughout the weekend.</p>

<p>When educated, smart laypeople are this confused and this misinformed about what machine learning is, we have a problem. (I&rsquo;m going to avoid using the term &ldquo;AI&rdquo; because I really believe it confuses our meaning more than it clarifies. In this context, these individuals were discussing machine learning and products employing it, even if they were unaware of it.)</p>

<p>In the case of the doctor, he was likely referring to Large Language Models (LLMs) when he made the comment about math. He had somehow been led to believe that a model trained to arrange words in a sophisticated way in response to prompting should also be able to conduct mathematical calculations. It isn&rsquo;t good at that (it wasn&rsquo;t trained to be!), and his image of all areas of machine learning was tarnished by this reality.</p>

<p><a href="https://towardsdatascience.com/machine-learnings-public-perception-problem-48daf587e7a8">Website</a></p>