The problem with anthropomorphizing algorithms

<p>More and more people are using generative algorithms for a growing number of tasks, whether for text or image generation. OpenAI&rsquo;s decision to apply the classic Silicon Valley philosophy of &ldquo;move fast and break things&rdquo; to machine learning, bringing to market an unfinished product that gets many of its answers wrong but aspires to improve as more people use it, seems to be working: never has a technology been adopted so rapidly, and never has there been so much talk about one.</p>
<p>However, as with any rapid popularization, the assimilation process is far from perfect. Many of the people who have started using generative algorithms such as ChatGPT on a regular basis tend to anthropomorphize the technology, treating it as if they were dealing with a fellow human. Since the interaction takes place through a prompt, it is very common to find people requesting a task in extremely ambiguous terms, leaving unstated many of the factors that could influence the quality of the response, or even asking for things as a favor or saying thank you.</p>
<p>In the exams I have given to some of the groups I have taught since the advent of ChatGPT and similar algorithms, it is dramatically evident which students know how to use them well by asking questions in the right way, versus those who simply pose a question without elaborating, or who even copy and paste the exam question itself. We will see greater refinement and sophistication in the use of these tools over time; at this stage of technological adoption, few people have fully carried out and internalized that process.</p>
<p><a href="https://medium.com/enrique-dans/the-problem-with-anthropomorphizing-algorithms-e447a5b1ca8d">Visit Now</a></p>