A Non-Cynical Reading of AI Risk Letters

<p>I hadn&rsquo;t planned to write about the&nbsp;<a href="https://www.safe.ai/statement-on-ai-risk" rel="noopener ugc nofollow" target="_blank">CAIS statement on AI Risk</a>&nbsp;released on May 30, but the&nbsp;<a href="https://twitter.com/DrTechlash/status/1665129656683761664" rel="noopener ugc nofollow" target="_blank">press goes crazy</a>&nbsp;every time&nbsp;<a href="https://futureoflife.org/open-letter/pause-giant-ai-experiments/" rel="noopener ugc nofollow" target="_blank">one of these</a>&nbsp;is published, so my contribution won&rsquo;t add much noise to the pile anyway. Even so, I wouldn&rsquo;t have posted this if I had nothing to add to the takeaways I&rsquo;ve seen on Twitter and in the news. But I do.</p> <p>The existential risk of AI has recently become a constant focus for the community (other risks are mentioned, but only as fine print). The explanations I&rsquo;ve read elsewhere for why that&rsquo;s the case are incomplete at best. Loose ends are common: if Altman just wanted to get richer, why does he hold&nbsp;<a href="https://youtu.be/fP5YdyjTfG0?t=6194" rel="noopener ugc nofollow" target="_blank">no equity</a>&nbsp;in OpenAI? If he just wanted political power, why was he&nbsp;<a href="https://blog.samaltman.com/machine-intelligence-part-1" rel="noopener ugc nofollow" target="_blank">openly talking about superintelligence</a>&nbsp;<em>before</em>&nbsp;OpenAI?
If everything is about business, why are academics signing the letters?</p> <p>I recognize this topic isn&rsquo;t easy to analyze: it involves remarkable scientific progress (at least in kind, if not also in impact) combined with unprecedented political and financial tensions that interact with the individual psychologies of the people involved (both researchers and builders). Not to mention that those people may partially hide their beliefs to shield them from public scrutiny, making a clean assessment even harder.</p> <p><a href="https://albertoromgar.medium.com/a-non-cynical-reading-of-ai-risk-letters-ef7c8d65e010"><strong>Website</strong></a></p>
Tags: AI Risk