A Non-Cynical Reading of AI Risk Letters
<p>I hadn’t planned to write about the <a href="https://www.safe.ai/statement-on-ai-risk" rel="noopener ugc nofollow" target="_blank">CAIS statement on AI Risk</a> released on May 30. The <a href="https://twitter.com/DrTechlash/status/1665129656683761664" rel="noopener ugc nofollow" target="_blank">press goes crazy</a> every time <a href="https://futureoflife.org/open-letter/pause-giant-ai-experiments/" rel="noopener ugc nofollow" target="_blank">one of these</a> is published, so whatever I write will hardly add to the noise anyway. Still, I wouldn’t have posted this if I had nothing to say beyond the takeaways I’ve seen on Twitter and in the news. But I do.</p>
<p>The existential risk of AI has recently become a constant focus for the community (other risks get mentioned, but only as fine print). The explanations I’ve read elsewhere for why that’s the case are incomplete at best, and they leave loose ends: if Altman just wanted to get richer, why does he have <a href="https://youtu.be/fP5YdyjTfG0?t=6194" rel="noopener ugc nofollow" target="_blank">no equity</a> in OpenAI? If he just wanted political power, why was he <a href="https://blog.samaltman.com/machine-intelligence-part-1" rel="noopener ugc nofollow" target="_blank">openly talking about superintelligence</a> <em>before</em> OpenAI existed? If everything is about business, why are academics signing the letters?</p>
<p>I recognize this topic isn’t easy to analyze. It involves remarkable scientific progress (at least in kind, if not also in impact) combined with unprecedented political and financial tensions, all of which interact with the individual psychologies of the people involved, both researchers and builders. On top of that, those people may partially hide their beliefs to shield them from public scrutiny, which makes a clean assessment difficult.</p>