Climate Change is a Bigger Existential Threat Than AI

<p>It&#39;s been <em>the</em> year of fretting about AI. While AI raises many ethical questions, labor being one of the most pressing (see <a href="https://medium.com/discourse/the-work-of-art-in-the-age-of-ai-3219c7debce8" rel="noopener"><em>The Work of Art in the Age of AI</em></a>), what AI evangelists talk about is often rooted in &quot;Longtermist&quot; concerns such as the end of human civilization as we know it. &quot;Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,&quot; <a href="https://www.safe.ai/statement-on-ai-risk#open-letter" rel="noopener ugc nofollow" target="_blank">reads a statement</a> signed by CEOs, academics, and other members of the business elite.</p> <p>This framing has always been precarious, not only because it overstates the technology&#39;s current capabilities (AI is nowhere near Skynet levels) but because it undercuts the <em>actual</em> existential threat we are currently facing: climate change. Over the next few years, our society will be shaken to its core, not by AI but by our warming world, and any conversation not grounded in these concerns is fundamentally unserious, a red herring.</p> <h2>Existential Concerns About AI Are a Fantasy</h2> <p>To be clear, some concerns about AI are valid. Like most things under capitalism, technology over the last decade has been used not to help society as a whole but to extract wealth into narrower and narrower hands. From ridesharing apps to social media, the pattern has been clear: &quot;disruption&quot; is, in actuality, the practice of using regulatory arbitrage (i.e., taking advantage of gaps in government policy) to increase profitability.</p>
Tags: AI Threat