Verifying the Origin of Media in an Algorithmic World
<p>Distinguishing authentic human-produced media from <a href="https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them" rel="noopener ugc nofollow" target="_blank">deepfakes</a> or other algorithmically generated media is <a href="https://www.nytimes.com/interactive/2023/06/28/technology/ai-detection-midjourney-stable-diffusion-dalle.html" rel="noopener ugc nofollow" target="_blank">notoriously difficult</a>. Existing detection tools produce a probability that a given piece of media is generated, but certainty remains elusive. As algorithmic media and deepfakes flood online spaces, verifying the authenticity of political and election-related media will only become more critical in the coming years. Beyond sifting through online mis- and disinformation, establishing authenticity also matters to artists who want to assert claims of originality over their digital works.</p>
<p>To determine whether a digital image or video is authentic, and not artificially generated or copied, one would need things like:</p>
<ul>
<li>A cryptographically secure signature verifying the integrity of media metadata, such as camera information, location coordinates, and similar capture details (a simplified sketch of this idea follows the list)</li>
<li>Some way of knowing whether the media was substantially digitally altered from its original form and, if it was, what those alterations were</li>
</ul>
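<p>To make the first point concrete, here is a minimal sketch of the underlying idea, not any particular standard or product: hash the media bytes together with their metadata and sign the result, so that anyone holding the matching public key can detect a later change to either. The function names (<code>sign_media</code>, <code>verify_media</code>), the metadata fields, and the choice of Ed25519 keys via Python&#8217;s <code>cryptography</code> package are illustrative assumptions on my part.</p>
<pre><code>
# Minimal sketch: sign a hash of the media bytes plus selected metadata,
# so later consumers can verify that neither pixels nor metadata changed.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def _payload(media_bytes: bytes, metadata: dict) -> bytes:
    # Canonical payload: content hash + metadata, serialized deterministically.
    return json.dumps(
        {"sha256": hashlib.sha256(media_bytes).hexdigest(), "metadata": metadata},
        sort_keys=True,
    ).encode()


def sign_media(media_bytes: bytes, metadata: dict, private_key: Ed25519PrivateKey) -> bytes:
    """Return a signature over the media content hash and its metadata."""
    return private_key.sign(_payload(media_bytes, metadata))


def verify_media(media_bytes: bytes, metadata: dict, signature: bytes, public_key) -> bool:
    """Check that the media and metadata still match what was originally signed."""
    try:
        public_key.verify(signature, _payload(media_bytes, metadata))
        return True
    except InvalidSignature:
        return False


# Example: a capture device signs at capture time; a viewer verifies later.
key = Ed25519PrivateKey.generate()
image = b"...raw image bytes..."
meta = {"camera": "ExampleCam X1", "gps": [40.7128, -74.0060], "timestamp": "2023-09-01T12:00:00Z"}
sig = sign_media(image, meta, key)
print(verify_media(image, meta, sig, key.public_key()))            # True
print(verify_media(image + b"edit", meta, sig, key.public_key()))  # False: pixels changed
</code></pre>
<p>In practice the second point, recording <em>what</em> was altered, calls for signed edit manifests chained to the original signature rather than a single hash, but the verification principle is the same.</p>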
<p>There <em>is</em> a solution to this, which the full article below covers:</p>
<p><a href="https://betterprogramming.pub/verifying-the-origin-of-media-in-an-algorithmic-world-25bff92ab572"><strong>Read More</strong></a></p>