Architecture of AI-Driven Security Operations with a Low False Positive Rate
<p>Even today, in a world where LLMs <a href="https://www.nytimes.com/2023/01/12/technology/chatgpt-schools-teachers.html" rel="noopener ugc nofollow" target="_blank">compromise the integrity</a> of the educational system we have relied on for decades, and <a href="https://futureoflife.org/open-letter/pause-giant-ai-experiments/" rel="noopener ugc nofollow" target="_blank">we (finally) started to fear</a> an <a href="https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities" rel="noopener ugc nofollow" target="_blank">existential threat</a> from AGI, the applicability of artificial intelligence (AI) systems to non-conventional data science domains remains far from such futuristic milestones and requires a distinct approach.</p>
<p>In this article, we take a conceptual look at AI applicability to <em>cyber-security</em>: <strong>why</strong> most applications fail and <strong>what</strong> methodology actually <strong>works</strong>. Speculatively, the approach and conclusions presented here transfer to other application domains with low false-positive requirements, especially those that rely on inference from system logs.</p>
<p>We will <strong>not</strong> cover <strong>how</strong> to implement machine learning (ML) logic on data relevant to information security. I have already provided functional implementations with code samples in the following articles:</p>
<p><a href="https://towardsdatascience.com/architecture-of-ai-driven-security-operations-with-a-low-false-positive-rate-a33dbbad55b4"><strong>Architecture of AI-Driven Security Operations with a Low False Positive Rate</strong></a></p>