Architecture of AI-Driven Security Operations with a Low False Positive Rate

<p>Even today, in a world where LLMs <a href="https://www.nytimes.com/2023/01/12/technology/chatgpt-schools-teachers.html" rel="noopener ugc nofollow" target="_blank">compromise the integrity</a> of the educational system we have relied on for decades, and where <a href="https://futureoflife.org/open-letter/pause-giant-ai-experiments/" rel="noopener ugc nofollow" target="_blank">we have (finally) begun to fear</a> the <a href="https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities" rel="noopener ugc nofollow" target="_blank">existential risk</a> posed by AGI, applying artificial intelligence (AI) to non-conventional data science domains remains far from achieving futuristic milestones and requires a distinct approach.</p> <p>In this article, we discuss, conceptually, the applicability of AI to <em>cyber-security</em>: <strong>why</strong> most applications fail, and <strong>what</strong> methodology actually <strong>works</strong>. The approach and conclusions presented here are plausibly transferable to other application domains with low false-positive requirements, especially those that rely on inference from system logs.</p> <p>We will <strong>not</strong> cover <strong>how</strong> to implement machine learning (ML) logic on data relevant to information security. I have already provided functional implementations with code samples in the following articles:</p> <p><a href="https://towardsdatascience.com/architecture-of-ai-driven-security-operations-with-a-low-false-positive-rate-a33dbbad55b4"><strong>Website</strong></a></p>