DevOps in K8s — Pod Scheduling
<blockquote>
<p><strong><em>Note: the full “DevOps in K8s” mind map is available at “</em></strong><a href="https://github.com/metaleapca/metaleap-devops-in-k8s/blob/main/metaleap-devops-in-k8s.pdf" rel="noopener ugc nofollow" target="_blank"><strong><em>DevOps in K8s Mind Map</em></strong></a><strong><em>”</em></strong></p>
</blockquote>
<p>Typically, when we deploy Pods, the cluster’s default scheduler decides which nodes they run on. The scheduler aims to spread workloads evenly across nodes with sufficient free resources, but there are cases where we need more precise control over placement.</p>
<p>For example, we might want machine learning applications to run only on GPU-equipped nodes, or, if our services are communication-intensive, to have their Pods co-located on the same node. To achieve this controlled scheduling, Kubernetes provides two key concepts: <strong>affinity</strong> and <strong>anti-affinity</strong>. Affinity is further divided into node affinity (nodeAffinity) and pod affinity (podAffinity).</p>
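<p>As a minimal sketch of both ideas, a Deployment spec can require GPU nodes via nodeAffinity and co-location with another service via podAffinity. The label keys (<code>accelerator</code>, <code>app: feature-cache</code>) and names below are illustrative assumptions, not values from the article:</p>
<pre><code># Hypothetical Deployment: label keys and names are illustrative only
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ml-trainer
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ml-trainer
  template:
    metadata:
      labels:
        app: ml-trainer
    spec:
      affinity:
        # nodeAffinity: schedule only onto nodes labeled as GPU-equipped
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: accelerator
                operator: In
                values: ["nvidia-gpu"]
        # podAffinity: co-locate with the cache Pods this service talks to
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: feature-cache
            topologyKey: kubernetes.io/hostname
      containers:
      - name: trainer
        image: ml-trainer:latest
</code></pre>
<p>The <code>topologyKey</code> of <code>kubernetes.io/hostname</code> is what makes podAffinity mean “same node”; a broader key such as a zone label would instead mean “same zone”. Anti-affinity uses the same structure under <code>podAntiAffinity</code> to spread Pods apart.</p>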
<p><a href="https://tonylixu.medium.com/devops-in-k8s-pod-scheduling-fbb5dd53350d"><strong>Learn More</strong></a></p>