DevOps in K8s — Pod Scheduling

<blockquote> <p><em><strong>Note: the full &ldquo;DevOps in K8s&rdquo; mind map is available at &ldquo;</strong></em><a href="https://github.com/metaleapca/metaleap-devops-in-k8s/blob/main/metaleap-devops-in-k8s.pdf" rel="noopener ugc nofollow" target="_blank"><strong><em>DevOps in K8s Mind Map</em></strong></a><em><strong>&rdquo;</strong></em></p> </blockquote> <p>Typically, when we deploy Pods, the cluster&rsquo;s default scheduling policy decides which node each Pod lands on. The default scheduler aims to spread workloads evenly across nodes that have ample resources, but there are cases where we need more precise control.</p> <p>For example, we might want machine learning applications to run only on GPU-equipped nodes. Or, if two services communicate heavily, it&rsquo;s ideal for their Pods to reside on the same node. To achieve this kind of controlled scheduling, we leverage two key concepts:&nbsp;<strong>affinity</strong>&nbsp;and&nbsp;<strong>anti-affinity</strong>. Affinity itself comes in two forms: node affinity (nodeAffinity) and pod affinity (podAffinity).</p> <p><a href="https://tonylixu.medium.com/devops-in-k8s-pod-scheduling-fbb5dd53350d"><strong>Learn More</strong></a></p>
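As a minimal sketch of the two examples above: the first manifest pins a Pod to GPU-equipped nodes via nodeAffinity, and the second co-locates a Pod with another service via podAffinity. The node label <code>gpu: "true"</code>, the app label <code>app: backend</code>, and the image names are hypothetical placeholders, not values from the article.

```yaml
# Sketch 1: schedule only on nodes carrying a (hypothetical) "gpu: true" label.
apiVersion: v1
kind: Pod
metadata:
  name: ml-worker
spec:
  affinity:
    nodeAffinity:
      # "required...": a hard rule — the Pod stays Pending if no node matches.
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: gpu
            operator: In
            values: ["true"]
  containers:
  - name: trainer
    image: ml-trainer:latest   # hypothetical image
---
# Sketch 2: co-locate with Pods labeled "app: backend" on the same node,
# for communication-intensive services.
apiVersion: v1
kind: Pod
metadata:
  name: api-frontend
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: backend
        # topologyKey defines "same place": here, the same node (hostname).
        topologyKey: kubernetes.io/hostname
  containers:
  - name: frontend
    image: api-frontend:latest   # hypothetical image
```

Using <code>preferredDuringSchedulingIgnoredDuringExecution</code> instead of <code>required...</code> turns either rule into a soft preference the scheduler tries to honor but can ignore when no matching node is available.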
Tags: Pod scheduling