DevOps in K8s: Pod Scheduling

Note: the full “DevOps in K8s” mind map is available at “DevOps in K8s Mind Map”.

Typically, when we deploy Pods, the cluster’s default scheduler decides which node each Pod lands on. This scheduler aims to spread workloads evenly across nodes that have ample resources, but there are cases where we need more precise control over placement.

For example, we might want machine learning applications to run only on GPU-equipped nodes. Or, if two services communicate heavily, it’s ideal for their Pods to reside on the same node to cut network latency. To achieve this kind of controlled scheduling, we leverage two key concepts: affinity and anti-affinity. Affinity itself is divided into node affinity (nodeAffinity) and pod affinity (podAffinity).
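As an illustration, the two scenarios above can be expressed in a single Pod spec: nodeAffinity requires a GPU node, while podAffinity prefers co-location with a communication-heavy peer. This is a minimal sketch; the label keys (`gpu`, `app: peer-service`), Pod name, and image are assumptions, not values from the original text:

```yaml
# Sketch: a Pod that must land on a GPU-labeled node and prefers
# to share a node with Pods of a chatty peer service.
apiVersion: v1
kind: Pod
metadata:
  name: ml-inference                 # hypothetical name
spec:
  affinity:
    nodeAffinity:
      # Hard requirement: only schedule onto nodes labeled gpu=true.
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: gpu             # assumed node label
                operator: In
                values: ["true"]
    podAffinity:
      # Soft preference: co-locate with the peer service's Pods.
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchLabels:
                app: peer-service    # assumed label on the peer's Pods
            topologyKey: kubernetes.io/hostname   # "same node"
  containers:
    - name: app
      image: ml-inference:latest     # hypothetical image
```

Note the `required…` vs. `preferred…` distinction: a required rule blocks scheduling if no node satisfies it, while a preferred rule only biases the scheduler’s scoring.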


Tags: Pod scheduling