What are Taints

Kubernetes taints are a mechanism for nodes to reject workloads. For example, suppose you have a node, nodeA, that is running an important workload (podA) and you don't want other pods to be deployed on this node. To achieve this, we taint the node. The basic syntax for tainting a node with kubectl is as follows:

kubectl taint nodes node-name key=value:effect

If we want to taint nodeA so that only a pod with a matching toleration is allowed to be scheduled on it, we can run the following:

kubectl taint nodes nodeA app=resourceIntensive:NoSchedule

The above command tainted the node with the key app and the value resourceIntensive. The effect applied to the node is NoSchedule, which means no new pods can be scheduled on this node unless they tolerate the taint; pods already running on the node, however, continue to operate.
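
To confirm the taint landed (or to undo it later), something like the following should work; the node name nodeA is carried over from the example above:

# inspect the taints currently set on the node
kubectl describe node nodeA | grep -i taints

# remove the taint again (note the trailing "-")
kubectl taint nodes nodeA app=resourceIntensive:NoSchedule-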

There are other effects available as well, such as PreferNoSchedule and NoExecute, but we will not go into the details of those in this article.


What are Tolerations

In the previous section, while talking about taints, we used the word "toleration" in the context of a pod. A toleration is a setting on a pod that allows it to be scheduled on nodes carrying a matching taint (it permits scheduling there; it does not force it). In our previous example, podA is the application that we want to allow on nodeA, and the way we do that is to add a toleration to its spec, as shown below:

apiVersion: v1
kind: Pod
metadata:
  name: heavy-workload
spec:
  tolerations:
    # tolerate the app=resourceIntensive:NoSchedule taint we added to nodeA
    - key: "app"
      operator: "Equal"
      value: "resourceIntensive"
      effect: "NoSchedule"
  containers:
    - name: sample-container
      image: sample-image
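
As a side note, if the pod should tolerate any value of the app key rather than one specific value, the toleration can use the Exists operator instead of Equal; a minimal sketch of just the tolerations block:

tolerations:
  # matches every taint with key "app" and effect NoSchedule, whatever its value
  - key: "app"
    operator: "Exists"
    effect: "NoSchedule"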

What is NodeAffinity

NodeAffinity is a more expressive version of nodeSelector: a mechanism that allows you to control which nodes a pod can be scheduled on, based on labels applied to the nodes.
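
For comparison, the plain nodeSelector form of such a constraint is just a map of labels in the pod spec; a minimal sketch, assuming the same app=resourceIntensive label used below:

spec:
  # only schedule on nodes that carry this exact label
  nodeSelector:
    app: resourceIntensive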

Assume we have two nodes, nodeA and nodeB, and a pod, podA, which we need to schedule on nodeA. The way we would do that is as follows:

  • Add a label to the node
    kubectl label nodes nodeA app=resourceIntensive

  • Add a nodeAffinity rule to the pod spec

apiVersion: v1
kind: Pod
metadata:
  name: pod-a
spec:
  affinity:
    nodeAffinity:
      # hard requirement: only schedule on nodes labelled app=resourceIntensive
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: "app"
                operator: "In"
                values: ["resourceIntensive"]
  containers:
    - name: sample-container
      image: sample-image

Once the above is applied, podA will run on nodeA. Make sure there is a node carrying the label that the nodeAffinity rule asks for; because we used requiredDuringSchedulingIgnoredDuringExecution, a pod that finds no matching node will not be scheduled and will stay in the Pending state.
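
To try it out, something along these lines should work (the manifest file name is just an assumption):

# create the pod from the manifest above
kubectl apply -f pod-a.yaml

# the NODE column shows where the pod was scheduled
kubectl get pod pod-a -o wide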


How do they work together

Taints, tolerations and nodeAffinity, when used together, offer more fine-grained control over the placement of pods.
Assume a scenario where you have two pods, podA and podB, in a cluster with two nodes, nodeA and nodeB. PodA is resource intensive and must be placed on nodeA only, while any other pod needs to be placed on the other available node.

To keep other pods off nodeA, we taint nodeA with a specific key=value:effect and add a matching toleration to podA. With this in place, only podA is allowed to be scheduled on nodeA, and all other pods can only go to nodeB.
However, this alone does not guarantee that podA will not be scheduled on nodeB, since a toleration only permits scheduling on the tainted node. This is where nodeAffinity comes into play: we define a nodeAffinity rule on podA so that it is scheduled only on nodeA.
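
Putting the two pieces together, podA's spec would carry both the toleration and the nodeAffinity rule; a sketch, reusing the taint and label from the earlier examples:

apiVersion: v1
kind: Pod
metadata:
  name: pod-a
spec:
  # tolerate the taint that keeps other pods off nodeA
  tolerations:
    - key: "app"
      operator: "Equal"
      value: "resourceIntensive"
      effect: "NoSchedule"
  # and require a node labelled app=resourceIntensive, i.e. nodeA
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: "app"
                operator: "In"
                values: ["resourceIntensive"]
  containers:
    - name: sample-container
      image: sample-image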

Thus, by using taints and tolerations along with nodeAffinity, we can control the placement of pods in a Kubernetes cluster.