Labels and Node Selector in Kubernetes
Labels in Kubernetes are key-value pairs that have no predefined meaning and can be attached to any object or resource, such as a Pod, Deployment, or Node. They are mainly used to identify and organize Kubernetes resources. For example, we could add an 'environment' label with the value 'production' to one set of pods and 'development' to another set, and then use a label selector to select only the pods in the 'production' environment.
Labels are defined under the metadata field, in a field named 'labels'. Multiple labels can be added to a single object.
Let us take an example of creating a label.
vi label-pod.yml
kind: Pod
apiVersion: v1
metadata:
  name: label-first-pod
  labels:
    env: development
    class: pods
spec:
  containers:
    - name: label-container
      image: ubuntu
      command: ["/bin/bash", "-c", "while true; do echo this-is-label-pod; sleep 5; done"]
Now apply the above configuration
kubectl apply -f label-pod.yml
Get labels on all the pods
kubectl get pods --show-labels
If we want to add a label to an existing pod
kubectl label pods <pod-name> <key=value>
In our case, it would be something like
kubectl label pods label-first-pod myname=sandeep
Here 'myname' will be added as a label with the value 'sandeep' to the pod label-first-pod.
To see the labels of all the pods
kubectl get pods --show-labels
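With the labels applied above, the output would look roughly like the following (the status, age, and label order are only an illustration and will differ in your cluster):
NAME              READY   STATUS    RESTARTS   AGE   LABELS
label-first-pod   1/1     Running   0          2m    class=pods,env=development,myname=sandeep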
Now list pods matching a label
kubectl get pods -l env=development
List pods where the 'env' label does not have the value 'development'
kubectl get pods -l env!=development
Now delete the pod
kubectl delete -f label-pod.yml
Label Selectors
Label selectors are filters used to narrow down resources based on their labels. As the name suggests, a selector is used to pick out specific pods on which to perform an action: for example, to specify the pods a Service routes traffic to, the pods managed by a Deployment or rollout, or the pods scaled by a HorizontalPodAutoscaler. The API currently supports two types of selectors:
Equality Based: These selectors match resources that have a label with a specific key and value. For example, the selector 'env=development' matches any resource carrying the label env=development. We can also combine multiple requirements by separating them with a comma, for example 'env=development,myname=sandeep'.
Set Based: Set-based selectors match resources based on a set of values for a key. There are three kinds of set-based requirements:
{key} in {values}: for example, 'environment in (production, development)' matches resources whose 'environment' label has the value production or development.
{key} notin {values}: for example, 'environment notin (production, development)' matches resources whose 'environment' label is neither production nor development.
{key}: for example, 'environment' matches any resource that has an 'environment' label, regardless of its value.
Example of an Equality Based selector
kubectl get pods -l class=pods,myname=sandeep
Example of a Set Based selector
kubectl get pods -l 'env in (development, testing)'
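Selectors are not only used with kubectl; controllers embed them in their specs. The sketch below is an illustrative Deployment (the name and image are placeholders, not part of the files created earlier in this tutorial) that combines an equality-based matchLabels requirement with a set-based matchExpressions requirement to decide which pods it manages:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: selector-demo              # illustrative name
spec:
  replicas: 2
  selector:
    # equality-based requirement
    matchLabels:
      class: pods
    # set-based requirement
    matchExpressions:
      - key: env
        operator: In
        values: ["development", "testing"]
  template:
    metadata:
      labels:
        class: pods
        env: development
    spec:
      containers:
        - name: selector-demo-container
          image: ubuntu
          command: ["/bin/bash", "-c", "while true; do echo selector-demo; sleep 5; done"]
Note that the pod template's labels must satisfy every requirement in the selector, otherwise the API server rejects the Deployment.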
Node Selector
When we create a pod, the scheduler picks a node for it. If we want to run the pod on a specific node, we use 'nodeSelector'. One common use case for node labels is to constrain the set of nodes onto which a pod can be scheduled, i.e. you can tell a pod that it may only run on a particular group of nodes.
Generally, such constraints are unnecessary as the scheduler will automatically make a reasonable placement, but in certain circumstances we might need them.
We can use labels to tag nodes and use label selectors to specify the pods to run on specific nodes.
For this, we first add a label to the node and then use a nodeSelector in the pod configuration.
For labeling the nodes
kubectl get nodes
kubectl label nodes ip-xxx-xxx-xxx-xxx <key>=<value>
For example, we can run
kubectl label nodes ip-172-31-34-55 hardware=t2-medium
Now we have labeled the node with 'hardware=t2-medium'.
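As a quick check, we can list the node labels; the label added above should appear in the LABELS column
kubectl get nodes --show-labels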
To create the pod on the labeled node
vi node-pod.yml
kind: Pod
apiVersion: v1
metadata:
  name: node-pod
  labels:
    env: development
    class: pods
spec:
  containers:
    - name: container-node
      image: ubuntu
      command: ["/bin/bash", "-c", "while true; do echo this-is-label-pod; sleep 5; done"]
  nodeSelector:
    hardware: t2-medium
kubectl apply -f node-pod.yml
To see where exactly the pod is running
kubectl get pods -o wide
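The NODE column should show the node we labeled earlier; the output below is only an illustration and the exact values will differ in your cluster:
NAME       READY   STATUS    RESTARTS   AGE   IP           NODE              NOMINATED NODE   READINESS GATES
node-pod   1/1     Running   0          1m    172.31.x.x   ip-172-31-34-55   <none>           <none>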
We can also use multiple node selector criteria. For example
Label the node first
kubectl label nodes ip-xxx-xxx-xxx-xxx size=small region=us-west
vi node-pod-second.yml
kind: Pod
apiVersion: v1
metadata:
  name: node-pod-second
  labels:
    env: development
    class: pods
spec:
  containers:
    - name: container-node
      image: ubuntu
      command: ["/bin/bash", "-c", "while true; do echo this-is-label-pod; sleep 5; done"]
  nodeSelector:
    hardware: t2-medium
    size: small
    region: us-west
Here we have applied multiple node selector criteria (hardware, size, and region); the pod will only be scheduled on a node that carries all of these labels.
kubectl apply -f node-pod-second.yml
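As before, we can confirm the placement by checking which node the pod landed on
kubectl get pods -o wide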