Replication Controller and Replica Set in K8S
Kubernetes was designed to orchestrate multiple containers and replication. Running multiple containers and replicas has several advantages, some of which are discussed below.
Reliability:
Running multiple instances of an application helps prevent an outage when one or more pods fail.
With 'replicas: 2' we get two pods with the same configuration but different names and UIDs. If one of the pods fails, another is created in its place. Replicas are needed for high availability: recreating a failed pod takes time, but the service is not interrupted because the surviving replica keeps serving.
Load Balancing:
Incoming traffic can easily be directed across the different instances to prevent overloading a single instance or node. This is only possible when multiple replicas are running.
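Kubernetes typically does this through a Service, which spreads traffic across every pod carrying a matching label. The following is a minimal sketch, not part of the original example: the name 'my-app-service' and port 80 are assumptions, and it would balance across pods labeled 'app: my-app' like the ones created later in this section.
apiVersion: v1
kind: Service
metadata:
  name: my-app-service   # hypothetical name, for illustration only
spec:
  selector:
    app: my-app          # traffic is spread across all pods with this label
  ports:
  - port: 80             # port the Service listens on
    targetPort: 80       # assumed container port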
Scaling:
When a container reaches its memory or CPU limit, Kubernetes lets us quickly scale up by adding instances as needed. Similarly, when the load is minimal, Kubernetes can scale the containers back down.
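As a sketch of how this looks in practice: assuming the replication controller named 'my-replica' defined later in this section, it can be scaled imperatively with kubectl (the target counts here are arbitrary):
kubectl scale rc my-replica --replicas=5   # scale up to 5 pods
kubectl scale rc my-replica --replicas=2   # scale back down to 2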
Replication Controller
The Replication Controller is the Kubernetes object through which we can maintain multiple pods, ensuring that a given number of pods always exists. If a pod crashes, fails, or terminates for any reason, the Replication Controller creates a new pod with the same configuration. When we need to ensure that at least one pod is always running, using a Replication Controller is recommended.
To create a Replication Controller, we specify 'kind: ReplicationController' in the manifest file.
For example
vi replication-controller.yml
kind: ReplicationController
apiVersion: v1
metadata:
  name: my-replica
spec:
  replicas: 2
  selector:
    app: my-app
  template:
    metadata:
      name: my-replica-pod
      labels:
        app: my-app
    spec:
      containers:
      - name: replica-container
        image: ubuntu
        command: ["/bin/bash", "-c", "while true; do echo this-is-replication-controller; sleep 5; done"]
The most important point not to miss is that the selector must specify the same labels as those set inside the template: the Replication Controller selects the pods specified by 'template.metadata.labels' to create replicas. This configuration ensures that 2 pods are always running.
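Before applying, the manifest can be validated without creating anything; on reasonably recent kubectl versions a client-side dry run does this:
kubectl apply -f replication-controller.yml --dry-run=client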
To apply the above configuration
kubectl apply -f replication-controller.yml
Get the replication controller
kubectl get rc
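The output should look roughly like this (age, and readiness while the pods are still starting, will differ):
NAME         DESIRED   CURRENT   READY   AGE
my-replica   2         2         2       30s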
Now try to delete one pod
Before deleting the pod, make sure to note the name of the pod being deleted
kubectl get pods
Delete the pod
kubectl delete pod <podname>
kubectl get pods
We can still see 2 pods running.
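Note that the replacement pod gets a new auto-generated name but the same labels, so we can list exactly the pods the controller manages by filtering on the selector label; and when we are finished, deleting the controller also deletes the pods it manages:
kubectl get pods -l app=my-app   # list only the pods matching the selector
kubectl delete rc my-replica     # removes the controller and its pods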
Replica Set
A Replica Set works just like a Replication Controller. The main difference is that it offers more expressive labels and selectors to identify the pods it should manage; in particular, it supports set-based selectors in addition to equality-based ones. In practice, Replica Sets are usually managed by Deployments, which add rolling updates and rollbacks on top of them so we can update or revert the application version without downtime.
The Replica Set is not available in apiVersion 'v1', so apiVersion 'apps/v1' should be used instead.
For example
vi rs.yml
kind: ReplicaSet
apiVersion: apps/v1
metadata:
  name: my-rs
spec:
  replicas: 3
  selector:
    matchExpressions:
    - { key: myname, operator: In, values: [sandeep, sandep, sandip] }
    - { key: env, operator: NotIn, values: [production] }
  template:
    metadata:
      labels:
        myname: sandeep
    spec:
      containers:
      - name: replica-set-container
        image: ubuntu
        command: ["/bin/bash", "-c", "while true; do echo this-is-replica-set-example; sleep 5; done"]
Similar to the Replication Controller, the Replica Set also selects the pods specified by 'template.metadata.labels' to create replicas, so we must specify the selector with either matchExpressions or matchLabels. Here matchExpressions works as a set-based selector and matchLabels works as an equality-based selector.
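For comparison, when plain equality matching is enough, the same spec could use matchLabels instead; a minimal sketch of just the selector block (not in the original example):
selector:
  matchLabels:
    myname: sandeep   # equality-based: pods must carry exactly this label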
kubectl apply -f rs.yml
kubectl get rs
kubectl get pods
kubectl delete pod <podname>
kubectl get pods
Just like with the Replication Controller, the desired number of pods keeps running even if a pod gets deleted, crashes, or terminates on its own.
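The Replica Set can likewise be scaled on the fly and cleaned up afterwards; the replica count here is arbitrary:
kubectl scale rs my-rs --replicas=5   # scale up to 5 pods
kubectl delete rs my-rs               # delete the Replica Set and its pods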