Scaling and Replication

Overview

  • Kubernetes was designed to orchestrate multiple containers and replicate them
  • The need for multiple containers, or replication, helps us with the following

Reliability

Reliability in Kubernetes ensures that a cluster consistently delivers services. It uses techniques like high availability, fault tolerance, data persistence, monitoring, and disaster recovery to achieve this.

Load Balancing

Having multiple instances of a container enables you to easily send traffic to different instances, preventing any single instance or node from being overloaded.
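
As a sketch, a Service can spread traffic across all pods that match its selector; the Service name and ports below are hypothetical, while the label matches the RC example later on this page:

```yaml
# Hypothetical Service that load-balances across all pods
# labelled myname=xander (the label used in the RC example below).
apiVersion: v1
kind: Service
metadata:
  name: example-7-svc    # hypothetical name
spec:
  selector:
    myname: xander       # must match the pod labels
  ports:
    - port: 80           # port exposed by the Service
      targetPort: 80     # port the container listens on
```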

Scaling

When the load becomes too much for the existing instances, Kubernetes enables you to easily scale up your application by adding additional pods or nodes.

Rolling updates

Rolling Updates in Kubernetes gradually replace old pods with new ones (one by one), ensuring minimal downtime during updates.
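
Rolling updates are typically driven by a Deployment rather than a Replication Controller. As a minimal sketch (the names and image are illustrative), the rollout behaviour is configured under `spec.strategy`:

```yaml
# Hypothetical Deployment showing rolling-update settings.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deploy    # hypothetical name
spec:
  replicas: 5
  selector:
    matchLabels:
      app: example
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most 1 pod down during the update
      maxSurge: 1         # at most 1 extra pod above the desired count
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: c00
          image: ubuntu:22.04
```

With these settings, Kubernetes replaces pods one at a time, never dropping below 4 running pods during the update.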

Replication Controller

  • A Replication Controller (RC) is an object that enables you to easily create multiple pods and then makes sure that this number of pods always exists, matching the desired state
  • If a pod created by a Replication Controller crashes, fails, or is terminated, the RC automatically replaces it
  • An RC is recommended even if you just want to make sure one pod is always running, including after system restarts
  • You can run the RC with one replica, and the RC will make sure that the pod is always running

Example of Replication Controller

k8s_example_7.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: example-7
spec:
  replicas: 5
  selector:
    myname: xander
  template:
    metadata:
      name: rc-pod
      labels:
        myname: xander
    spec:
      containers:
        - name: c00
          image: ubuntu
          command: ["/bin/bash", "-c", "while true; do echo Hello-world; sleep 5 ; done"]
  • kind: ReplicationController: defines the type of object to create, here a Replication Controller
  • replicas: 5: defines the desired number of pods
  • template: defines the pod template used to launch new pods
  • selector: tells the controller which pods to watch/belong to the RC; the selector key-values should match the labels inside the template

Create the Replication Controller

kubectl apply -f k8s_example_7.yml
Output
controlplane $ kubectl apply -f k8s_example_7.yml 
replicationcontroller/example-7 created

Verify the rc and pods

kubectl get rc
kubectl get pods
Output
controlplane $ kubectl get rc
NAME        DESIRED   CURRENT   READY   AGE
example-7   5         5         5       72s

For more details

kubectl describe rc example-7
Output
controlplane $ kubectl describe rc example-7
.
.
.
Replicas:     5 current / 5 desired
Pods Status:  5 Running / 0 Waiting / 0 Succeeded / 0 Failed
.
.
.
Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  88s   replication-controller  Created pod: example-7-slp2s
  Normal  SuccessfulCreate  88s   replication-controller  Created pod: example-7-srcz4
  Normal  SuccessfulCreate  88s   replication-controller  Created pod: example-7-mjqq8
  Normal  SuccessfulCreate  88s   replication-controller  Created pod: example-7-s8w42
  Normal  SuccessfulCreate  88s   replication-controller  Created pod: example-7-mfmcp

Verify the pods

kubectl get pods -o wide
Output
controlplane $ kubectl get pods -o wide
NAME              READY   STATUS    RESTARTS   AGE    IP            NODE           NOMINATED NODE   READINESS GATES
example-7-mfmcp   1/1     Running   0          106s   192.168.1.8   node01         <none>           <none>
example-7-mjqq8   1/1     Running   0          106s   192.168.0.5   controlplane   <none>           <none>
example-7-s8w42   1/1     Running   0          106s   192.168.0.6   controlplane   <none>           <none>
example-7-slp2s   1/1     Running   0          106s   192.168.1.6   node01         <none>           <none>
example-7-srcz4   1/1     Running   0          106s   192.168.1.7   node01         <none>           <none>

Note the labels

kubectl get pods --show-labels
Output
controlplane $ kubectl get pods --show-labels
NAME              READY   STATUS    RESTARTS   AGE     LABELS
example-7-mfmcp   1/1     Running   0          3m16s   myname=xander
example-7-mjqq8   1/1     Running   0          3m16s   myname=xander
example-7-s8w42   1/1     Running   0          3m16s   myname=xander
example-7-slp2s   1/1     Running   0          3m16s   myname=xander
example-7-srcz4   1/1     Running   0          3m16s   myname=xander

Tweak the desired number of replicas

kubectl scale rc --replicas=2 -l myname=xander
Output
controlplane $ kubectl scale rc --replicas=2 -l myname=xander
replicationcontroller/example-7 scaled
controlplane $ kubectl get pods -o wide
NAME              READY   STATUS        RESTARTS   AGE    IP            NODE           NOMINATED NODE   READINESS GATES
example-7-mfmcp   1/1     Terminating   0          4m7s   192.168.1.8   node01         <none>           <none>
example-7-mjqq8   1/1     Running       0          4m7s   192.168.0.5   controlplane   <none>           <none>
example-7-s8w42   1/1     Running       0          4m7s   192.168.0.6   controlplane   <none>           <none>
example-7-slp2s   1/1     Terminating   0          4m7s   192.168.1.6   node01         <none>           <none>
example-7-srcz4   1/1     Terminating   0          4m7s   192.168.1.7   node01         <none>           <none>

Notice the change in the pods: as soon as the desired replica count is decreased from 5 to 2, the extra pods start terminating.

Verify the rc

kubectl get rc
Output
controlplane $ kubectl get rc
NAME        DESIRED   CURRENT   READY   AGE
example-7   2         2         2       12m