
LivenessProbe

  • A pod is considered ready when all of its containers are ready.
  • To verify that a container is healthy and ready to serve traffic, Kubernetes provides a range of health-checking mechanisms.
  • Health checks, or probes, are carried out by the kubelet to determine when to restart a container (liveness probes), and are used by services and deployments to determine whether a pod should receive traffic (readiness probes).

For example, a liveness probe can catch a deadlock, where an application is running but unable to make progress. Restarting a container in such a state can help make the application more available despite bugs.
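As a sketch of that idea, a web application could expose a health endpoint and let the kubelet probe it over HTTP instead of running a command. The pod name, image, path, and port below are illustrative, not part of the example used later in this page:

```yaml
# Hypothetical pod spec: restart the container when GET /healthz stops answering.
apiVersion: v1
kind: Pod
metadata:
  name: web-liveness           # illustrative name
spec:
  containers:
  - name: web
    image: my-web-app:latest   # illustrative image
    livenessProbe:
      httpGet:                 # kubelet sends an HTTP GET instead of running a command
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5
```

With `httpGet`, any response code from 200 to 399 counts as success; a deadlocked server that stops answering fails the probe and gets restarted.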

Points to Remember

  • One use of readiness probes is to control which pods are used as backends for services. When a pod is not ready, it is removed from the service's load balancer.
  • For running health checks, we use commands specific to the application.
  • If the command succeeds, it returns zero, and the kubelet considers the container to be alive and healthy. If the command returns a non-zero value, the kubelet kills the container and restarts it.
  • In the liveness probe we define these parameters -
| Parameter | Description |
| --- | --- |
| command | The command which will be used for the health check |
| initialDelaySeconds | Number of seconds after the container starts before the liveness probe begins health checks |
| periodSeconds | How often (in seconds) the probe keeps on checking periodically |
| timeoutSeconds | Number of seconds after which a single probe attempt times out and is counted as a failure |
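The zero/non-zero exit-code convention described above can be verified directly in any shell. A small sketch, where a temporary file stands in for /tmp/healthy:

```shell
#!/bin/sh
# Mimic the probe command: exit status 0 means healthy, non-zero means failed.
f=$(mktemp)                  # create a file that exists, like /tmp/healthy
cat "$f" > /dev/null
echo "exists: $?"            # prints: exists: 0  - the kubelet would consider this healthy
rm -f "$f"
cat "$f" > /dev/null 2>&1
echo "missing: $?"           # prints: missing: 1 - repeated failures trigger a restart
```

This is exactly the signal the kubelet reads: it does not parse the command's output, only its exit status.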

Implementation

k8s_example_15.yml
apiVersion: v1
kind: Pod
metadata:
  name: example-15
  labels:
    test: liveness
spec:
  containers:
  - name: container0
    image: ubuntu
    args:
    - "/bin/sh"
    - "-c"
    - "touch /tmp/healthy; sleep 1000"
    livenessProbe:
      exec:
        command:
        - "cat"
        - "/tmp/healthy"
      initialDelaySeconds: 5
      periodSeconds: 5
      timeoutSeconds: 30

Create pod and get the details

kubectl apply -f k8s_example_15.yml
kubectl describe pods
Output
controlplane $ kubectl apply -f file.yml
pod/example-15 created
controlplane $ kubectl describe pods
Name:         example-15
Namespace:    default
Priority:     0
. . . .
    Liveness:     exec [cat /tmp/healthy] delay=5s timeout=30s period=5s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ntqtc (ro)
Conditions:
. . .
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  23s   default-scheduler  Successfully assigned default/example-15 to node01
  Normal  Pulling    22s   kubelet            Pulling image "ubuntu"
  Normal  Pulled     18s   kubelet            Successfully pulled image "ubuntu" in 3.758s (3.758s including waiting). Image size: 29754422 bytes.
  Normal  Created    18s   kubelet            Created container container0
  Normal  Started    18s   kubelet            Started container container0

Execute the health check command and check its exit code; also execute a few commands that fail, to generate non-zero exit codes

kubectl exec -it example-15 -- /bin/bash
Output
controlplane $ kubectl exec -it example-15 -- /bin/bash
root@example-15:/# cat /tmp/healthy
root@example-15:/# echo $?
0
root@example-15:/# cat /tmp/xander
cat: /tmp/xander: No such file or directory
root@example-15:/# echo $?
1
root@example-15:/# rm -rf /tmp/healthy
root@example-15:/# cat /tmp/healthy
cat: /tmp/healthy: No such file or directory
root@example-15:/# echo $?
1
root@example-15:/# cat /tmp/brook
cat: /tmp/brook: No such file or directory
root@example-15:/# echo $?
1
root@example-15:/# command terminated with exit code 137

Notice that once the probe command failed three times (the default failure threshold), we get the message command terminated with exit code 137. That is when the container is killed and a new one is created (check the pod description below)
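The number 137 is not arbitrary: exit codes above 128 mean the process was terminated by a signal, and 137 = 128 + 9 corresponds to SIGKILL, which is what the container ultimately receives when the liveness probe fails. A small sketch to reproduce the same code locally:

```shell
#!/bin/sh
# Reproduce exit code 137: kill a process with SIGKILL (signal 9)
# and observe its exit status, 128 + 9 = 137.
sleep 30 &                   # a stand-in for the unhealthy container process
pid=$!
kill -9 "$pid"               # effectively what happens on repeated probe failure
wait "$pid"
echo "exit code: $?"         # prints: exit code: 137
```

The same arithmetic decodes other signal-related exit codes, e.g. 143 = 128 + 15 for SIGTERM.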

kubectl describe pods
Output
controlplane $ kubectl describe pods example-15
Name:         example-15
Namespace:    default
Priority:     0
. . .
    State:          Running
      Started:      Fri, 18 Oct 2024 18:27:41 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Fri, 18 Oct 2024 18:23:55 +0000
      Finished:     Fri, 18 Oct 2024 18:27:41 +0000
    Ready:          True
    Restart Count:  1
    Liveness:       exec [cat /tmp/healthy] delay=5s timeout=30s period=5s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ntqtc (ro)
. . .
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  4m19s                default-scheduler  Successfully assigned default/example-15 to node01
  Normal   Pulled     4m14s                kubelet            Successfully pulled image "ubuntu" in 3.758s (3.758s including waiting). Image size: 29754422 bytes.
  Warning  Unhealthy  58s (x3 over 68s)    kubelet            Liveness probe failed: cat: /tmp/healthy: No such file or directory
  Normal   Killing    58s                  kubelet            Container container0 failed liveness probe, will be restarted
  Normal   Pulling    28s (x2 over 4m18s)  kubelet            Pulling image "ubuntu"
  Normal   Created    28s (x2 over 4m14s)  kubelet            Created container container0
  Normal   Started    28s (x2 over 4m14s)  kubelet            Started container container0
  Normal   Pulled     28s                  kubelet            Successfully pulled image "ubuntu" in 400ms (400ms including waiting). Image size: 29754422 bytes.