CKAD Self-Study

Module 3



In this module of the Broad Skills online CKAD prep course, we cover the core concepts and configuration topics identified by the CNCF CKAD Exam Curriculum. If you are not already familiar with the curriculum, take a moment to review it, as you will be required to demonstrate knowledge of each topic in order to pass the exam.


Deployments and Rolling Updates


A Deployment is a controller that ensures an application’s pods run according to a desired state. Deployments create and control ReplicaSets, which in turn create and remove pods according to the Deployment’s desired state. Kubelets report the current state of their pods to the Kubernetes API server, and the desired state is stored in etcd. When the current and desired states differ, Kubernetes creates, updates, or removes pods through the ReplicaSet and the kubelets until the current state matches the desired state.



The Deployment spec declares the desired state of the pods under the pod template. The following example is a Deployment of 3 nginx pods using the nginx 1.16 image:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16

Updates to the Deployment’s pod template trigger a gradual update. When the pod template is updated, a new ReplicaSet is created, which then creates new pods based on the updated pod spec. As the new pods come up, the previous version’s ReplicaSet is scaled down to zero to remove the old pods. This strategy is known as a rolling update.
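
The pace of a rolling update can be tuned with the Deployment’s update strategy. The snippet below is a minimal sketch of the relevant spec fields (the maxSurge and maxUnavailable values shown are illustrative, not required):

spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 pod above the desired replica count during an update
      maxUnavailable: 1    # at most 1 pod may be unavailable during an update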


The following example creates a Deployment of nginx pods with 3 replicas. The --record option annotates the Deployment with the kubectl command that was run, for future reference. The Deployment’s rollout status and history are then verified with kubectl rollout status and kubectl rollout history.

$ kubectl create deployment nginx --image=nginx:1.16 --replicas=3 --record


deployment.apps/nginx created


$ kubectl rollout status deploy nginx


deployment "nginx" successfully rolled out


$ kubectl rollout history deploy nginx


deployment.apps/nginx

REVISION CHANGE-CAUSE

1 kubectl create deployment nginx --image=nginx:1.16 --replicas=3 --record=true


$

Because the --record option was used to create the Deployment, the command is listed under the CHANGE-CAUSE column. If --record had not been used, <none> would appear under CHANGE-CAUSE for revision 1.
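
Under the hood, --record stores the command in the kubernetes.io/change-cause annotation on the Deployment, so the change cause can also be set or updated directly with kubectl annotate (the message below is just an example):

$ kubectl annotate deploy nginx kubernetes.io/change-cause="initial rollout of nginx 1.16"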



Next, update the Deployment to use the nginx version 1.17 image. This update will trigger a rolling update: a new ReplicaSet will be created and the old ReplicaSet will be scaled to 0, terminating its pods. After updating the Deployment, check the rollout status immediately to capture the rolling update in progress.

$ kubectl set image deploy nginx nginx=nginx:1.17 --record


deployment.apps/nginx image updated


$ kubectl rollout status deploy nginx


Waiting for deployment "nginx" rollout to finish: 2 out of 3 new replicas have been updated...

Waiting for deployment "nginx" rollout to finish: 2 out of 3 new replicas have been updated...

Waiting for deployment "nginx" rollout to finish: 2 out of 3 new replicas have been updated...

Waiting for deployment "nginx" rollout to finish: 1 old replicas are pending termination...

Waiting for deployment "nginx" rollout to finish: 1 old replicas are pending termination...

deployment "nginx" successfully rolled out


$

Deployments and Rollbacks


Kubernetes allows users to undo Deployment updates. A Deployment can be rolled back to its previous revision with kubectl rollout undo deploy, or to a specific revision with the --to-revision flag.

Using the previous example, let’s look at the revisions available and then roll back to revision 1, which used the nginx 1.16 image.
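
The revisions can be listed with kubectl rollout history; given the commands run above, the output should look roughly like this (formatting may vary slightly by kubectl version):

$ kubectl rollout history deploy nginx

deployment.apps/nginx
REVISION CHANGE-CAUSE
1 kubectl create deployment nginx --image=nginx:1.16 --replicas=3 --record=true
2 kubectl set image deploy nginx nginx=nginx:1.17 --record=true

Now roll back to revision 1: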

$ kubectl rollout undo deploy nginx --to-revision=1


deployment.apps/nginx rolled back


$ kubectl rollout status deploy nginx


Waiting for deployment "nginx" rollout to finish: 2 out of 3 new replicas have been updated...

Waiting for deployment "nginx" rollout to finish: 2 out of 3 new replicas have been updated...

Waiting for deployment "nginx" rollout to finish: 2 out of 3 new replicas have been updated...

Waiting for deployment "nginx" rollout to finish: 1 old replicas are pending termination...

Waiting for deployment "nginx" rollout to finish: 1 old replicas are pending termination...

deployment "nginx" successfully rolled out


$ kubectl rollout history deploy nginx


deployment.apps/nginx

REVISION CHANGE-CAUSE

2 kubectl set image deploy nginx nginx=nginx:1.17 --record=true

3 kubectl create deployment nginx --image=nginx:1.16 --replicas=3 --record=true


$

The Deployment is back to using the nginx 1.16 image.

Jobs and CronJobs

Jobs run a task to completion. A Job is complete when its pod finishes the task and exits successfully.
 
There are three types of jobs:

  • Non-parallel jobs - the job runs a single pod until it completes successfully
  • Parallel jobs with a fixed completion count - the job runs multiple pods in parallel and defines the number of successful completions required before the job is finished
  • Parallel jobs without a fixed completion count - the job runs multiple pods in parallel; when one pod succeeds, the job is complete and the remaining pods terminate. This is also called a work queue

 
The following manifest describes a parallel job with a fixed number of completions. The job outputs the date to the container’s standard out. The job will run 5 pods in parallel and stop after 20 successful completions.

apiVersion: batch/v1
kind: Job
metadata:
  name: date-job
spec:
  parallelism: 5
  completions: 20
  template:
    metadata:
      name: date-job
    spec:
      containers:
      - name: busybox
        image: busybox
        command:
        - /bin/sh
        - -c
        - date
      restartPolicy: OnFailure

At the end of this Job, there would be 20 completed pods. Obtaining the container log for any of the 20 pods outputs the date the container ran.
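
To verify this, the Job’s pods can be listed by the job-name label that Kubernetes adds to them, and the log of any one pod retrieved (replace the placeholder with an actual pod name from the list):

$ kubectl get pods -l job-name=date-job
$ kubectl logs <one-of-the-date-job-pods>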

CronJobs run Jobs on a schedule and are used to automate tasks. The following CronJob manifest creates a CronJob that runs every minute and outputs the date to the container’s standard out.

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cron-job
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: busybox
            image: busybox
            args:
            - /bin/sh
            - -c
            - date
          restartPolicy: OnFailure
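
After creating the CronJob, you can confirm that it is scheduling a Job each minute; the Job and pod names reported will include generated suffixes:

$ kubectl get cronjob cron-job
$ kubectl get jobs --watch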

Labels, Selectors, Annotations

Labels are key/value pairs attached to Kubernetes objects such as pods, persistent volumes, and cluster nodes. Labels help manage and organize Kubernetes objects into logical groups and can also be used to constrain which objects other resources apply to. For example, a NetworkPolicy targets pods within the same namespace using labels on the pods.
 
Some commands use selectors to identify and select Kubernetes objects by their labels. Selectors are passed with the -l (or --selector) flag to filter on labels.
 
There are two selector types:

  • Equality/inequality-based
    • = or == for equality
    • != for inequality
  • Set-based
    • in for labels whose key has a value in the given set
    • notin for labels whose key has a value not in the given set
    • key_name for labels that have the given key, regardless of its value

Take a look at using labels and selectors. Run the following deployments and pod to launch pods with an environment label and a release label. The pods will have an environment label of prod, dev, or qa and a release label of stable or edge. Then use selectors to filter the pods by their labels.

N.B. When creating the deployments, the first -l option labels the deployment and the second -l option labels the pods.

$ kubectl run nginx-deploy --image=nginx:1.9 --replicas=5 \

-l environment=prod -l environment=prod,release=stable


kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.

deployment.apps/nginx-deploy created


$ kubectl run nginx-pod --generator=run-pod/v1 --image=nginx:latest \

-l environment=dev,release=edge


pod/nginx-pod created


$ kubectl run nginx-qa --image=nginx:latest --replicas=3 \

-l environment=qa -l environment=qa,release=edge


kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.

deployment.apps/nginx-qa created


$

Now we have 9 pods running.

$ kubectl get pods


NAME READY STATUS RESTARTS AGE

nginx-deploy-86f8b8c8d4-8j78k 1/1 Running 0 21s

nginx-deploy-86f8b8c8d4-cbsbz 1/1 Running 0 21s

nginx-deploy-86f8b8c8d4-jq2cb 1/1 Running 0 21s

nginx-deploy-86f8b8c8d4-l8ck8 1/1 Running 0 21s

nginx-deploy-86f8b8c8d4-smfmf 1/1 Running 0 21s

nginx-pod 1/1 Running 0 14s

nginx-qa-55d6b56d5c-fssmn 1/1 Running 0 8s

nginx-qa-55d6b56d5c-mmsp9 1/1 Running 0 8s

nginx-qa-55d6b56d5c-xlks7 1/1 Running 0 8s


$

Let’s use selectors to filter on labels and identify the pods we’re looking for.

First, get the pods that are not running in production:

$ kubectl get pod -l environment!=prod --show-labels


NAME READY STATUS RESTARTS AGE LABELS

nginx-pod 1/1 Running 0 67s environment=dev,release=edge

nginx-qa-55d6b56d5c-fssmn 1/1 Running 0 61s environment=qa,pod-template-hash=55d6b56d5c,release=edge

nginx-qa-55d6b56d5c-mmsp9 1/1 Running 0 61s environment=qa,pod-template-hash=55d6b56d5c,release=edge

nginx-qa-55d6b56d5c-xlks7 1/1 Running 0 61s environment=qa,pod-template-hash=55d6b56d5c,release=edge


$

We can also retrieve non-production pods with set-based requirements:

$ kubectl get pods -l "environment notin (prod)" --show-labels


NAME READY STATUS RESTARTS AGE LABELS

nginx-pod 1/1 Running 0 84s environment=dev,release=edge

nginx-qa-55d6b56d5c-fssmn 1/1 Running 0 78s environment=qa,pod-template-hash=55d6b56d5c,release=edge

nginx-qa-55d6b56d5c-mmsp9 1/1 Running 0 78s environment=qa,pod-template-hash=55d6b56d5c,release=edge

nginx-qa-55d6b56d5c-xlks7 1/1 Running 0 78s environment=qa,pod-template-hash=55d6b56d5c,release=edge


$

Using the comma separator acts like a logical and ( && ) operator. The following example lists pods in the dev environment and with an edge release:

$ kubectl get pods -l environment=dev,release=edge --show-labels


NAME READY STATUS RESTARTS AGE LABELS

nginx-pod 1/1 Running 0 108s environment=dev,release=edge


$

Annotations are similar to labels in that they are metadata key/value pairs. They differ from labels in that they are not used for object selection; instead, they are typically consumed by external applications and are retrievable by API clients, tools, and libraries. Annotations can be declared in an object’s manifest.

The following example is a pod manifest with annotations for build and image information.

apiVersion: v1
kind: Pod
metadata:
  name: hostinfo
  annotations:
    build: "on"
    builder: rxmllc
    imageregistery: "https://hub.docker.com/r/rxmllc/hostinfo"
spec:
  containers:
  - name: hostinfo
    image: rxmllc/hostinfo
  restartPolicy: Never
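
Once the pod is created, the annotations can be read back with kubectl describe or a JSONPath query (a quick sketch; the exact output format may vary by kubectl version):

$ kubectl describe pod hostinfo | grep -A3 Annotations
$ kubectl get pod hostinfo -o jsonpath='{.metadata.annotations}'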

Persistent Volume Claims

A Kubernetes persistent volume exists outside the lifecycle of any pod that mounts it. Persistent volumes are storage objects managed by the Kubernetes cluster and provisioned from the cluster’s infrastructure (like the host’s filesystem).
 
Persistent volumes describe details of a storage implementation for the cluster, including:

  • Access modes for the volume
  • The total capacity of the volume
  • What happens to the data after the volume is unclaimed
  • The type of storage
  • An optional, custom storage class identifier

 
Persistent volume claims are an abstraction of persistent volumes. A persistent volume claim is a request for storage. Persistent volume claims bind to existing persistent volumes based on a number of factors, such as label selectors, storage class name, storage capacity, and access mode. Persistent volume claims can also dynamically create persistent volumes using an existing storage class. Pods reference persistent volume claims by name in the pod’s manifest.
 
Let’s see how pods bind to a persistent volume claim and how a persistent volume claim binds to a persistent volume.
 
The manifest below is for a persistent volume with the following characteristics:

  • Label of k8scluster: master
  • Storage class name is local
  • Storage capacity is 200Mi
  • One node can mount the volume as read-write (access mode = ReadWriteOnce)
  • The volume is released when its bound persistent volume claim is deleted, but its data is retained and it does not become available for a new claim until the persistent volume itself is deleted (persistentVolumeReclaimPolicy = Retain)
  • Mounted to a host path of /home/ubuntu/persistentvolume.


apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-volume
  labels:
    k8scluster: master
spec:
  storageClassName: local
  capacity:
    storage: 200Mi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /home/ubuntu/persistentvolume

In the example above, a persistent volume claim can use one or more of the following to bind to the persistent volume: label, storage class name, storage capacity, and access mode.

The following example describes a persistent volume claim that binds to the local-volume persistent volume by using a selector on the label k8scluster: master, the storage class name local, and a matching storage capacity and access mode.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi
  storageClassName: local
  selector:
    matchLabels:
      k8scluster: master

Now let’s create a pod that binds to the persistent volume claim. The following pod manifest binds to the persistent volume claim by name and mounts the volume to the container’s /usr/share/nginx/html directory.

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:latest
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: local-pvc

After creating this pod, verify the binding by describing the persistent volume claim and grepping for "Mounted By", then describing the pod and grepping for "Volumes".

$ kubectl describe pvc local-pvc | grep "Mounted By"


Mounted By: nginx


$ kubectl describe pod nginx | grep -A3 Volumes


Volumes:

  data:

  Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)

  ClaimName: local-pvc


$
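
You can also confirm that the claim and the volume are bound to each other; both should report a STATUS of Bound (capacities and ages will reflect your cluster):

$ kubectl get pv local-volume
$ kubectl get pvc local-pvc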

Practice Drill

  • Create a Deployment that creates 2 replicas of pods using the nginx:1.9 image.
  • Update the Deployment to use the latest nginx image.
  • Undo the image update and rollback the Deployment to use the nginx 1.9 image.


Practice Drill: Answer

    $ kubectl create deployment nginx --image=nginx:1.9 --replicas=2 --record


    deployment.apps/nginx created


    $ kubectl set image deploy nginx nginx=nginx:latest --record


    deployment.apps/nginx image updated


    $ kubectl rollout undo deploy nginx


    deployment.apps/nginx rolled back


    $
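
To confirm the rollback restored the nginx:1.9 image, inspect the Deployment’s image and rollout history (the revision numbers will depend on how many updates were made):

$ kubectl describe deploy nginx | grep Image
$ kubectl rollout history deploy nginx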
