CKA Self-Study
Module 2
In this module of the Broad Skills online CKA prep course we cover the workloads and scheduling topics identified by the CNCF CKA Exam Curriculum: Deployments, rolling updates and rollbacks, application configuration, scaling, and self-healing applications. If you are not already familiar with the curriculum, take a moment to review it, as you will need to know each of its topics in order to pass the exam.
Deployments and Rolling Updates
A Deployment is a controller that ensures an application's pods run according to a desired state. Deployments create and manage ReplicaSets, which in turn create and remove pods to match the Deployment's desired state. Kubelets report the current state of their pods to the Kubernetes API server, which stores both the current and desired state in etcd. When the two differ, the Deployment and ReplicaSet controllers create or delete pods through the API server, and the kubelets on the affected nodes start or stop containers until the current state matches the desired state.
The Deployment spec declares the desired state of pod configurations under the pod template. The following example is a Deployment of 3 nginx pods using the nginx version 1.16 image:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16
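Assuming this manifest is saved as nginx-deployment.yaml (the filename is illustrative), it can be applied to the cluster with kubectl apply:
$ kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx created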
Roles and ClusterRoles are assigned to users and processes using RoleBindings and ClusterRoleBindings. A RoleBinding associates a user, such as a service account, with a Role; any permissions granted by the Role are passed to the user through the RoleBinding.
RoleBindings can also be created imperatively with kubectl create rolebinding, which binds Roles to users with the --user flag and to service accounts with the --serviceaccount flag. The following example binds the default-appmanager Role to the default namespace's default service account.
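The binding name appmanager-binding below is illustrative, and the default-appmanager Role is assumed to already exist in the default namespace:
$ kubectl create rolebinding appmanager-binding --role=default-appmanager --serviceaccount=default:default
rolebinding.rbac.authorization.k8s.io/appmanager-binding created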
Updates to the Deployment's pod template trigger a gradual update. When a Deployment's pod template is updated, a new ReplicaSet is created that creates new pods based on the updated pod spec. As the new pods become ready, the previous version's ReplicaSet is scaled down to zero to remove the old pods. This strategy is known as a rolling update.
The following example creates a Deployment of nginx pods with 3 replicas. The --record option annotates and saves the kubectl command for future reference. The Deployment's rollout status and history are then verified with kubectl rollout.
$ kubectl create deployment nginx --image=nginx:1.16 --replicas=3 --record
deployment.apps/nginx created
$ kubectl rollout status deploy nginx
deployment "nginx" successfully rolled out
$ kubectl rollout history deploy nginx
deployment.apps/nginx
REVISION  CHANGE-CAUSE
1         kubectl create deployment nginx --image=nginx:1.16 --replicas=3 --record=true
$
Because the --record option was used to create the Deployment, the annotation is listed under the CHANGE-CAUSE column. If --record had not been used, <none> would appear under CHANGE-CAUSE for revision 1.
Next, update the Deployment to use the nginx version 1.17 image. This update triggers a rolling update: a new ReplicaSet is created and the pods under the old ReplicaSet are terminated (scaled to 0). After updating the Deployment, check the rollout status immediately to capture the rolling update in progress.
$ kubectl set image deploy nginx nginx=nginx:1.17 --record
deployment.apps/nginx image updated
$ kubectl rollout status deploy nginx
Waiting for deployment "nginx" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "nginx" rollout to finish: 1 old replicas are pending termination...
deployment "nginx" successfully rolled out
$
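The rolling update is also visible in the Deployment's ReplicaSets: the new ReplicaSet holds all three replicas while the old one has been scaled to zero. The hash suffixes below are illustrative and will differ in your cluster:
$ kubectl get rs
NAME               DESIRED   CURRENT   READY   AGE
nginx-5d66cc795f   3         3         3       90s
nginx-6fb68bf9f6   0         0         0       10m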
Deployments and Rollbacks
Kubernetes allows users to undo Deployment updates. A Deployment can be rolled back to the previous revision with kubectl rollout undo deploy, or to a specific revision with the --to-revision flag.
Using the previous example, let’s look at the revisions available.
$ kubectl rollout history deploy nginx
deployment.apps/nginx
REVISION  CHANGE-CAUSE
1         kubectl create deployment nginx --image=nginx:1.16 --replicas=3 --record=true
2         kubectl set image deploy nginx nginx=nginx:1.17 --record=true
$
The Deployment's update is now recorded as revision 2. Again, if --record had not been used, <none> would be listed under the CHANGE-CAUSE column.
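The pod template saved for a particular revision can be inspected with the --revision flag:
$ kubectl rollout history deploy nginx --revision=2
The output lists that revision's pod template, including the nginx:1.17 image.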
Next we undo the rollout to a specific revision, watch the status, and check the rollout history.
$ kubectl rollout undo deploy nginx --to-revision=1
deployment.apps/nginx rolled back
$ kubectl rollout status deploy nginx
Waiting for deployment "nginx" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "nginx" rollout to finish: 1 old replicas are pending termination...
deployment "nginx" successfully rolled out
$ kubectl rollout history deploy nginx
deployment.apps/nginx
REVISION  CHANGE-CAUSE
2         kubectl set image deploy nginx nginx=nginx:1.17 --record=true
3         kubectl create deployment nginx --image=nginx:1.16 --replicas=3 --record=true
$
The Deployment is back to using the nginx 1.16 image.
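One way to confirm this is to read the Deployment's current image with a JSONPath query:
$ kubectl get deploy nginx -o jsonpath='{.spec.template.spec.containers[0].image}'
nginx:1.16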
Configure Applications
There are several ways to configure applications running under Kubernetes. One way is to change the command and arguments run in the container using the command and args arrays in a YAML file:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - command:
    - /bin/sh
    args:
    - -c
    - tail -f /dev/null
    image: busybox
    name: busybox
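A roughly equivalent pod can be created imperatively with kubectl run; when the --command flag is used, everything after -- becomes the container's command rather than its args:
$ kubectl run busybox --image=busybox --command -- /bin/sh -c 'tail -f /dev/null'
pod/busybox created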
Application configurations and credentials can be stored in the cluster as ConfigMap or Secret resources. Containers running in pods can consume ConfigMaps and Secrets as volumes or environment variables. ConfigMaps can be created from literal key-value pairs or from files. Below, we create a ConfigMap from a redis configuration file on disk.
$ cat redis.conf
bind 127.0.0.1
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
$ kubectl create configmap redisconf --from-file=redis.conf
configmap/redisconf created
$ kubectl describe configmap redisconf
Name:         redisconf
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
redis.conf:
----
bind 127.0.0.1
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300

Events:  <none>
$
The redis.conf file is now available for any pod to use and mount. ConfigMaps are a good way to make common configuration files available to applications running anywhere in a Kubernetes cluster. The example below shows a pod that runs redis using the redis.conf file stored as a ConfigMap:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: redis-dev
  name: redis-dev
spec:
  containers:
  - command:
    - redis-server
    - /config/redis.conf
    image: redis
    name: redis-dev
    volumeMounts:
    - name: redis
      mountPath: /config
  volumes:
  - name: redis
    configMap:
      name: redisconf
  restartPolicy: OnFailure
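ConfigMaps can also be created from literal key-value pairs; the ConfigMap name and keys below are illustrative:
$ kubectl create configmap redisenv --from-literal=REDIS_HOST=127.0.0.1 --from-literal=REDIS_PORT=6379
configmap/redisenv created
A ConfigMap like this is typically consumed as container environment variables rather than as a mounted file.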
Scale Applications
Applications deployed using a controller like a Deployment or StatefulSet can be scaled up or down by modifying the number of replicas.
Changing the replicas value in the controller's spec triggers an update to the application's current ReplicaSet that increases (or reduces) the number of pods running the application. This can be done imperatively using kubectl scale:
$ kubectl scale deploy redis-prod --replicas=3
deployment.apps/redis-prod scaled
$
Or declaratively, by editing the controller's YAML manifest and applying it to the cluster:
$ nano redis-prod.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: redis-prod
  name: redis-prod
spec:
  replicas: 5
  selector:
    matchLabels:
      app: redis-prod
  template:
    metadata:
      labels:
        app: redis-prod
    spec:
      containers:
      - image: redis:4.0
        name: redis
$ kubectl apply -f redis-prod.yaml
deployment.apps/redis-prod configured
$
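The new replica count can be verified with kubectl get (the AGE value below is illustrative):
$ kubectl get deploy redis-prod
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
redis-prod   5/5     5            5           10m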
Self-healing Applications
A self-healing application in the context of Kubernetes:
- Automatically recovers containers from an unhealthy state
- Ensures at least a single copy of the application is running at all times
- Maintains a consistent network identity
Users can create a simple self-healing application using a controller to maintain a desired state (at least one running pod) and a service to maintain a consistent network identity in the face of pod deletion/recreation.
$ kubectl create deploy apache-prod --image httpd
deployment.apps/apache-prod created
$ kubectl expose deploy apache-prod --port 80
service/apache-prod exposed
$
This creates a Deployment, which ensures at least one copy of the application runs at all times, and a Service, which maintains a consistent network identity for it.
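Deleting the pod demonstrates the self-healing behavior: the Deployment's ReplicaSet immediately creates a replacement. The pod name suffixes below are illustrative:
$ kubectl delete pod apache-prod-75c9bd5f66-x2k4q
pod "apache-prod-75c9bd5f66-x2k4q" deleted
$ kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
apache-prod-75c9bd5f66-m8rzt   1/1     Running   0          6s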
A liveness probe configured in the Deployment spec provides the pod’s managing kubelet with a way to check if the application is alive.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: apache-prod
  name: apache-prod
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  selector:
    matchLabels:
      app: apache-prod
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: apache-prod
    spec:
      containers:
      - image: httpd
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 1
          httpGet:
            path: /
            port: 80
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: httpd
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
If this liveness probe ever fails, the kubelet tells the container runtime to restart the container, bringing the application back from an unhealthy state.
Practice Drill
Create a Deployment named cicd with five replicas whose pods run the jenkins/jenkins:lts image.
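One possible imperative solution (a declarative manifest works just as well):
$ kubectl create deployment cicd --image=jenkins/jenkins:lts --replicas=5
deployment.apps/cicd created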