In this module of the Broad Skills online CKA prep course, we cover the cluster architecture, installation, and configuration topics identified by the CNCF CKA Exam Curriculum. If you are not already familiar with the curriculum, take a moment to review it, as you will need to know each of its topics to pass the exam.
A Service is an abstraction of a logical set of pods and a policy that defines inbound network access to them. A Service uses a selector to target pods by their labels. It exposes the logical set of pods as a network service, providing a single IP address, a DNS name, and load balancing for access to the pods.
The Service type is defined in the manifest. The available Service types are ClusterIP (the default), NodePort, LoadBalancer, and ExternalName.
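As a minimal declarative sketch, a ClusterIP Service selecting pods by label might look like the following (the webserver name matches the example Deployment used in this module):

```yaml
# ClusterIP Service: selects pods labeled app: webserver and
# forwards traffic arriving on its port 80 to the pods' port 80.
apiVersion: v1
kind: Service
metadata:
  name: webserver
spec:
  type: ClusterIP        # default type; shown explicitly for clarity
  selector:
    app: webserver
  ports:
  - port: 80             # port exposed on the ClusterIP
    targetPort: 80       # container port on the selected pods
```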
Services can be created imperatively for a running resource. At minimum, the resource type, resource name, and the Service's exposed port are required, e.g.:
kubectl expose <resource> <resource_name> --port=<port-number>
$ kubectl create deploy webserver --image nginx
deployment.apps/webserver created
$ kubectl expose deploy webserver --port 80
service/webserver exposed
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 33d
webserver ClusterIP 10.103.175.171 <none> 80/TCP 4s
$
Services select pods using labels, and for each Service an Endpoints resource is created. The Endpoints resource describes all active network targets (pods) that the Service routes traffic to, and kube-proxy programs an additional iptables rule for each endpoint with a target pod's IP. An alternative to Endpoints is EndpointSlices. EndpointSlices are conceptually and functionally similar to Endpoints, but each slice holds at most 100 endpoints by default, which improves management at scale.
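EndpointSlices can be listed with kubectl get endpointslices. An abridged EndpointSlice for the webserver Service above might look like the following (the generated name suffix and field trimming are illustrative):

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: webserver-abc12                    # suffix is controller-generated; hypothetical here
  labels:
    kubernetes.io/service-name: webserver  # ties the slice back to its Service
addressType: IPv4
ports:
- port: 80
  protocol: TCP
endpoints:
- addresses:
  - 10.32.0.8                              # the webserver pod's IP
```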
$ kubectl get endpoints webserver
NAME ENDPOINTS AGE
webserver 10.32.0.8:80 43s
$ kubectl get pods -o wide -l app=webserver
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
webserver-d698d7bd6-ktxvn 1/1 Running 0 83s 10.32.0.8 ubuntu <none> <none>
$
Ingresses are another resource that interact with Services. Ingresses bind Services to external endpoints that an Ingress controller on the cluster then exposes to the outside world. Ingresses reference Services directly in their manifests, as shown here:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: webserver-ingress
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: webserver
          servicePort: 80
The Ingress resource manages external access to Kubernetes services via HTTP and HTTPS routes. An Ingress controller is required to satisfy an Ingress. The Ingress controller reads and implements the rules of the Ingress resource.
Use the following command to set up an Ingress Controller in your Kubernetes cluster:
$ kubectl apply -f https://rx-m-k8s.s3-us-west-2.amazonaws.com/ingress-drill-setup.yaml
namespace/nginx-ingress created
serviceaccount/nginx-ingress created
clusterrole.rbac.authorization.k8s.io/nginx-ingress created
service/nginx-ingress created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress created
secret/default-server-secret created
deployment.apps/nginx-ingress created
$
Roles and ClusterRoles are assigned to users and processes using RoleBindings and ClusterRoleBindings. RoleBindings associate a user, like a Service Account, with a Role. Any permissions granted by a role are passed to the user through the RoleBinding.
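As a sketch, a RoleBinding that grants a hypothetical pod-reader Role to the default ServiceAccount might look like this:

```yaml
# Hypothetical RoleBinding: passes the permissions of the pod-reader Role
# to the default ServiceAccount in the default namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: Role             # a ClusterRoleBinding would reference a ClusterRole instead
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```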
Create the following Deployment of Apache webserver that exposes the container port 80:
$ kubectl create deploy apache-webserver --image=httpd --port=80
deployment.apps/apache-webserver created
$
Create a NodePort Service that exposes the apache-webserver Deployment on node port 30111 and maps port 80 on the ClusterIP to port 80 on the container:
$ kubectl create service nodeport apache-webserver --tcp=80:80 --node-port=30111
service/apache-webserver created
$
Create the following Ingress resource for the apache-webserver Service. It controls traffic to the host domain www.example.com, exposes an HTTP Prefix path at /, and routes all traffic sent to www.example.com:30111/ to the apache-webserver Service on port 80:
$ nano apache-webserver-ingress.yaml && cat apache-webserver-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apache-webserver-ingress
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: apache-webserver
            port:
              number: 80
$ kubectl apply -f apache-webserver-ingress.yaml
ingress.networking.k8s.io/apache-webserver-ingress created
$
Test the Ingress rules with curl, using --resolve to map www.example.com to the node's loopback address:
$ curl --resolve www.example.com:30111:127.0.0.1 http://www.example.com:30111
<h1>It works!</h1>
$
Kubernetes uses CoreDNS for DNS-based service discovery. CoreDNS is flexible and changes can be made in the ConfigMap for CoreDNS.
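The CoreDNS configuration is a Corefile stored in the coredns ConfigMap in the kube-system namespace. An abridged example of what a typical Corefile contains (details vary by cluster):

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
}
```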
Every Service is assigned a DNS name with the syntax <service-name>.<namespace>.svc.cluster.local.
Pods are assigned a DNS A record with the syntax <pod-ip-with-dashes>.<namespace>.pod.cluster.local, where the dots in the pod's IP are replaced with hyphens (e.g. 10-32-0-8.default.pod.cluster.local for pod IP 10.32.0.8 in the default namespace).
Let's confirm the DNS entry of a Service with a name server lookup using nslookup from within a pod.
Create a ClusterIP Service to test its DNS entry and retrieve its ClusterIP:
$ kubectl create service clusterip my-service --tcp=8080:8080
service/my-service created
$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 10d
my-service ClusterIP 10.109.165.220 <none> 8080/TCP 5m31s
$
Run a pod with the busybox image and run an nslookup on the Service's IP:
$ kubectl run busybox --image=busybox -it -- /bin/sh
If you don’t see a command prompt, try pressing enter.
/ # nslookup 10.109.165.220
Server: 10.96.0.10
Address: 10.96.0.10:53
220.165.109.10.in-addr.arpa name = my-service.default.svc.cluster.local
/ # exit
$
All cluster components that need to communicate with the API server must authenticate using a certificate signed by the cluster CA. Each client certificate must contain the user as a subject common name and a group as an organization. The CA certificate that signs all of a kubeadm cluster's component certificates is found under /etc/kubernetes/pki.
Kubeadm clusters automatically distribute cluster-CA-signed certificates to all control plane components at bootstrap. The cluster certificates are temporarily stored in the cluster as Secrets for up to 2 hours after bootstrap. To re-upload the cluster certificates when adding a new control plane node, rerun kubeadm's upload-certs phase:
$ sudo kubeadm init phase upload-certs --upload-certs
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
7d42b0fbecf1f12597591513e6b1e1009fd46bd617f33679c050abe30310b006
$
Then, run kubeadm token create on the master node with the certificate key to generate a join command for additional control plane nodes:
$ sudo kubeadm token create --print-join-command --certificate-key 7d42b0fbecf1f12597591513e6b1e1009fd46bd617f33679c050abe30310b006
kubeadm join 192.168.229.134:6443 \
--token yrl04z.14yaclt7m8hljjpw \
--discovery-token-ca-cert-hash sha256:50fecf38c50b760131e7ff3ae6c80d89aa01243e9c6c1d634077eedeb4940929 \
--control-plane \
--certificate-key 7d42b0fbecf1f12597591513e6b1e1009fd46bd617f33679c050abe30310b006
$
This join command instructs kubeadm to have the new control plane node download the certificates.
For worker nodes, the process is similar: kubeadm join instructs the target node's kubelet to perform a TLS bootstrap, automatically requesting a new certificate from the cluster.
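For comparison, a worker join command omits the control plane flags; the token and hash below are placeholders, not values from this cluster:

```
# Worker join: no --control-plane or --certificate-key flags are needed.
kubeadm join 192.168.229.134:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```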
Image security is handled in different ways. One way is to control access to private registries using an imagePullSecret, which contains the credentials needed to access a registry. An image pull secret is based on Docker's config.json, which is created by docker login. You can create an imagePullSecret imperatively by supplying your credentials:
$ kubectl create secret docker-registry myregistry \
--docker-server=https://my.image.registry \
--docker-username=my-user --docker-password=my-pw \
--docker-email=myacc@image.registry
$
Container images that need to be pulled from the my.image.registry private registry retrieve those credentials through the imagePullSecrets key in the pod spec:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: fluentbitcommandpod
  name: fluentbitcommandpod
spec:
  containers:
  - command:
    - /fluent-bit/bin/fluent-bit
    - -i
    - mem
    - -o
    - stdout
    image: myregistry/my-fluent-bit
    name: fluentbitcommandpod
  imagePullSecrets:
  - name: myregistry
Container images can be referred to using the sha256 hash of the image. This tells the container runtime to use an exact version of the image at all times. Here is an example of updating a Kubernetes deployment using a specific image SHA:
$ kubectl set image deploy nginx-prod nginx=myregistry/nginx@sha256:2397b05f8a7df1cf48d51314a5e2c249f1e83a4211cd78ceb58e8372bf087f07 --record=true
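The same digest pinning can be expressed declaratively in a pod spec; the registry path here is the hypothetical one used above:

```yaml
# Container spec fragment pinning the image to an exact digest; the
# runtime will only run the image whose content matches this sha256.
containers:
- name: nginx
  image: myregistry/nginx@sha256:2397b05f8a7df1cf48d51314a5e2c249f1e83a4211cd78ceb58e8372bf087f07
```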
Security contexts define privilege and access control settings for a Pod or container. A securityContext field in a pod or container spec enables granular control over the user or group a container runs as, the permissions granted to those users, and other options like filesystem access or the ability to run as root.
To specify a securityContext, include the
securityContext
key inside a pod or container manifest:
apiVersion: v1
kind: Pod
metadata:
  name: cka-security-context
spec:
  securityContext:
    runAsUser: 1000
  containers:
  - name: sec-ctx-demo
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
    securityContext:
      allowPrivilegeEscalation: false
Security contexts allow adjustment of the pod and container security posture and capability. For example, the pod in the spec above runs as a non-root user, and the container is not allowed to use privilege escalation (mechanisms like sudo).
The persistent key-value store in Kubernetes is etcd. Only the API server has access to the etcd instance running in a cluster. Access to etcd is restricted to principals bearing a certificate signed by the etcd CA. In kubeadm clusters, the etcd certificates are found under /etc/kubernetes/pki/etcd. A client must provide the CA certificate and a client key and certificate to contact the etcd instance from outside the Kubernetes cluster. One surefire way to do this is by imitating the API server's access:
$ ps -ef | grep "kube-apiserver"
root 3288 3219 1 Feb25 ? 00:23:11 kube-apiserver
…
--etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
--etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
--etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
--etcd-servers=https://127.0.0.1:2379
...
$
Etcd uses its own CA certificate; any clients that need to connect to etcd must have a certificate signed by this CA to communicate with etcd. By providing those certificates, you can use an external client like etcdctl to interact with the etcd cluster:
$ sudo etcdctl member list \
--endpoints 127.0.0.1:2379 \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--cert /etc/kubernetes/pki/apiserver-etcd-client.crt \
--key /etc/kubernetes/pki/apiserver-etcd-client.key
f093f1e641b93448, started, ubuntu, https://192.168.229.134:2380, https://192.168.229.134:2379, false
$
Run the following command:
kubectl run nginx-drill --image nginx
Create a NodePort Service that allows you to send a curl request to the nginx-drill pod at port 80 through your machine’s IP address.
First, run the command to create the pod:
$ kubectl run nginx-drill --image nginx
pod/nginx-drill created
$
Then, use kubectl expose with the --type NodePort flag to create a NodePort Service imperatively. Make sure to expose the pod, since that is what was created by the initial run command:
$ kubectl expose --type NodePort --port 80 pod nginx-drill
service/nginx-drill exposed
$
After exposing the pod, list the Services. You will see the nginx-drill NodePort Service maps port 80 to a port within the 30000-32767 range:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 33d
nginx-drill NodePort 10.101.103.201 <none> 80:32402/TCP 10s
$
Finally, try to send a curl request to the nginx pod using your machine's IP address:
$ ip a s | head
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:b3:d9:13 brd ff:ff:ff:ff:ff:ff
inet 192.168.229.134/24 brd 192.168.229.255 scope global ens33
valid_lft forever preferred_lft forever
$ curl 192.168.229.134:32402
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
$
As another exercise, create a ClusterIP Service called other-svc using kubectl create, and use a label selector to associate it with the nginx-drill pod created above.
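One possible approach, as a sketch: kubectl create service sets the selector to app: other-svc by default, so the selector must be updated to match the pod's run: nginx-drill label:

```
# Create the Service, then point its selector at the nginx-drill pod's label.
kubectl create service clusterip other-svc --tcp=80:80
kubectl set selector service other-svc 'run=nginx-drill'
```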