In this module of the Broad Skills online CKA prep course we will cover the cluster architecture, installation, and configuration topics identified by the CNCF CKA Exam Curriculum. If you are not already familiar with the curriculum, take a moment to review it, as you will be required to know each of these topics in order to pass the exam.
Roles, ClusterRoles, RoleBindings, and ClusterRoleBindings control the permissions user accounts have and how they interact with resources deployed in the cluster. ClusterRoles and ClusterRoleBindings are non-namespaced resources, while Roles and RoleBindings set and bind permissions within a specific namespace.
Kubernetes uses Role-Based Access Control (RBAC) to control which operations users can perform on Kubernetes objects. Clusters bootstrapped with kubeadm have RBAC enabled by default.
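A quick way to confirm that RBAC is active on a cluster is to check whether the RBAC API group is served by the API server (the exact versions listed depend on your Kubernetes release):
$ kubectl api-versions | grep rbac.authorization.k8s.io
rbac.authorization.k8s.io/v1
…
$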
Permissions to API resources are granted using Roles and ClusterRoles (the only difference being that ClusterRoles apply to the entire cluster while Roles apply only to their namespace). Permissions are scoped to API resources and the objects under those API resources. Verbs control what operations can be performed by each role.
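For comparison, a ClusterRole uses the same rule structure as a Role but has no namespace in its metadata; the permissions it grants apply cluster-wide. The following is a minimal sketch (the name node-reader and the resources chosen here are purely illustrative):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch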
Roles can be created imperatively using kubectl create role. You can specify the API resources and verbs associated with the permissions the role will grant:
$ kubectl create role default-appmanager --resource pod,deploy,svc --verb get,list,watch,create,delete -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: default-appmanager
  namespace: default
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - services
  verbs:
  - get
  - list
  - watch
  - create
  - delete
- apiGroups:
  - apps
  resources:
  - deployments
  verbs:
  - get
  - list
  - watch
  - create
  - delete
$
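If you only want to generate the manifest rather than create the Role immediately, the same command can be combined with a client-side dry run and redirected to a file (the flag syntax shown assumes kubectl v1.18 or newer; the file name is just an example):
$ kubectl create role default-appmanager --resource pod,deploy,svc \
  --verb get,list,watch,create,delete --dry-run=client -o yaml > default-appmanager-role.yaml
$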
Roles and ClusterRoles are assigned to users and processes using RoleBindings and ClusterRoleBindings. A RoleBinding associates a subject, such as a user or service account, with a Role. Any permissions granted by the Role are passed to the subject through the RoleBinding.
RoleBindings can also be created imperatively using kubectl create rolebinding. RoleBindings bind roles to users using the --user flag and to service accounts using the --serviceaccount flag. The following example binds the default-appmanager role to the default namespace's default service account:
$ kubectl create rolebinding default-appmanager-rb \
--serviceaccount default:default \
--role default-appmanager
rolebinding.rbac.authorization.k8s.io/default-appmanager-rb created
$
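To verify that the binding works as expected, you can impersonate the service account with kubectl auth can-i (the resource and verb checked here are just examples; the command prints yes or no):
$ kubectl auth can-i create deployments \
  --as system:serviceaccount:default:default -n default
yes
$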
Cluster upgrades involve updating the version of the Kubernetes control plane components and the kubelets that run on every node in the cluster. In general, the API server determines the version of the Kubernetes cluster. The kubelet may be up to two minor versions older than the API server. The other control plane components may be up to one minor version older than the API server. The kubectl client may be one minor version newer or older than the API server.
The details of the version support policy are detailed on the version skew policy page in the Kubernetes documentation.
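Before planning an upgrade it is worth confirming the versions currently in use. One way to do this (the output shown assumes the v1.18.0 cluster used in the example below; the VERSION column of kubectl get nodes reports each node's kubelet version):
$ kubectl version --short
Client Version: v1.18.0
Server Version: v1.18.0
$ kubectl get nodes
…
$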
To upgrade the control-plane node we must update the kubeadm package, drain the node, run kubeadm upgrade plan to check and fetch the new control plane component versions, apply the upgrade, upgrade the kubelet and kubectl, and finally uncordon the node. The following is an example of upgrading a Kubernetes control plane node from Kubernetes v1.18.0 to v1.19.0 on Ubuntu 18.04:
Update the apt repository:
$ sudo apt update
…
$
Install the newer kubeadm version, e.g. v1.19.0:
$ sudo apt install kubeadm=1.19.0-00
…
$
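If the Kubernetes packages were pinned with apt-mark hold when the cluster was installed (as the official install instructions recommend), the kubeadm package has to be unheld before it can be upgraded and re-held afterwards. A sketch assuming that setup:
$ sudo apt-mark unhold kubeadm
$ sudo apt install kubeadm=1.19.0-00
$ sudo apt-mark hold kubeadm
$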
Drain the control plane node (substitute your node's name for <control-plane-node>):
$ kubectl drain <control-plane-node> --ignore-daemonsets
…
$
Run kubeadm upgrade plan with sudo to check and fetch updated control plane components:
$ sudo kubeadm upgrade plan
…
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
kubelet     1 x v1.18.0   v1.19.1

Upgrade to the latest stable version:

COMPONENT                 CURRENT   AVAILABLE
kube-apiserver            v1.18.0   v1.19.1
kube-controller-manager   v1.18.0   v1.19.1
kube-scheduler            v1.18.0   v1.19.1
kube-proxy                v1.18.0   v1.19.1
CoreDNS                   1.6.7     1.7.0
etcd                      3.4.3-0   3.4.9-1
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.19.1
Note: Before you can perform this upgrade, you have to update kubeadm to v1.19.1.
_____________________________________________________________________
The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.
API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________
$
Notice that the kubelet must be upgraded manually after upgrading the control plane.
We see that v1.19.1 is available, but let's upgrade to v1.19.0:
$ sudo kubeadm upgrade apply v1.19.0
…
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.19.0". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
$
Install the corresponding versions of the kubelet and kubectl:
$ sudo apt install kubelet=1.19.0-00 kubectl=1.19.0-00
…
Setting up kubelet (1.19.0-00) ...
Setting up kubectl (1.19.0-00) ...
$
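After installing the new kubelet package, reload systemd and restart the kubelet so the node starts running the new version (the unit name assumes a standard kubeadm installation on Ubuntu):
$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet
$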
Uncordon the control plane node:
$ kubectl uncordon <control-plane-node>
node/<control-plane-node> uncordoned
$
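Worker nodes are upgraded in much the same way, one node at a time, except that kubeadm upgrade node is used instead of kubeadm upgrade apply. The following condensed sketch assumes a worker called <worker-node>; run the kubectl commands from a machine with cluster access and the remaining commands on the worker itself:
$ kubectl drain <worker-node> --ignore-daemonsets
$ sudo apt install kubeadm=1.19.0-00
$ sudo kubeadm upgrade node
$ sudo apt install kubelet=1.19.0-00
$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet
$ kubectl uncordon <worker-node>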
The state of a Kubernetes cluster is contained in the etcd instance(s) backing the cluster. Backing up a Kubernetes cluster is a matter of backing up the etcd instance(s).
One way to perform a backup is by using the etcdctl command:
$ ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.crt --cert=/etc/etcd/server.crt --key=/etc/etcd/server.key \
snapshot save /var/lib/etcd/backup.db
$
This command connects to an etcd cluster and saves its contents to a database file. This database file is then used to restore the entire cluster on a new set of nodes.
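Restoring from the snapshot is done with etcdctl snapshot restore, which writes the snapshot contents into a new data directory; the etcd members are then configured to start from that directory. A minimal sketch, assuming the backup file created above and a restore target of /var/lib/etcd-restore:
$ ETCDCTL_API=3 etcdctl snapshot status /var/lib/etcd/backup.db
…
$ ETCDCTL_API=3 etcdctl snapshot restore /var/lib/etcd/backup.db \
  --data-dir /var/lib/etcd-restore
$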