I highly encourage you to set up a personal environment to practice for the exam.
The main things I recommend you do are:
alias k="kubectl"
export do="--dry-run=client -o yaml"
export do="--force --grace-period 0"
Edit ~/.vimrc and include:
set tabstop=2
set expandtab
set shiftwidth=2
Get used to moving between different contexts. Although the CKA questions will give you these commands, it is important that you know them:
k config set-credentials <name> \
--client-certificate=<path to .crt> \
--client-key=<path to .key>
k config set-context <context name> \
--cluster=<cluster name> \
--user=<name>
k config use-context <context name>
Useful documentation:
https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#config
Most probably, you will need to go through objects and filter some data, so you should be familiar with JSONPath. This does not mean you need to memorize everything.
My advice is to keep in mind the basic structure of the objects (e.g., whether a field is a key-value pair or a key-array) and use the following parameter:
-o jsonpath="<filter>"
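For example, a sketch of a filter that lists only the names of all Pods in a namespace (any object field works the same way):
k -n <namespace> get pods -o jsonpath="{.items[*].metadata.name}"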
Useful documentation:
https://kubernetes.io/docs/reference/kubectl/jsonpath/
Know where the configuration files for kubelet are stored:
/var/lib/kubelet/
Know where the admin kubeconfig generated by kubeadm is stored:
/etc/kubernetes/admin.conf
Know where the CNI configuration files are stored:
/etc/cni/net.d/
Know where static pod configurations are stored:
/etc/kubernetes/manifests/
Be comfortable creating and editing YAML files. Through practice, learn the structure of the key resources.
Most of the time, we will be using the dry run option to create our templates. This is why we created the do environment variable at the beginning.
k -n <namespace> run <pod name> --image <image name> $do > pod-example.yaml
k -n <namespace> create deployment <dply name> --image <image name> --replicas <num> $do > dply.yaml
The following structure defines the skeleton of any K8s object:
apiVersion:
kind:
metadata:
spec:
How to see resource usage for nodes, pods, and containers:
k top nodes
k top pods --containers=true
Be comfortable creating Roles, RoleBinding, ClusterRole, ClusterRoleBinding, and ServiceAccount objects.
For these tasks, I would always try to use imperative commands; they are much easier and save time (see the example sequence below).
Remember that Roles and RoleBindings are namespaced API resources.
The most common steps will be:
Create the User or ServiceAccount.
Create the Role or ClusterRole object.
Create the RoleBinding or ClusterRoleBinding.
Test the changes with the auth can-i command.
k auth can-i <verb> <resources> --as=<user/sa> --namespace=<namespace>
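A minimal imperative sequence as a sketch (the namespace, names, and verbs are hypothetical):
k -n <namespace> create serviceaccount <sa name>
k -n <namespace> create role <role name> --verb=get,list,watch --resource=pods
k -n <namespace> create rolebinding <binding name> --role=<role name> --serviceaccount=<namespace>:<sa name>
k auth can-i list pods --as=system:serviceaccount:<namespace>:<sa name> --namespace=<namespace>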
Just a quick tip here.
We can create a DaemonSet object using a Deployment YAML file as a template. We just need to change the kind entry and remove the replicas, strategy, and status entries.
k -n <ns> create deployment <dply name> --image <img name> $do > daemonset.yaml
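After editing, the manifest should look roughly like this (names and labels are placeholders):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: <ds name>
spec:
  selector:
    matchLabels:
      app: <label>
  template:
    metadata:
      labels:
        app: <label>
    spec:
      containers:
      - name: <container name>
        image: <img name>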
Get familiar with crictl to start, stop, inspect, and delete containers.
Useful documentation:
https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/
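Some of the most useful crictl subcommands (container IDs are placeholders):
crictl ps
crictl pods
crictl inspect <container id>
crictl logs <container id>
crictl stop <container id>
crictl rm <container id>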
kube-scheduler is a static Pod defined in the manifests we can find under /etc/kubernetes/manifests by default. We can stop this Pod by simply moving the manifest file from this directory to another one.
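For example, assuming the default kubeadm path and file name:
mv /etc/kubernetes/manifests/kube-scheduler.yaml /tmp/
Moving the file back restores the scheduler.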
The way to schedule a Pod on a specific node is by using the following directive inside the spec part:
nodeName
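In a YAML file (names are placeholders):
spec:
  nodeName: <node name>
  containers:
  - name: <container name>
    image: <image name>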
Keep in mind that nodeSelector, as well as affinity and anti-affinity directives, are actually used by the scheduler to decide on which node (if there is one) our Pod will be scheduled.
We can give directives to the scheduler to choose an appropriate node by using:
nodeSelector, which uses node labels. Path: .spec.nodeSelector. In a YAML file:
spec:
  nodeSelector:
    <label>: <value>
nodeAffinity, which offers more expressive rules. Paths:
.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions[]
.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions[]
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key:
            operator:
            values:
            -
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight:
        preference:
          matchExpressions:
          - key:
            operator:
            values:
            -
Topology spread constraints can be used to control how Pods are spread across your cluster.
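A sketch of the relevant part of a Pod spec (the values are illustrative):
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        <label>: <value>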
Taints and tolerations are also an easy but important topic when we talk about scheduling.
k taint nodes <node name> <key>=<value>:<effect>
In order to remove a taint, we just need to use the same command as above, but with a hyphen appended to the effect:
k taint nodes <node name> <key>=<value>:<effect>-
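For example, with a hypothetical key and value:
k taint nodes <node name> env=prod:NoSchedule
k taint nodes <node name> env=prod:NoSchedule-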
We will include a toleration in a Pod spec by using the following directives:
.spec.tolerations[]. In a YAML file:
spec:
  tolerations:
  - key:
    operator:
    value:
    effect:
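A toleration matching the example taint above would look like this:
spec:
  tolerations:
  - key: "env"
    operator: "Equal"
    value: "prod"
    effect: "NoSchedule"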
Useful documentation:
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/
https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
Suppose a node has never been initialized. An upgrade with kubeadm will fail because there is nothing to upgrade. We just need to focus on the kubelet and kubectl upgrades. After that, it is just a matter of creating a new token to join the node to the cluster with kubeadm.
The steps to upgrade differ depending on whether the node is active or not. For an active node, we need to drain the node first, upgrade it, and then uncordon it to make it available again; see the outline below.
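A rough outline for an active worker node (a sketch; package commands vary by distribution, and the exact versions should come from the official guide):
k drain <node name> --ignore-daemonsets
apt-get update && apt-get install -y kubeadm=<version>
kubeadm upgrade node
apt-get install -y kubelet=<version> kubectl=<version>
systemctl daemon-reload && systemctl restart kubelet
k uncordon <node name>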
Useful documentation:
https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
Static Pods are managed directly by the kubelet on a specific node, without the API server observing them.
If you want to create a static Pod, you need to place the YAML manifest for that Pod inside the default manifest path. This path is configured in the kubelet configuration file (/var/lib/kubelet/config.yaml).
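To confirm the configured directory, a quick check (assuming the default config location):
grep staticPodPath /var/lib/kubelet/config.yaml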
After you have placed your YAML file there, just restart the kubelet service by using:
systemctl restart kubelet
It is very common that you will need to back up and restore etcd. You can do it via etcdctl. The following is an example of doing so through the etcd Pod running in the cluster:
k -n kube-system exec <pod-name> -- /bin/sh -c \
"ETCDCTL_API=3 etcdctl \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
snapshot save /var/lib/etcd/snapshot.db"
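Restoring works the same way with snapshot restore; a sketch (the target data directory is an assumption, and the etcd static Pod manifest must then be pointed at it):
ETCDCTL_API=3 etcdctl snapshot restore /var/lib/etcd/snapshot.db \
  --data-dir=/var/lib/etcd-restore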