Kubernetes CKA Practice

1. RBAC Authorization

Create a ClusterRole named deployment-clusterrole that has permission to create Deployments, StatefulSets and DaemonSets. In the namespace app-team1, create a ServiceAccount named cicd-token, then bind the ClusterRole to that ServiceAccount, limited to the namespace app-team1.

Analysis: You need to know how to create a serviceaccount, a clusterrole and a rolebinding. Since the grant must be limited to a single namespace, a rolebinding is the right choice.

# Create a ServiceAccount named cicd-token in the namespace app-team1

root@k8s-master:~# kubectl create serviceaccount cicd-token -n app-team1
serviceaccount/cicd-token created

# Create a ClusterRole named deployment-clusterrole with permission to create Deployments, StatefulSets and DaemonSets

root@k8s-master:~# kubectl create clusterrole deployment-clusterrole --verb=create --resource=Deployment,Statefulset,Daemonset
clusterrole.rbac.authorization.k8s.io/deployment-clusterrole created

# Bind the ClusterRole to the ServiceAccount, limited to the namespace app-team1

root@k8s-master:~# kubectl create rolebinding cicd-clusterrole --serviceaccount=app-team1:cicd-token --clusterrole=deployment-clusterrole -n app-team1
rolebinding.rbac.authorization.k8s.io/cicd-clusterrole created


root@k8s-master:~# kubectl describe rolebinding cicd-clusterrole -n app-team1
Name: cicd-clusterrole
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: deployment-clusterrole
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount cicd-token app-team1
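
The same objects can also be created declaratively instead of with the imperative commands above. A minimal sketch, assuming the resource names from the task (the binding name cicd-clusterrole mirrors the one used above):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deployment-clusterrole
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "statefulsets", "daemonsets"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cicd-clusterrole
  namespace: app-team1
subjects:
- kind: ServiceAccount
  name: cicd-token
  namespace: app-team1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: deployment-clusterrole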



Reference: https://kubernetes.io/zh-cn/docs/reference/access-authn-authz/rbac/

2. Mark a Node Unschedulable

Mark the node ek8s-node-1 as unschedulable and reschedule all pods running on it.

Analysis: cordon the node, then drain it. DaemonSets must be ignored and local data deleted, otherwise some pods cannot be evicted.

# Mark the node as unschedulable (the practice environment uses k8s-master in place of ek8s-node-1)
root@k8s-master:~# kubectl cordon k8s-master
node/k8s-master cordoned

# Reschedule all pods running on the node
root@k8s-master:~# kubectl drain k8s-master --ignore-daemonsets --delete-emptydir-data --force
node/k8s-master already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-4l4ll, kube-system/kube-proxy-pkz69
evicting pod kube-system/coredns-7f6cbbb7b8-kzfsd
evicting pod kube-system/calico-kube-controllers-6b9fbfff44-xvqxg
evicting pod kube-system/coredns-7f6cbbb7b8-h92ch
pod/calico-kube-controllers-6b9fbfff44-xvqxg evicted
pod/coredns-7f6cbbb7b8-h92ch evicted
pod/coredns-7f6cbbb7b8-kzfsd evicted
node/k8s-master evicted
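
A quick sanity check after the drain (the STATUS column should now include SchedulingDisabled):

root@k8s-master:~# kubectl get node k8s-master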


Reference: https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/safely-drain-node/

3. Upgrade kubeadm

Task:

Upgrade the master node to 1.29.1, making sure to drain the master node first. Do not upgrade the worker nodes, the container manager, etcd, the CNI plugin or DNS.

Analysis: First cordon and drain the master node, then upgrade kubeadm and apply version 1.29.1, and finally upgrade kubelet and kubectl.

# Check the current node versions
root@k8s-master:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready,SchedulingDisabled control-plane,master 568d v1.22.0
k8s-node1 Ready <none> 567d v1.22.0
k8s-node2 Ready <none> 567d v1.22.0

Drain the pods from the master node
root@k8s-master:~# kubectl cordon k8s-master
node/k8s-master already cordoned
root@k8s-master:~# kubectl drain k8s-master --ignore-daemonsets --delete-emptydir-data --force
node/k8s-master already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-4l4ll, kube-system/kube-proxy-pkz69
node/k8s-master drained

root@k8s-master:~# apt-mark unhold kubeadm kubectl kubelet
kubeadm was already not hold.
kubectl was already not hold.
kubelet was already not hold.

root@k8s-master:~# apt-get update


Answer:

Switch context
kubectl get nodes
ssh mk8s-master-0
kubectl cordon mk8s-master-0
kubectl drain mk8s-master-0 --ignore-daemonsets --delete-emptydir-data --force
apt-mark unhold kubeadm kubectl kubelet
apt-cache show kubeadm|grep 1.29.1
apt-get update && apt-get install -y kubeadm=1.29.1-00 && apt-mark hold kubeadm
kubeadm upgrade plan
kubeadm upgrade apply v1.29.1 --etcd-upgrade=false

apt-get update && apt-get install -y kubelet=1.29.1-00 kubectl=1.29.1-00 && apt-mark hold kubelet kubectl
systemctl daemon-reload && systemctl restart kubelet
kubectl uncordon mk8s-master-0
Check the master node status and version
kubectl get node
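
After uncordoning, it is worth verifying both the installed package versions and the version the node reports; a minimal sketch (the exact output depends on the environment):
kubeadm version
kubelet --version
kubectl get nodes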

Reference: https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/


4. Back Up and Restore etcd

Task: Back up the etcd data served at https://127.0.0.1:2379 to /var/lib/backup/etcd-snapshot.db, then restore etcd from the existing file /data/backup/etcd-snapshot-previous.db, using the provided ca.crt, etcd-client.crt and etcd-client.key.

Analysis: Back up etcd to the given path, then restore it from the given snapshot file.

# etcd backup
root@k8s-master:~# export ETCDCTL_API=3
etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/etcd-client.crt --key=/opt/KUIN00601/etcd-client.key snapshot save /var/lib/backup/etcd-snapshot.db

# etcd restore
root@k8s-master:~# etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/etcd-client.crt --key=/opt/KUIN00601/etcd-client.key snapshot restore /data/backup/etcd-snapshot-previous.db
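
Note that snapshot restore works offline and only writes out a new data directory. A common fuller sequence is to restore into an explicit directory and then point the etcd static pod at it; a sketch, assuming /var/lib/etcd-restore as the target directory:

etcdctl snapshot restore /data/backup/etcd-snapshot-previous.db --data-dir=/var/lib/etcd-restore
# then edit /etc/kubernetes/manifests/etcd.yaml so etcd's data hostPath volume points to the restored directory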


Reference: https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/configure-upgrade-etcd/



5. Configure a NetworkPolicy

Task: Set the exam context with kubectl config use-context k8s.

Create a new NetworkPolicy named allow-port-from-namespace that allows Pods in the existing namespace my-app to connect to port 9200 of other pods in the same namespace.

Ensure the new NetworkPolicy:
- does not allow access to pods that are not listening on port 9200
- does not allow access from pods that are not in the namespace my-app

Analysis: Label the my-app namespace, copy the example from the official services-networking/network-policies page and remove the unnecessary parts. Set the policy's namespace to my-app, the port to 9200, and make the namespaceSelector match the labels of the source namespace my-app.

Switch to the exam context (the exam has several environments; answer each task in its own context)
kubectl config use-context k8s

root@k8s-master:~# kubectl get ns --show-labels
NAME STATUS AGE LABELS
app-team1 Active 553d kubernetes.io/metadata.name=app-team1
default Active 568d kubernetes.io/metadata.name=default
fubar Active 553d kubernetes.io/metadata.name=fubar
ing-internal Active 553d kubernetes.io/metadata.name=ing-internal
ingress-nginx Active 553d app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,kubernetes.io/metadata.name=ingress-nginx
kube-node-lease Active 568d kubernetes.io/metadata.name=kube-node-lease
kube-public Active 568d kubernetes.io/metadata.name=kube-public
kube-system Active 568d kubernetes.io/metadata.name=kube-system
my-app Active 553d kubernetes.io/metadata.name=my-app,name=my-app

Label the my-app namespace
kubectl label ns my-app project=my-app

root@k8s-master:~# vi networkpolicy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: my-app
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: my-app
    ports:
    - protocol: TCP
      port: 9200



root@k8s-master:~# kubectl apply -f networkpolicy.yaml
networkpolicy.networking.k8s.io/allow-port-from-namespace created
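
A quick check that the policy was created in the right namespace with the expected rule:

root@k8s-master:~# kubectl describe networkpolicy allow-port-from-namespace -n my-app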



Reference: https://kubernetes.io/zh-cn/docs/concepts/services-networking/network-policies/



6. Create a Service

Task: Reconfigure the existing deployment front-end, adding a port named http that exposes 80/TCP. Then create a service named front-end-svc that exposes the container's http port, with the service type set to NodePort.

Analysis: Edit the deployment to add the port information, then use the expose command to publish the port as a NodePort service.

root@k8s-master:~# kubectl edit deployment front-end
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: front-end
  name: front-end
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: front-end
  template:
    metadata:
      labels:
        app: front-end
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: http
          protocol: TCP

# In the editor, add the following ports section under the nginx container:
        ports:
        - containerPort: 80
          name: http
          protocol: TCP




root@k8s-master:~# kubectl expose deployment front-end --type=NodePort --port=80 --target-port=http --name=front-end-svc
service/front-end-svc exposed

Verification:
kubectl describe svc front-end-svc
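
For reference, the service created by kubectl expose above is equivalent to a manifest along these lines (a sketch; the nodePort is left for the cluster to allocate):
apiVersion: v1
kind: Service
metadata:
  name: front-end-svc
spec:
  type: NodePort
  selector:
    app: front-end
  ports:
  - name: http
    port: 80
    targetPort: http
    protocol: TCP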


Reference: https://kubernetes.io/zh-cn/docs/concepts/services-networking/service/



7. Create an Ingress Resource

Task: Create a new Ingress resource named pong in the namespace ing-internal that exposes port 5678 of the service hello via the path /hello.

Analysis: Copy the yaml example from the official docs and adjust the parameters. Once created, test it with curl against the /hello path.

root@k8s-master:~# vi ingress.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pong
  namespace: ing-internal
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /hello
        pathType: Prefix
        backend:
          service:
            name: hello
            port:
              number: 5678


root@k8s-master:~# kubectl get ingress -n ing-internal
NAME CLASS HOSTS ADDRESS PORTS AGE
pong <none> * 192.168.123.151,192.168.123.152 80 57s
root@k8s-master:~# curl -kl 192.168.123.151/hello
hello


Reference: https://kubernetes.io/zh-cn/docs/concepts/services-networking/ingress/


8. Scale a Deployment

Task: Scale the deployment guestbook to 6 pods.

Analysis: Simply set replicas to 6. An easy one.

root@k8s-master:~# kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
front-end 1/1 1 1 553d
guestbook 2/2 2 2 553d
nfs-client-provisioner 1/1 1 1 553d
root@k8s-master:~# kubectl scale deployment --replicas=6 guestbook
deployment.apps/guestbook scaled


root@k8s-master:~# kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
front-end 1/1 1 1 553d
guestbook 6/6 6 6 553d
nfs-client-provisioner 1/1 1 1 553d

Reference: https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/deployment/


9. Schedule a Pod to a Specific Node

Task: Create a pod named nginx-kusc00401 with the image nginx and schedule it to a node labeled disk=ssd.

Analysis: Copy the example from the official docs, change the pod name and image, and remove the extra parts. An easy one.

Switch to the exam context
root@k8s-master:~# kubectl config use-context k8s

root@k8s-master:~# vi nginx-kusc00401.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00401
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    disk: ssd



root@k8s-master:~# kubectl apply -f nginx-kusc00401.yaml




Reference: https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/assign-pods-nodes/



10. Count the Nodes That Are Ready

Task: Check how many nodes are Ready (excluding nodes tainted NoSchedule) and write that number to /opt/KUSC00402/kusc00402.txt.

Analysis: Count the Ready nodes, filter out the NoSchedule-tainted nodes with describe node, and write the resulting number to the given file. An easy one.

Get the number of Ready nodes (a)
root@k8s-master:~# kubectl get nodes | grep -w Ready | wc -l
2

Get the number of NoSchedule-tainted nodes (b)
root@k8s-master:~# kubectl describe nodes | grep Taints
Taints: node-role.kubernetes.io/master:NoSchedule
Taints: <none>
Taints: <none>

Write the result of a - b to the target file
root@k8s-master:~# echo 1 > /opt/KUSC00402/kusc00402.txt
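
The two counts can also be combined into one small script; a sketch, assuming every NoSchedule-tainted node also shows up as Ready:

ready=$(kubectl get nodes --no-headers | grep -cw Ready)                  # a
noschedule=$(kubectl describe nodes | grep Taints | grep -ci NoSchedule)  # b
echo $((ready - noschedule)) > /opt/KUSC00402/kusc00402.txt               # a - b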

Reference: https://kubernetes.io/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration/

11. Create a Pod with Multiple Containers

Task: Create a pod named kucc4 that runs one app container for each of the following images (there may be 1-4 images): nginx + redis + memcached + consul

root@k8s-master:~# vi kucc4.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: kucc4
spec:
  containers:
  - name: nginx
    image: nginx
  - name: redis
    image: redis
  - name: memcached
    image: memcached
  - name: consul
    image: consul

root@k8s-master:~# kubectl apply -f kucc4.yaml
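
A quick check that all four containers start (READY should eventually show 4/4):
root@k8s-master:~# kubectl get pod kucc4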


Reference: https://kubernetes.io/zh-cn/docs/concepts/workloads/pods/

12. Create a PV

Task: Create a PersistentVolume named app-data with a capacity of 1Gi and access mode ReadWriteMany. The volume type is hostPath and its location is /srv/app-data.

Analysis: Copy a suitable example from the official docs, adjust the parameters, and set the hostPath to /srv/app-data.

root@k8s-master:~# vi app-data.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-data
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: "/srv/app-data"

root@k8s-master:~# kubectl apply -f app-data.yaml

root@k8s-master:~# kubectl get pv


Reference: https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/configure-persistent-volume-storage/

13. Create and Use a PVC

Task: Create a new PersistentVolumeClaim:

  • name: pvvolume
  • class: csi-hostpath-sc
  • capacity: 10Mi

Create a new pod that mounts the PersistentVolumeClaim as a volume:

  • name: web-server
  • image: nginx
  • mount path: /usr/share/nginx/html
    Configure the new pod so the volume has ReadWriteOnce access.
    Finally, use kubectl edit or kubectl patch to expand the PersistentVolumeClaim's capacity to 70Mi and record the change.

Analysis: Copy a PVC from the official docs and adjust the parameters, copy the official nginx pod example, then add the volumeMounts and volumes.

# Create the PVC
root@k8s-master:~# vi pvvolume.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvvolume
spec:
  storageClassName: csi-hostpath-sc
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi

root@k8s-master:~# kubectl apply -f pvvolume.yaml

# Create the pod
root@k8s-master:~# vi web-server.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  volumes:
  - name: pvvolume
    persistentVolumeClaim:
      claimName: pvvolume
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: pvvolume

root@k8s-master:~# kubectl apply -f web-server.yaml

Change storage to 70Mi
root@k8s-master:~# kubectl edit pvc pvvolume --save-config
  resources:
    requests:
      storage: 70Mi
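
kubectl patch is an equally valid way to make the same change; a sketch (whether the legacy --record flag is still accepted depends on the kubectl version, so it is omitted here):

kubectl patch pvc pvvolume -p '{"spec":{"resources":{"requests":{"storage":"70Mi"}}}}'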



Reference: https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/configure-persistent-volume-storage/

14. Monitor a Pod's Logs

Task: Monitor the logs of the pod foo, extract the log lines containing unable-to-access-website, and write them to /opt/KUTR00101/foo.

Analysis: Get the logs with kubectl logs and narrow them down with grep. An easy one.

root@k8s-master:~# kubectl logs foo |grep unable-to-access-website > /opt/KUTR00101/foo


15. Add a Sidecar Container That Streams Logs

Task: Integrate an existing Pod into Kubernetes' built-in logging architecture (for example kubectl logs). Adding a streaming sidecar container is a good way to do this. Using the busybox image, add a sidecar container named sidecar to the existing Pod 11-factor-app. The new sidecar container must run the following command: /bin/sh -c tail -n+1 -f /var/log/11-factor-app.log

Mount the /var/log/ directory with a volume and make sure the sidecar can access /var/log/11-factor-app.log.

Analysis: Back up the original pod definition with kubectl get pod -o yaml and delete the old 11-factor-app pod. In a copy of the yaml, add a container named sidecar and an emptyDir volume, make sure both containers mount /var/log, recreate the pod, and verify with kubectl logs.

root@k8s-master:~# kubectl get pod 11-factor-app -oyaml >factor-app.yaml

root@k8s-master:~# vim factor-app.yaml
apiVersion: v1
kind: Pod
metadata:
  name: 11-factor-app
spec:
  containers:
  - name: count
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$i: $(date)" >> /var/log/11-factor-app.log;
        i=$((i+1));
        sleep 1;
      done
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: sidecar
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/11-factor-app.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}


root@k8s-master:~# kubectl delete pod 11-factor-app
pod "11-factor-app" deleted

root@k8s-master:~# kubectl apply -f factor-app.yaml
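
To verify the sidecar is streaming the application log:
root@k8s-master:~# kubectl logs 11-factor-app -c sidecar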



Reference: https://kubernetes.io/zh-cn/docs/concepts/cluster-administration/logging/

16. Find the Pod with the Highest CPU Usage

Task: Using the pod label name=cpu-user, find the pod with the highest CPU usage and write its name to the (already existing) file /opt/KUTR000401/KUTR00401.txt.

Analysis: Use kubectl top with -l label_key=label_value and --sort-by=cpu to find the target.

root@k8s-master:~# kubectl top pod -l name=cpu-user -A --sort-by='cpu' >>/opt/KUTR00401/KUTR00401.txt
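
Strictly speaking the target file should contain only the pod name, not the whole table, so it is safer to look at the sorted output first and then write just the name of the top entry (a sketch; <top-pod-name> is a placeholder):

kubectl top pod -l name=cpu-user -A --sort-by=cpu
echo "<top-pod-name>" > /opt/KUTR00401/KUTR00401.txt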


17. Troubleshoot a Broken Node

Task: The node wk8s-node-0 is NotReady. Find the cause, bring it back to Ready, and make sure the fix is persistent.

Analysis: Find the faulty node with get nodes, log in to it, check the status of kubelet and the other components to determine the cause, then start kubelet and enable it.

Switch context
kubectl get nodes
ssh wk8s-node-0
sudo -i
systemctl restart kubelet
systemctl status kubelet
systemctl enable kubelet


root@k8s-master:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready,SchedulingDisabled control-plane,master 569d v1.22.0
k8s-node1 Ready <none> 569d v1.22.0
k8s-node2 Ready <none> 569d v1.22.0
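
If restarting kubelet alone does not bring the node back, the kubelet logs usually explain why it is failing; a sketch of the typical checks on the broken node (exact output depends on the environment):
journalctl -u kubelet --no-pager | tail -n 50   # recent kubelet errors
systemctl enable --now kubelet                  # start kubelet and keep the fix persistent across reboots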