I recently put together these practice questions while preparing for the CKA certification. I hope they help anyone who needs them!
You have access to multiple clusters from your main terminal through kubectl contexts. Write all those context names into /opt/course/1/contexts.
Next write a command to display the current context into /opt/course/1/context_default_kubectl.sh, the command should use kubectl.
Finally write a second command doing the same thing into /opt/course/1/context_default_no_kubectl.sh, but without the use of kubectl.
### Answer ###
k config get-contexts -o name > /opt/course/1/contexts
# /opt/course/1/contexts
k8s-c1-H
k8s-c2-AC
k8s-c3-CCC
# /opt/course/1/context_default_kubectl.sh
kubectl config current-context
# /opt/course/1/context_default_no_kubectl.sh
cat ~/.kube/config | grep current
Use context: kubectl config use-context k8s-c1-H
Create a single Pod of image httpd:2.4.41-alpine in Namespace default. The Pod should be named pod1 and the container should be named pod1-container. This Pod should only be scheduled on controlplane nodes. Do not add new labels to any nodes.
### Answer ###
# First we find the controlplane node(s) and their taints:
k get node # find controlplane node
k describe node cluster1-controlplane1 | grep Taint -A1 # get controlplane node taints
k get node cluster1-controlplane1 --show-labels # get controlplane node labels
# Next we create the Pod template:
k run pod1 --image=httpd:2.4.41-alpine $do > 2.yaml
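# Note: $do is assumed throughout these answers to be a shell variable holding the usual dry-run flags, e.g. set once per session:
export do="--dry-run=client -o yaml"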
vim 2.yaml
# 2.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: pod1
name: pod1
spec:
containers:
- image: httpd:2.4.41-alpine
name: pod1-container # change
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
tolerations: # add
- effect: NoSchedule # add
key: node-role.kubernetes.io/control-plane # add
nodeSelector: # add
node-role.kubernetes.io/control-plane: "" # add
status: {}
# Now we create it:
k -f 2.yaml create
k get pod pod1 -o wide
Use context: kubectl config use-context k8s-c1-H
There are two Pods named o3db-* in Namespace project-c13. C13 Management asked you to scale the Pods down to one replica to save resources.
### Answer ###
k -n project-c13 get deploy,ds,sts | grep o3db
k -n project-c13 scale sts o3db --replicas 1
Use context: kubectl config use-context k8s-c1-H
Do the following in Namespace default. Create a single Pod named ready-if-service-ready of image nginx:1.16.1-alpine. Configure a LivenessProbe which simply executes command true. Also configure a ReadinessProbe which checks if the url http://service-am-i-ready:80 is reachable, you can use wget -T2 -O- http://service-am-i-ready:80 for this. Start the Pod and confirm it isn't ready because of the ReadinessProbe.
Create a second Pod named am-i-ready of image nginx:1.16.1-alpine with label id: cross-server-ready. The already existing Service service-am-i-ready should now have that second Pod as endpoint.
Now the first Pod should be in ready state, confirm that.
### Answer ###
# First we create the first Pod:
k run ready-if-service-ready --image=nginx:1.16.1-alpine $do > 4_pod1.yaml
vim 4_pod1.yaml
# 4_pod1.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: ready-if-service-ready
name: ready-if-service-ready
spec:
containers:
- image: nginx:1.16.1-alpine
name: ready-if-service-ready
resources: {}
livenessProbe: # add from here
exec:
command:
- 'true'
readinessProbe:
exec:
command:
- sh
- -c
- 'wget -T2 -O- http://service-am-i-ready:80' # to here
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}
# Then create the Pod and confirm it's in a non-ready state:
k -f 4_pod1.yaml create
k describe pod ready-if-service-ready
# Now we create the second Pod:
k run am-i-ready --image=nginx:1.16.1-alpine --labels="id=cross-server-ready"
# The already existing Service service-am-i-ready should now have an Endpoint:
k describe svc service-am-i-ready
k get ep # also possible
# Which will result in our first Pod being ready, just give it a minute for the Readiness probe to check again:
k get pod ready-if-service-ready
Use context: kubectl config use-context k8s-c1-H
There are various Pods in all namespaces. Write a command into /opt/course/5/find_pods.sh which lists all Pods sorted by their AGE (metadata.creationTimestamp).
Write a second command into /opt/course/5/find_pods_uid.sh which lists all Pods sorted by field metadata.uid. Use kubectl sorting for both commands.
### Answer ###
# /opt/course/5/find_pods.sh
kubectl get pod -A --sort-by=.metadata.creationTimestamp
# /opt/course/5/find_pods_uid.sh
kubectl get pod -A --sort-by=.metadata.uid
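# Optional sanity check that both scripts run and produce sorted output:
sh /opt/course/5/find_pods.sh
sh /opt/course/5/find_pods_uid.sh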
Use context: kubectl config use-context k8s-c1-H
Create a new PersistentVolume named safari-pv. It should have a capacity of 2Gi, accessMode ReadWriteOnce, hostPath /Volumes/Data and no storageClassName defined.
Next create a new PersistentVolumeClaim in Namespace project-tiger named safari-pvc. It should request 2Gi storage, accessMode ReadWriteOnce and should not define a storageClassName. The PVC should be bound to the PV correctly.
Finally create a new Deployment safari in Namespace project-tiger which mounts that volume at /tmp/safari-data. The Pods of that Deployment should be of image httpd:2.4.41-alpine.
### Answer ###
# 6_pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
name: safari-pv
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/Volumes/Data"
k -f 6_pv.yaml create
# 6_pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: safari-pvc
namespace: project-tiger
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
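# Apply the PVC and confirm it gets Bound to the PV:
k -f 6_pvc.yaml create
k -n project-tiger get pv,pvc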
# Next we create a Deployment and mount that volume:
k -n project-tiger create deploy safari --image=httpd:2.4.41-alpine $do > 6_dep.yaml
vim 6_dep.yaml
# 6_dep.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: safari
name: safari
namespace: project-tiger
spec:
replicas: 1
selector:
matchLabels:
app: safari
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: safari
spec:
volumes: # add
- name: data # add
persistentVolumeClaim: # add
claimName: safari-pvc # add
containers:
- image: httpd:2.4.41-alpine
name: container
volumeMounts: # add
- name: data # add
mountPath: /tmp/safari-data # add
k -f 6_dep.yaml create
# confirm it's mounting correctly:
k -n project-tiger describe pod safari-5cbf46d6d-mjhsb | grep -A2 Mounts:
Use context: kubectl config use-context k8s-c1-H
The metrics-server has been installed in the cluster. Your colleague would like to know the kubectl commands to:
show Nodes resource usage
show Pods and their containers resource usage
Please write the commands into /opt/course/7/node.sh and /opt/course/7/pod.sh.
### Answer ###
# /opt/course/7/node.sh
kubectl top node
# /opt/course/7/pod.sh
kubectl top pod --containers=true
Use context: kubectl config use-context k8s-c1-H
Ssh into the controlplane node with ssh cluster1-controlplane1. Check how the controlplane components kubelet, kube-apiserver, kube-scheduler, kube-controller-manager and etcd are started/installed on the controlplane node.
Also find out the name of the DNS application and how it's started/installed in the cluster.
Write your findings into file /opt/course/8/controlplane-components.txt. The file should be structured like:
# /opt/course/8/controlplane-components.txt
kubelet: [TYPE]
kube-apiserver: [TYPE]
kube-scheduler: [TYPE]
kube-controller-manager: [TYPE]
etcd: [TYPE]
dns: [TYPE] [NAME]
Choices of [TYPE] are: not-installed, process, static-pod, pod
### Answer ###
# shows kubelet process
ps aux | grep kubelet
# The manifests directory shows that the main 4 controlplane services are set up as static Pods:
find /etc/kubernetes/manifests/
kubectl -n kube-system get pod -o wide | grep controlplane1
# Seems like coredns is controlled via a Deployment.
kubectl -n kube-system get deploy
# /opt/course/8/controlplane-components.txt
kubelet: process
kube-apiserver: static-pod
kube-scheduler: static-pod
kube-controller-manager: static-pod
etcd: static-pod
dns: pod coredns
Use context: kubectl config use-context k8s-c2-AC
Ssh into the controlplane node with ssh cluster2-controlplane1. Temporarily stop the kube-scheduler, this means in a way that you can start it again afterwards.
Create a single Pod named manual-schedule of image httpd:2.4-alpine, confirm it's created but not scheduled on any node.
Now you're the scheduler and have all its power, manually schedule that Pod on node cluster2-controlplane1. Make sure it's running.
Start the kube-scheduler again and confirm it's running correctly by creating a second Pod named manual-schedule2 of image httpd:2.4-alpine and check if it's running on cluster2-node1.
### Answer ###
# Stop the Scheduler
ssh cluster2-controlplane1
kubectl -n kube-system get pod | grep schedule
cd /etc/kubernetes/manifests/
mv kube-scheduler.yaml ..
# Create a Pod and confirm it has no node assigned
k run manual-schedule --image=httpd:2.4-alpine
k get pod manual-schedule -o wide
# Manually schedule the Pod
k get pod manual-schedule -o yaml > 9.yaml
# 9.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2020-09-04T15:51:02Z"
labels:
run: manual-schedule
managedFields:
...
manager: kubectl-run
operation: Update
time: "2020-09-04T15:51:02Z"
name: manual-schedule
namespace: default
resourceVersion: "3515"
selfLink: /api/v1/namespaces/default/pods/manual-schedule
uid: 8e9d2532-4779-4e63-b5af-feb82c74a935
spec:
nodeName: cluster2-controlplane1 # add the controlplane node name
containers:
- image: httpd:2.4-alpine
imagePullPolicy: IfNotPresent
name: manual-schedule
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-nxnc7
readOnly: true
dnsPolicy: ClusterFirst
...
# Replace the Pod using the edited manifest and confirm it's running on the controlplane:
k -f 9.yaml replace --force
k get pod manual-schedule -o wide
# Start the scheduler again
ssh cluster2-controlplane1
cd /etc/kubernetes/manifests/
mv ../kube-scheduler.yaml .
# Schedule a second test Pod to confirm everything is back to normal
k run manual-schedule2 --image=httpd:2.4-alpine
k get pod -o wide | grep schedule
Use context: kubectl config use-context k8s-c1-H
Create a new ServiceAccount processor in Namespace project-hamster. Create a Role and RoleBinding, both named processor as well. These should allow the new SA to only create Secrets and ConfigMaps in that Namespace.
### Answer ###
# Role + RoleBinding (available in single Namespace, applied in single Namespace)
# ClusterRole + ClusterRoleBinding (available cluster-wide, applied cluster-wide)
# ClusterRole + RoleBinding (available cluster-wide, applied in single Namespace)
# Role + ClusterRoleBinding (NOT POSSIBLE: available in single Namespace, applied cluster-wide)
# We first create the ServiceAccount
k -n project-hamster create sa processor
# Then for the Role
k -n project-hamster create role processor --verb=create --resource=secret --resource=configmap
# Now we bind the Role to the ServiceAccount
k -n project-hamster create rolebinding processor --role processor --serviceaccount project-hamster:processor
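# Optionally verify the permissions by impersonating the ServiceAccount with kubectl auth can-i:
k -n project-hamster auth can-i create secret --as system:serviceaccount:project-hamster:processor # yes
k -n project-hamster auth can-i create configmap --as system:serviceaccount:project-hamster:processor # yes
k -n project-hamster auth can-i delete secret --as system:serviceaccount:project-hamster:processor # no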
Use context: kubectl config use-context k8s-c1-H
Use Namespace project-tiger for the following. Create a DaemonSet named ds-important with image httpd:2.4-alpine and labels id=ds-important and uuid=18426a0b-5f59-4e10-923f-c0e078e82462. The Pods it creates should request 10 millicore cpu and 10 mebibyte memory. The Pods of that DaemonSet should run on all nodes, also controlplanes.
### Answer ###
k -n project-tiger create deployment --image=httpd:2.4-alpine ds-important $do > 11.yaml
vim 11.yaml
# 11.yaml
apiVersion: apps/v1
kind: DaemonSet # change from Deployment to Daemonset
metadata:
creationTimestamp: null
labels: # add
id: ds-important # add
uuid: 18426a0b-5f59-4e10-923f-c0e078e82462 # add
name: ds-important
namespace: project-tiger # important
spec:
#replicas: 1 # remove
selector:
matchLabels:
id: ds-important # add
uuid: 18426a0b-5f59-4e10-923f-c0e078e82462 # add
#strategy: {} # remove
template:
metadata:
creationTimestamp: null
labels:
id: ds-important # add
uuid: 18426a0b-5f59-4e10-923f-c0e078e82462 # add
spec:
containers:
- image: httpd:2.4-alpine
name: ds-important
resources:
requests: # add
cpu: 10m # add
memory: 10Mi # add
tolerations: # add
- effect: NoSchedule # add
key: node-role.kubernetes.io/control-plane # add
#status: {} # remove
# It was requested that the DaemonSet runs on all nodes, hence the toleration for the controlplane taint above. Now create and verify it:
k -f 11.yaml create
k -n project-tiger get ds
k -n project-tiger get pod -l id=ds-important -o wide
Use context: kubectl config use-context k8s-c1-H
Implement the following in Namespace project-tiger:
Create a Deployment named deploy-important with 3 replicas
The Deployment and its Pods should have label id=very-important
It should have two containers: the first named container1 with image nginx:1.17.6-alpine, the second named container2 with image google/pause
There should only ever be one Pod of that Deployment running on one worker node, use topologyKey: kubernetes.io/hostname for this
### Answer ###
# PodAntiAffinity
k -n project-tiger create deployment --image=nginx:1.17.6-alpine deploy-important $do > 12.yaml
vim 12.yaml
# 12.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
id: very-important # change
name: deploy-important
namespace: project-tiger # important
spec:
replicas: 3 # change
selector:
matchLabels:
id: very-important # change
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
id: very-important # change
spec:
containers:
- image: nginx:1.17.6-alpine
name: container1 # change
resources: {}
- image: google/pause # add
name: container2 # add
affinity: # add
podAntiAffinity: # add
requiredDuringSchedulingIgnoredDuringExecution: # add
- labelSelector: # add
matchExpressions: # add
- key: id # add
operator: In # add
values: # add
- very-important # add
topologyKey: kubernetes.io/hostname # add
status: {}
# Apply and Run
k -f 12.yaml create
k -n project-tiger get deploy -l id=very-important
k -n project-tiger get pod -o wide -l id=very-important
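# Note: assuming this cluster has only two schedulable worker nodes, one of the three replicas is expected to stay Pending because of the podAntiAffinity rule.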
Use context: kubectl config use-context k8s-c1-H
Create a Pod named multi-container-playground in Namespace default with three containers, named c1, c2 and c3. There should be a volume attached to that Pod and mounted into every container, but the volume shouldn't be persisted or shared with other Pods.
Container c1 should be of image nginx:1.17.6-alpine and have the name of the node where its Pod is running available as environment variable MY_NODE_NAME.
Container c2 should be of image busybox:1.31.1 and write the output of the date command every second in the shared volume into file date.log. You can use while true; do date >> /your/vol/path/date.log; sleep 1; done for this.
Container c3 should be of image busybox:1.31.1 and constantly send the content of file date.log from the shared volume to stdout. You can use tail -f /your/vol/path/date.log for this.
Check the logs of container c3 to confirm correct setup.
### Answer ###
# First we create the Pod template
k run multi-container-playground --image=nginx:1.17.6-alpine $do > 13.yaml
vim 13.yaml
# 13.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: multi-container-playground
name: multi-container-playground
spec:
containers:
- image: nginx:1.17.6-alpine
name: c1 # change
resources: {}
env: # add
- name: MY_NODE_NAME # add
valueFrom: # add
fieldRef: # add
fieldPath: spec.nodeName # add
volumeMounts: # add
- name: vol # add
mountPath: /vol # add
- image: busybox:1.31.1 # add
name: c2 # add
command: ["sh", "-c", "while true; do date >> /vol/date.log; sleep 1; done"] # add
volumeMounts: # add
- name: vol # add
mountPath: /vol # add
- image: busybox:1.31.1 # add
name: c3 # add
command: ["sh", "-c", "tail -f /vol/date.log"] # add
volumeMounts: # add
- name: vol # add
mountPath: /vol # add
dnsPolicy: ClusterFirst
restartPolicy: Always
volumes: # add
- name: vol # add
emptyDir: {} # add
status: {}
# Then, execute the following command
k -f 13.yaml create
k get pod multi-container-playground
k exec multi-container-playground -c c1 -- env | grep MY
k logs multi-container-playground -c c3
Use context: kubectl config use-context k8s-c1-H
You're asked to find out the following information about the cluster k8s-c1-H:
How many controlplane nodes are available?
How many worker nodes are available?
What is the Service CIDR?
Which Networking (or CNI Plugin) is configured and where is its config file?
Which suffix will static pods have that run on cluster1-node1?
Write your answers into file /opt/course/14/cluster-info, structured like this:
# /opt/course/14/cluster-info
1: [ANSWER]
2: [ANSWER]
3: [ANSWER]
4: [ANSWER]
5: [ANSWER]
### Answer ###
# How many controlplane and worker nodes are available?
k get node
# What is the Service CIDR?
ssh cluster1-controlplane1
cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep range
# Which Networking (or CNI Plugin) is configured and where is its config file?
find /etc/cni/net.d/
cat /etc/cni/net.d/10-weave.conflist
# Which suffix will static pods have that run on cluster1-node1?
# The suffix is the node hostname with a leading hyphen. It used to be -static in earlier Kubernetes versions.
# /opt/course/14/cluster-info
# How many controlplane nodes are available?
1: 1
# How many worker nodes are available?
2: 2
# What is the Service CIDR?
3: 10.96.0.0/12
# Which Networking (or CNI Plugin) is configured and where is its config file?
4: Weave, /etc/cni/net.d/10-weave.conflist
# Which suffix will static pods have that run on cluster1-node1?
5: -cluster1-node1
Use context: kubectl config use-context k8s-c2-AC
Write a command into /opt/course/15/cluster_events.sh which shows the latest events in the whole cluster, ordered by time (metadata.creationTimestamp). Use kubectl for it.
Now delete the kube-proxy Pod running on node cluster2-node1 and write the events this caused into /opt/course/15/pod_kill.log.
Finally kill the containerd container of the kube-proxy Pod on node cluster2-node1 and write the events into /opt/course/15/container_kill.log.
Do you notice differences in the events both actions caused?
### Answer ###
# /opt/course/15/cluster_events.sh
kubectl get events -A --sort-by=.metadata.creationTimestamp
# Now we delete the kube-proxy Pod
k -n kube-system get pod -o wide | grep proxy
k -n kube-system delete pod kube-proxy-z64cg
# Now check the events
sh /opt/course/15/cluster_events.sh
# Write the events the killing caused into /opt/course/15/pod_kill.log:
# /opt/course/15/pod_kill.log
kube-system 9s Normal Killing pod/kube-proxy-jsv7t ...
kube-system 3s Normal SuccessfulCreate daemonset/kube-proxy ...
kube-system Normal Scheduled pod/kube-proxy-m52sx ...
default 2s Normal Starting node/cluster2-node1 ...
kube-system 2s Normal Created pod/kube-proxy-m52sx ...
kube-system 2s Normal Pulled pod/kube-proxy-m52sx ...
kube-system 2s Normal Started pod/kube-proxy-m52sx ...
# Finally we try to provoke events by killing the container belonging to the kube-proxy Pod
ssh cluster2-node1
crictl ps | grep kube-proxy
crictl rm 1e020b43c4423
crictl ps | grep kube-proxy
# Now we see if this caused events again and we write those into the second file
sh /opt/course/15/cluster_events.sh
# /opt/course/15/container_kill.log
kube-system 13s Normal Created pod/kube-proxy-m52sx ...
kube-system 13s Normal Pulled pod/kube-proxy-m52sx ...
kube-system 13s Normal Started pod/kube-proxy-m52sx ...
Use context: kubectl config use-context k8s-c1-H
Write the names of all namespaced Kubernetes resources (like Pod, Secret, ConfigMap...) into /opt/course/16/resources.txt.
Find the project-* Namespace with the highest number of Roles defined in it and write its name and amount of Roles into /opt/course/16/crowded-namespace.txt.
### Answer ###
# All namespaced resources
k api-resources --namespaced -o name > /opt/course/16/resources.txt
# Namespace with most Roles
k -n project-c13 get role --no-headers | wc -l
k -n project-c14 get role --no-headers | wc -l
k -n project-hamster get role --no-headers | wc -l
k -n project-snake get role --no-headers | wc -l
k -n project-tiger get role --no-headers | wc -l
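# The same counting can also be done in one loop (a small sketch, assuming the project-* Namespaces listed above):
for ns in $(k get ns -o name | grep project- | cut -d/ -f2); do echo "$ns: $(k -n $ns get role --no-headers | wc -l)"; done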
# Finally we write the name and amount into the file
# /opt/course/16/crowded-namespace.txt
project-c14 with 300 Roles
Use context: kubectl config use-context k8s-c1-H
In Namespace project-tiger create a Pod named tigers-reunite of image httpd:2.4.41-alpine with labels pod=container and container=pod. Find out on which node the Pod is scheduled. Ssh into that node and find the containerd container belonging to that Pod.
Using command crictl:
Write the ID of the container and the info.runtimeType into /opt/course/17/pod-container.txt
Write the logs of the container into /opt/course/17/pod-container.log
### Answer ###
# First we create the Pod
k -n project-tiger run tigers-reunite --image=httpd:2.4.41-alpine --labels "pod=container,container=pod"
# Next we find out the node it's scheduled on
k -n project-tiger get pod -o wide
# Then we ssh into that node and check the container info
ssh cluster1-node2
crictl ps | grep tigers-reunite
crictl inspect b01edbe6f89ed | grep runtimeType
# Then we fill the requested file (on the main terminal):
# /opt/course/17/pod-container.txt
b01edbe6f89ed io.containerd.runc.v2
# Finally we write the container logs in the second file
ssh cluster1-node2 'crictl logs b01edbe6f89ed' &> /opt/course/17/pod-container.log
Use context: kubectl config use-context k8s-c3-CCC
There seems to be an issue with the kubelet not running on cluster3-node1. Fix it and confirm that the cluster has node cluster3-node1 available in Ready state afterwards. You should be able to schedule a Pod on cluster3-node1 afterwards.
Write the reason of the issue into /opt/course/18/reason.txt.
### Answer ###
# Check node status
k get node
# First we check if the kubelet is running
ssh cluster3-node1
ps aux | grep kubelet
service kubelet status
# Let's try to start kubelet
service kubelet start
service kubelet status
# We see it's trying to execute /usr/local/bin/kubelet with some parameters defined in its service config file. A good way to find errors and get more logs is to run the command manually (usually also with its parameters).
/usr/local/bin/kubelet
whereis kubelet
# There we have it: a wrong path is specified. Correct the path in /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf and run:
vim /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf # fix binary path
systemctl daemon-reload
service kubelet restart
service kubelet status # should now show running
# Finally we write the reason into the file
# /opt/course/18/reason.txt
wrong path to kubelet binary specified in service config
Use context: kubectl config use-context k8s-c3-CCC
Do the following in a new Namespace secret. Create a Pod named secret-pod of image busybox:1.31.1 which should keep running for some time.
There is an existing Secret located at /opt/course/19/secret1.yaml, create it in the Namespace secret and mount it readonly into the Pod at /tmp/secret1.
Create a new Secret in Namespace secret called secret2 which should contain user=user1 and pass=1234. These entries should be available inside the Pod's container as environment variables APP_USER and APP_PASS.
Confirm everything is working.
### Answer ###
# First we create the Namespace and the requested Secrets in it:
k create ns secret
cp /opt/course/19/secret1.yaml 19_secret1.yaml
vim 19_secret1.yaml
# We need to adjust the Namespace for that Secret
# 19_secret1.yaml
apiVersion: v1
data:
halt: IyEgL2Jpbi9zaAo...
kind: Secret
metadata:
creationTimestamp: null
name: secret1
namespace: secret # change
k -f 19_secret1.yaml create
# Next we create the second Secret:
k -n secret create secret generic secret2 --from-literal=user=user1 --from-literal=pass=1234
# Now we create the Pod template:
k -n secret run secret-pod --image=busybox:1.31.1 $do -- sh -c "sleep 1d" > 19.yaml
vim 19.yaml
# Then make the necessary changes:
# 19.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: secret-pod
name: secret-pod
namespace: secret # add
spec:
containers:
- args:
- sh
- -c
- sleep 1d
image: busybox:1.31.1
name: secret-pod
resources: {}
env: # add
- name: APP_USER # add
valueFrom: # add
secretKeyRef: # add
name: secret2 # add
key: user # add
- name: APP_PASS # add
valueFrom: # add
secretKeyRef: # add
name: secret2 # add
key: pass # add
volumeMounts: # add
- name: secret1 # add
mountPath: /tmp/secret1 # add
readOnly: true # add
dnsPolicy: ClusterFirst
restartPolicy: Always
volumes: # add
- name: secret1 # add
secret: # add
secretName: secret1 # add
status: {}
# And execute:
k -f 19.yaml create
Use context: kubectl config use-context k8s-c3-CCC
Your coworker said node cluster3-node2 is running an older Kubernetes version and is not even part of the cluster. Update Kubernetes on that node to the exact version that's running on cluster3-controlplane1. Then add this node to the cluster. Use kubeadm for this.
### Answer ###
# Upgrade Kubernetes to cluster3-controlplane1 version
k get node
ssh cluster3-node2
kubectl version
kubelet --version
kubeadm version
kubeadm upgrade node
apt update
apt show kubectl -a | grep 1.31
apt install kubectl=1.31.1-1.1 kubelet=1.31.1-1.1
kubelet --version
service kubelet restart
service kubelet status
# Add cluster3-node2 to cluster
ssh cluster3-controlplane1
kubeadm token create --print-join-command
ssh cluster3-node2
kubeadm join 192.168.100.31:6443 --token u9d0wi.hl937rbv168bpfxi --discovery-token-ca-cert-hash sha256:ad62fd26e3e454ac380d006c045fa3665ce20643d79eb0085614a02fa77749a8
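# Back on the main terminal, confirm the node joined and eventually reaches Ready state:
k get node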
Use context: kubectl config use-context k8s-c3-CCC
Create a Static Pod named my-static-pod in Namespace default on cluster3-controlplane1. It should be of image nginx:1.16-alpine and have resource requests for 10m CPU and 20Mi memory.
Then create a NodePort Service named static-pod-service which exposes that static Pod on port 80 and check if it has Endpoints and if it's reachable through the cluster3-controlplane1 internal IP address. You can connect to the internal node IPs from your main terminal.
### Answer ###
ssh cluster3-controlplane1
cd /etc/kubernetes/manifests/
kubectl run my-static-pod --image=nginx:1.16-alpine -o yaml --dry-run=client > my-static-pod.yaml
# /etc/kubernetes/manifests/my-static-pod.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: my-static-pod
name: my-static-pod
spec:
containers:
- image: nginx:1.16-alpine
name: my-static-pod
resources:
requests:
cpu: 10m
memory: 20Mi
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}
# And make sure it's running
k get pod -A | grep my-static
# Now we expose that static Pod
k expose pod my-static-pod-cluster3-controlplane1 --name static-pod-service --type=NodePort --port 80
# Then run and test
k get svc,ep -l run=my-static-pod
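# Finally check reachability via the node's internal IP (IP and NodePort below are placeholders, look them up first):
k get node cluster3-controlplane1 -o wide # shows the INTERNAL-IP
k get svc static-pod-service # shows the assigned NodePort
curl <INTERNAL-IP>:<NODE-PORT>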
Use context: kubectl config use-context k8s-c2-AC
Check how long the kube-apiserver server certificate is valid on cluster2-controlplane1. Do this with openssl or cfssl. Write the expiration date into /opt/course/22/expiration.
Also run the correct kubeadm command to list the expiration dates and confirm both methods show the same date.
Write the correct kubeadm command that would renew the apiserver server certificate into /opt/course/22/kubeadm-renew-certs.sh.
### Answer ###
# First let's find that certificate:
ssh cluster2-controlplane1
find /etc/kubernetes/pki | grep apiserver
# Next we use openssl to find out the expiration date
openssl x509 -noout -text -in /etc/kubernetes/pki/apiserver.crt | grep Validity -A2
# /opt/course/22/expiration
Dec 20 18:05:20 2023 GMT
# And we use kubeadm's built-in check to confirm the same expiration date:
kubeadm certs check-expiration | grep apiserver
# /opt/course/22/kubeadm-renew-certs.sh
kubeadm certs renew apiserver
Use context: kubectl config use-context k8s-c2-AC
Node cluster2-node1 has been added to the cluster using kubeadm and TLS bootstrapping.
Find the "Issuer" and "Extended Key Usage" values of the cluster2-node1:
kubelet client certificate, the one used for outgoing connections to the kube-apiserver.
kubelet server certificate, the one used for incoming connections from the kube-apiserver.
Write the information into file /opt/course/23/certificate-info.txt.
Compare the "Issuer" and "Extended Key Usage" fields of both certificates and make sense of these.
### Answer ###
# First we check the kubelet client certificate
ssh cluster2-node1
openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet-client-current.pem | grep Issuer
openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet-client-current.pem | grep "Extended Key Usage" -A1
# Next we check the kubelet server certificate
openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet.crt | grep Issuer
openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet.crt | grep "Extended Key Usage" -A1
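# Interpretation: the kubelet client certificate should list Extended Key Usage "TLS Web Client Authentication" (kubelet authenticating as a client towards the kube-apiserver), while the server certificate should list "TLS Web Server Authentication" (kubelet serving its own API). Write the Issuer and Extended Key Usage values of both certificates into /opt/course/23/certificate-info.txt on the main terminal.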
Use context: kubectl config use-context k8s-c1-H
There was a security incident where an intruder was able to access the whole cluster from a single hacked backend Pod.
To prevent this create a NetworkPolicy called np-backend in Namespace project-snake. It should allow the backend-* Pods only to:
connect to db1-* Pods on port 1111
connect to db2-* Pods on port 2222
Use the app label of Pods in your policy.
After implementation, connections from backend-* Pods to vault-* Pods on port 3333 should for example no longer work.
### Answer ###
# First we look at the existing Pods and their labels
k -n project-snake get pod
k -n project-snake get pod -L app
# We test the current connection situation and see nothing is restricted
k -n project-snake get pod -o wide
k -n project-snake exec backend-0 -- curl -s 10.44.0.25:1111
# Now we create the NP by copying and changing an example from the K8s Doc
vim 24_np.yaml
# 24_np.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: np-backend
namespace: project-snake
spec:
podSelector:
matchLabels:
app: backend
policyTypes:
- Egress # policy is only about Egress
egress:
- # first rule
to: # first condition "to"
- podSelector:
matchLabels:
app: db1
ports: # second condition "port"
- protocol: TCP
port: 1111
- # second rule
to: # first condition "to"
- podSelector:
matchLabels:
app: db2
ports: # second condition "port"
- protocol: TCP
port: 2222
k -f 24_np.yaml create
# And test again
k -n project-snake exec backend-0 -- curl -s 10.44.0.25:1111
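# Connections that are not allowed, for example to the vault Pods on port 3333, should now time out (the Pod IP is a placeholder, take it from the -o wide output above):
k -n project-snake exec backend-0 -- curl -s -m 3 <VAULT-POD-IP>:3333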
Use context: kubectl config use-context k8s-c3-CCC
Make a backup of etcd running on cluster3-controlplane1 and save it on the controlplane node at /tmp/etcd-backup.db.
Then create any kind of Pod in the cluster.
Finally restore the backup, confirm the cluster is still working and that the created Pod is no longer with us.
### Answer ###
# Etcd Backup
ssh cluster3-controlplane1
ETCDCTL_API=3 etcdctl snapshot save /tmp/etcd-backup.db --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key
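# Then we create a test Pod which should be gone again after the restore (name and image here are just examples):
kubectl run test-pod-before-restore --image=nginx:1.16-alpine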
# Etcd restore
# we stop all controlplane components
cd /etc/kubernetes/manifests/
mv * ..
watch crictl ps
# Now we restore the snapshot into a specific directory
ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcd-backup.db --data-dir /var/lib/etcd-backup --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key
# The restored files are located at the new folder /var/lib/etcd-backup, now we have to tell etcd to use that directory
vim /etc/kubernetes/etcd.yaml
...
- hostPath:
path: /var/lib/etcd-backup # change
type: DirectoryOrCreate
name: etcd-data
...
# Now we move all controlplane yaml again into the manifest directory
mv ../*.yaml .
watch crictl ps
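# Once the controlplane containers are back, confirm the cluster responds and the test Pod created before the restore is gone:
kubectl get pod
kubectl get node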