How to Run the Kubernetes Control Plane (Master) Node and a Worker Node on the Same Machine (Kubernetes Cluster)

Table of Contents

  • Summary
  • Problem
  • Solution
  • References

Summary

In a Kubernetes cluster, a single machine can act as both the Kubernetes Control Plane (master) node and a Worker node, so that one machine takes on both roles. This article describes how to configure a Kubernetes Control Plane (master) node so that it also performs the Worker node function.

Problem

Following CSDN: 使用 keepalived 和 haproxy 实现Kubernetes Control Plane的高可用 (HA), I deployed a highly available (HA) Kubernetes Control Plane cluster and then tried to add a Worker node on the same machine:

kubeadm join 192.168.238.100:4300 --token si5oek.mbrw418p8mr357qt --discovery-token-ca-cert-hash sha256:0e23eb637e09afc4c6dbb1f891409b314d5731e46fe33d84793ba2d58da006d6

The command returned an error similar to the following:

deploy k8s-ha ,when join worker node to master,which master and worker node are in one machine ,return this error:
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR DirAvailable--etc-kubernetes-manifests]: /etc/kubernetes/manifests is not empty
[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists

The kubectl and kubeadm versions are as follows:

[root@Master ~]# kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.27.2
Kustomize Version: v5.0.1
Server Version: v1.27.7
[root@Master ~]# 
[root@Master ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.3", GitCommit:"25b4e43193bcda6c7328a6d147b1fb73a33f1598", GitTreeState:"clean", BuildDate:"2023-06-14T09:52:26Z", GoVersion:"go1.20.5", Compiler:"gc", Platform:"linux/amd64"}
[root@Master ~]# 

Solution

By default, a Kubernetes Control Plane (master) node does not run regular pods, because control plane nodes carry the following NoSchedule taint. Since this machine is already part of the cluster as a control plane node, there is no need to run kubeadm join on it again (which is exactly why the preflight checks above fail); instead, the existing node simply has to be allowed to schedule ordinary workloads:

[root@Master ~]# kubectl get nodes --selector='node-role.kubernetes.io/control-plane'
NAME     STATUS   ROLES           AGE   VERSION
master   Ready    control-plane   20h   v1.27.3
node1    Ready    control-plane   19h   v1.27.3
node2    Ready    control-plane   19h   v1.27.3
[root@Master ~]# 

[root@Master ~]# kubectl describe node master | grep Taint
Taints:             node-role.kubernetes.io/control-plane:NoSchedule

The NoSchedule effect (and the other two taint effects) means the following:

  • NoSchedule: No pod will be scheduled onto the node unless it has a matching toleration (a toleration example is sketched after this list). Pods already running on the node are not evicted.
  • PreferNoSchedule: Kubernetes tries to avoid scheduling pods that cannot tolerate the taint onto the node, but this is not guaranteed.
  • NoExecute: Pods that cannot tolerate the taint are not scheduled onto the node, and pods already running on the node are evicted.
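
As an alternative to removing the taint, a specific pod (or its controller's pod template) can declare a matching toleration so that only that workload is allowed onto the control plane nodes. The following is a minimal sketch; the pod name demo-on-control-plane and the nginx image are placeholders used only for illustration:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: demo-on-control-plane   # hypothetical name, for illustration only
spec:
  containers:
  - name: demo
    image: nginx
  # Target a control plane node (they carry this label, as used in the selector above)
  nodeSelector:
    node-role.kubernetes.io/control-plane: ""
  tolerations:
  # Tolerate the control plane taint so the scheduler is allowed to place this pod there
  - key: node-role.kubernetes.io/control-plane
    operator: Exists
    effect: NoSchedule
EOF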

Removing the NoSchedule taint solves the problem, as follows (using the master node as an example; repeat the same operation on the other control plane nodes):

[root@Master ~]# kubectl taint node master node-role.kubernetes.io/control-plane:NoSchedule-
node/master untainted

Note the trailing hyphen (-) at the end of the command; it indicates that the taint is to be removed.
Verify that the taint is gone:

[root@Master ~]# kubectl describe node node2 | grep Taint
Taints:             <none>
[root@Master ~]# 
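
To check the taints on all nodes at once instead of describing them one by one, a custom-columns query such as the following can be used (a sketch; this is only one of several ways to print taints):

# Show each node together with the keys of its taints; nodes with no taints show <none>
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'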

The taint can be removed from all three control plane nodes at once with the following one-liner (note that it uses the key-only form node-role.kubernetes.io/control-plane-, which removes the taint regardless of its effect):

for node in $(kubectl get nodes --selector='node-role.kubernetes.io/control-plane' | awk 'NR>1 {print $1}' ) ; do   kubectl taint node $node node-role.kubernetes.io/control-plane- ; done

Note that having the Kubernetes Control Plane (master) node also take on the Worker node role, as above, is done here only for testing and is generally not recommended. The control plane is a critical component that manages the entire cluster: it schedules tasks and workloads, monitors the state of nodes and containers, and so on. Letting control plane nodes also run regular workloads has downsides: it consumes control plane resources, increases latency, and can make the system unstable. Finally, it also carries security risks.
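
If the node should later go back to being a dedicated control plane node, the default taint can simply be re-added (shown here for the master node; the same applies to the other control plane nodes):

# Restore the default control plane taint so regular workloads are no longer scheduled on this node
kubectl taint node master node-role.kubernetes.io/control-plane:NoSchedule

Note that NoSchedule does not evict pods that are already running on the node (see the effect descriptions above), so any workloads already scheduled there have to be drained or deleted separately.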

References

Kubernetes: k8s-ha how to join worker node to master node, when master and worker node are in one machine #2219
Stack Overflow: Node had taints that the pod didn't tolerate error when deploying to Kubernetes cluster
Stack Overflow: Should I run "join" or "taint" after "kubeadm init"?
Stack Overflow: Master tainted - no pods can be deployed
51CTO: 如何实现kubectl taint nodes --all node-role.kubernetes.io/master-的具体操作步骤
Huawei Cloud: Managing Node Taints
Scheduling workloads on control plane nodes in kubernetes – a bad idea?
CSDN: 使用 keepalived 和 haproxy 实现Kubernetes Control Plane的高可用 (HA)
