Run on all nodes
Prepare three virtual machines (2 CPU / 4 GB RAM, CentOS 7+), configured as follows:
hostnamectl set-hostname k8s-master # run on the Master node
hostnamectl set-hostname k8s-node1 # run on Worker node 1
hostnamectl set-hostname k8s-node2 # run on Worker node 2
| IP address | Hostname | Role |
|---|---|---|
| 192.168.11.101 | k8s-master | Master |
| 192.168.11.102 | k8s-node1 | Node |
| 192.168.11.103 | k8s-node2 | Node |
Run the following commands on all nodes:
# Remove any old Docker version
sudo yum remove docker\*
# Install dependency tools
sudo yum install -y yum-utils
# Configure the Aliyun Docker package repo
sudo yum-config-manager \
--add-repo \
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install a specific Docker version
sudo yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6
# Start Docker and enable it at boot
sudo systemctl enable docker --now
# Configure Docker registry mirrors and daemon options
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": [
"https://hub-mirror.c.163.com",
"https://mirror.baidubce.com",
"https://registry.docker-cn.com",
"https://mirror.ccs.tencentyun.com",
"https://docker.mirrors.ustc.edu.cn",
"https://docker.1ms.run",
"https://hub.rat.dev",
"https://docker.1panel.live"
]
}
EOF
# Restart Docker to apply the changes
sudo systemctl daemon-reload
sudo systemctl restart docker
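A malformed daemon.json will keep the Docker daemon from starting, so it is worth checking that the file parses before restarting. A minimal sketch, assuming python3 is available; it validates a sample copy in a temp directory, and on a real node you would point it at /etc/docker/daemon.json instead:

```shell
# Validate a daemon.json before restarting Docker (sample copy in a temp dir).
tmp=$(mktemp -d)
cat > "$tmp/daemon.json" <<'EOF'
{
  "registry-mirrors": ["https://docker.1ms.run"]
}
EOF
if python3 -m json.tool "$tmp/daemon.json" >/dev/null 2>&1; then
  status=valid
else
  status=invalid
fi
echo "daemon.json is $status"
```

If the check reports invalid, fix the JSON (a stray trailing comma is the usual culprit) before running systemctl restart docker.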
Run the following on all nodes:
# 1. Turn off the firewall
systemctl stop firewalld
systemctl disable firewalld
# 2. Disable SELinux
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# 3. Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
systemctl reboot # reboot so the changes take effect
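The sed command above comments out every fstab line that mentions swap, so the swap partition stays disabled after the reboot. A quick demonstration on a throwaway sample file (the device names are hypothetical; the real command targets /etc/fstab):

```shell
# Run the same swap-disabling edit against a sample fstab.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
sed -ri 's/.*swap.*/#&/' "$fstab"
cat "$fstab"  # the swap line is now prefixed with '#'
```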
# 4. Set hostnames (run on each machine separately; skip if you already did this above)
# On the Master node:
hostnamectl set-hostname k8s-master
# On Node1:
hostnamectl set-hostname k8s-node1
# On Node2:
hostnamectl set-hostname k8s-node2
# 5. Configure hosts (append the following on all nodes)
cat >> /etc/hosts << EOF
192.168.11.101 k8s-master
192.168.11.102 k8s-node1
192.168.11.103 k8s-node2
EOF
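A quick way to confirm the mapping took effect is to look each hostname up. The sketch below parses a sample hosts file with awk so it runs anywhere; on a real node you can simply use getent hosts k8s-master against /etc/hosts:

```shell
# Look up each cluster hostname in a (sample) hosts file.
hosts_file=$(mktemp)
cat > "$hosts_file" <<'EOF'
192.168.11.101 k8s-master
192.168.11.102 k8s-node1
192.168.11.103 k8s-node2
EOF
for h in k8s-master k8s-node1 k8s-node2; do
  ip=$(awk -v h="$h" '$2 == h {print $1}' "$hosts_file")
  echo "$h -> ${ip:-NOT FOUND}"
done
```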
# 6. Allow iptables to see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
Run on all nodes:
# Configure the Aliyun Kubernetes yum repo, then install the
# kubelet/kubeadm/kubectl versions matching the cluster (v1.20.9)
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9
sudo systemctl enable kubelet --now
Run only on the Master node (k8s-master):
# 1. Pull the required images
sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
kube-apiserver:v1.20.9
kube-proxy:v1.20.9
kube-controller-manager:v1.20.9
kube-scheduler:v1.20.9
coredns:1.7.0
etcd:3.4.13-0
pause:3.2
)
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
EOF
chmod +x ./images.sh && ./images.sh
# 2. Initialize the Master node (the first IP below is the k8s-master machine's IP, 192.168.11.101 in this guide; replace it with your own. The two CIDR ranges after it can stay as they are.)
# None of the network ranges may overlap.
kubeadm init \
--apiserver-advertise-address=192.168.11.101 \
--control-plane-endpoint=k8s-master \
--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=10.244.0.0/16
# You can inspect the kubelet logs with:
journalctl -xefu kubelet
# If initialization fails, reset kubeadm:
kubeadm reset
rm -rf /etc/cni/net.d $HOME/.kube/config
# Clean up iptables rules
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
On success, the master prints output like the following:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join k8s-master:6443 --token 50rexj.yb0ys92ynnxxbo2s \
--discovery-token-ca-cert-hash sha256:10fd9d2a9f4e2d7dff502aa3fb31a80f0372666efc92defde3707b499ba000e9 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join k8s-master:6443 --token 50rexj.yb0ys92ynnxxbo2s \
--discovery-token-ca-cert-hash sha256:10fd9d2a9f4e2d7dff502aa3fb31a80f0372666efc92defde3707b499ba000e9
# 3. Configure the kubectl command-line tool and its credentials
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run only on the worker (Node) nodes, k8s-node1 and k8s-node2:
kubeadm join k8s-master:6443 \
--token 50rexj.yb0ys92ynnxxbo2s \
--discovery-token-ca-cert-hash sha256:10fd9d2a9f4e2d7dff502aa3fb31a80f0372666efc92defde3707b499ba000e9
If you lose the kubeadm join command above, regenerate it on the master with "kubeadm token create --print-join-command".
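If only the --discovery-token-ca-cert-hash part is lost, it can also be recomputed from the cluster CA certificate, which lives at /etc/kubernetes/pki/ca.crt on the master. The sketch below first generates a throwaway self-signed certificate so the pipeline can be demonstrated anywhere; on a real master, point the first openssl command at ca.crt instead:

```shell
# Recompute a discovery-token-ca-cert-hash style digest from a CA cert.
dir=$(mktemp -d)
# Throwaway self-signed cert standing in for /etc/kubernetes/pki/ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" \
  -keyout "$dir/ca.key" -out "$dir/ca.crt" -days 1 2>/dev/null
hash=$(openssl x509 -pubkey -in "$dir/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```

The printed value (prefixed with sha256:) is what goes after --discovery-token-ca-cert-hash in the join command.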
If running kubectl on a Node reports an error, configure the environment variable:
echo "export KUBECONFIG=/etc/kubernetes/kubelet.conf" >> /etc/profile
source /etc/profile
Once the nodes have joined, verify the cluster. Two pods will show 0/1 READY and the nodes will show NotReady. That is expected at this point: the Calico network is not installed yet. As long as the second command below shows all three nodes, everything is fine.
[root@k8s-master calico]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-74f56488c5-4z9ds 0/1 Running 1 (23h ago) 23h # 0/1 until the network is up
coredns-6554b8b87f-ttq5c 0/1 Running 1 (24m ago) 5d8h # two pods in total show 0/1
coredns-6554b8b87f-wgsqn 0/1 Running 1 (24m ago) 5d8h
etcd-k8s-master 1/1 Running 1 (24m ago) 5d8h
kube-apiserver-k8s-master 1/1 Running 1 (24m ago) 5d8h
kube-controller-manager-k8s-master 1/1 Running 1 (24m ago) 5d8h
kube-proxy-cxhjm 1/1 Running 1 (23m ago) 5d8h
kube-proxy-lvtxh 1/1 Running 1 (24m ago) 5d8h
kube-proxy-sbc94 1/1 Running 1 (24m ago) 5d8h
kube-scheduler-k8s-master 1/1 Running 1 (24m ago) 5d8h
[root@k8s-master calico]#
[root@k8s-master calico]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master NotReady control-plane 5d8h v1.28.2 # NotReady until the network plugin is installed
k8s-node1 NotReady <none> 5d8h v1.28.2 # NotReady
k8s-node2 NotReady <none> 5d8h v1.28.2 # NotReady
[root@k8s-master calico]#
# 1. Install the Calico network plugin
curl https://docs.projectcalico.org/archive/v3.20/manifests/calico.yaml -O # defunct: this URL no longer works
curl -LO https://docs.projectcalico.org/archive/v3.20/manifests/calico.yaml # defunct: this URL no longer works
mkdir /root/calico && cd /root/calico/
wget https://jiangstudy.online:8081/sources/calico.yaml
Edit the network settings in calico.yaml ## this step is critical: without it the network installs successfully but the cluster still has no connectivity
vi /root/calico/calico.yaml
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16" # change this CIDR to match the --pod-network-cidr value passed to kubeadm init in step 2
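The same edit can be scripted with sed instead of vi. A sketch on a sample snippet, assuming the manifest's default pool is 192.168.0.0/16 as in stock Calico v3.20 (check your downloaded file first; also note that in the stock manifest these two lines are commented out and must be uncommented as well):

```shell
# Replace the default Calico pool CIDR in a sample manifest snippet.
f=$(mktemp)
cat > "$f" <<'EOF'
            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"
EOF
sed -i 's#192\.168\.0\.0/16#10.244.0.0/16#' "$f"
grep -A1 CALICO_IPV4POOL_CIDR "$f"
```

On the real file, run the same sed against /root/calico/calico.yaml, then re-open it to confirm the value matches your --pod-network-cidr.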
# Deploy Calico
kubectl apply -f calico.yaml
Run only on the Master node:
kubectl get nodes
The output should now show every node in Ready state, and kubectl get pod -A should show every pod as 1/1.
Master-only operations
- Initialize the cluster (kubeadm init)
- Install the Calico network plugin
- Run kubectl commands to check the cluster state
Node-only operations
- Join the cluster with kubeadm join
- Configure the KUBECONFIG environment variable
Operations shared by all nodes
- Install and configure Docker
- Base environment setup (firewall, SELinux, swap, etc.)
- Install kubelet/kubeadm/kubectl
Following the steps above keeps the Master and Node responsibilities clearly separated and gets the cluster up smoothly.
Problem 1: in step 5.4, "Install the Calico network plugin", the manifest could not be downloaded from the official URL.
Workaround: wget https://jiangstudy.online:8081/sources/calico.yaml, then be sure to edit the CIDR inside it as described above.