k8s v1.30 Installation Tutorial (Docker-based)

1. Environment Preparation

Installed on Ubuntu 22.04.5.
Each machine gets 4 CPU cores and 8 GB of RAM.
Everything here is run as the root user; adjust accordingly if you use a different user.

Hostname  IP
km        192.168.31.101
kn1       192.168.31.102
kn2       192.168.31.103

Edit the hosts file

vim /etc/hosts

192.168.31.101 km
192.168.31.102 kn1
192.168.31.103 kn2

Disable the swap partition

sed -ri 's/^([^#].*swap.*)$/#\1/' /etc/fstab && grep swap /etc/fstab && swapoff -a
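To double-check that swap is actually off, here is a read-only probe (it assumes util-linux's `swapon` is available, which it is on stock Ubuntu; it changes nothing):

```shell
# Prints "swap disabled" when no swap device is active.
if swapon --show 2>/dev/null | grep -q .; then
  echo "swap still active"
else
  echo "swap disabled"
fi
```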

Set kernel parameters

cat >> /etc/sysctl.conf <<EOF
vm.swappiness = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Load br_netfilter at boot (systemd reads every *.conf under /etc/modules-load.d/)
cat >> /etc/modules-load.d/neutron.conf <<EOF
br_netfilter
EOF

sudo modprobe br_netfilter
sudo sysctl -p
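You can verify that the sysctls took effect with a read-only check; "unknown" means the key is not available (e.g. br_netfilter is not loaded yet):

```shell
# Print each relevant key; nothing is modified here.
for key in net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables; do
  val=$(sysctl -n "$key" 2>/dev/null || echo unknown)
  echo "$key = $val"
done
```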

2. Install Docker

If Docker is already present, skip the installation steps and just configure the daemon.json file.
To check whether Docker exists, run docker ps; if it does not report that the command is missing, Docker is installed.
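The check above can also be scripted; `command -v` is more robust than parsing error messages:

```shell
# Prints "docker found" or "docker missing" depending on whether the CLI is on PATH.
if command -v docker >/dev/null 2>&1; then
  echo "docker found"
else
  echo "docker missing"
fi
```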

apt update
apt install -y ca-certificates curl gnupg lsb-release

# Create /usr/share/keyrings first if it does not exist
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

apt-get update
apt install docker-ce docker-ce-cli containerd.io docker-compose -y
# A registry mirror can also be configured here.
# Note: JSON allows no comments. exec-opts sets the cgroup driver to systemd
# (it must match the kubelet's cgroupDriver below); data-root is Docker's data
# directory, which you can customize or drop to keep the default.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "data-root": "/var/lib/docker",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "20m",
    "max-file": "5"
  }
}
EOF

# Restart Docker and enable it at boot
systemctl restart docker.service && systemctl enable docker.service

3. Install k8s

The latest version at the time I installed was 1.30.5; if a later install breaks, fall back to this version.

apt-get install -y socat conntrack ebtables ipset
apt-get update && apt-get install -y apt-transport-https

# Set up the Kubernetes apt source; if a directory is reported missing, create it first
curl -fsSL https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb/ /" | tee /etc/apt/sources.list.d/kubernetes.list

apt-get update
apt-get install -y kubelet kubeadm kubectl

systemctl enable kubelet

# Pull all the images in advance, on every machine
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.30.5
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.30.5
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.30.5
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.30.5
docker pull registry.aliyuncs.com/google_containers/coredns:v1.11.3
docker pull registry.aliyuncs.com/google_containers/etcd:3.5.15-0
docker pull registry.aliyuncs.com/google_containers/pause:3.9

docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.26.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.26.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.26.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.26.1

Install the CRI shim (cri-dockerd)

Recent Kubernetes releases no longer support Docker directly, so a shim is needed to make it work.
First download the matching cri-dockerd release from the cri-dockerd GitHub releases page.
(screenshot: cri-dockerd release assets page)

Download the build matching your OS, as follows. Mine is Jammy, so I download the Jammy deb package:

root@km:~# cat /etc/os-release 
PRETTY_NAME="Ubuntu 22.04.5 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.5 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy

Then install cri-dockerd

dpkg -i cri-dockerd_0.3.15.3-0.ubuntu-jammy_amd64.deb

It cannot be used right after installation; the pause (pod infra) image address must be configured first:

sed -ri 's@^(.*fd://).*$@\1 --pod-infra-container-image registry.aliyuncs.com/google_containers/pause:3.9@' /usr/lib/systemd/system/cri-docker.service

systemctl daemon-reload && systemctl restart cri-docker && systemctl enable cri-docker
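After restarting, a quick sanity check is to test for the Unix socket that kubeadm will use as the criSocket (assuming the default /run/cri-dockerd.sock location):

```shell
# Prints whether the cri-dockerd socket exists.
if [ -S /run/cri-dockerd.sock ]; then
  echo "cri-dockerd socket present"
else
  echo "cri-dockerd socket missing"
fi
```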

Initialize the cluster (run on the master)

Note: this step is performed on the master only; every step above must be done on all machines.

Generate the cluster init file

cluster.yaml contents:

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  # The master's address
  advertiseAddress: 192.168.31.101
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/cri-dockerd.sock
  imagePullPolicy: IfNotPresent
  # This node's name
  name: km
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
# Directory where cluster certificates are stored
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    # etcd data directory
    dataDir: /var/lib/etcd
# Registry to pull images from
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
# Cluster version
kubernetesVersion: 1.30.5
networking:
  dnsDomain: cluster.local
  # Service CIDR
  serviceSubnet: 10.96.0.0/12
  # Pod CIDR
  podSubnet: 10.244.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Must match the cgroupdriver configured for Docker above
cgroupDriver: systemd

Initialize the cluster

kubeadm init --config=cluster.yaml

On success, it prints something like this:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.31.101:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:7ead84933b9203e1708127f82ceecb995eb1d16757f5e692df11fc1af6345976

Run the following commands as prompted:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config


# Then run this; you can see there is one node now
root@km:~# kubectl get no
NAME   STATUS     ROLES           AGE   VERSION
km     NotReady   control-plane   95s   v1.30.5

Join nodes kn1 and kn2 (run this step on kn1 and kn2)
Be sure to append --cri-socket unix:///run/cri-dockerd.sock

kubeadm join 192.168.31.101:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:7ead84933b9203e1708127f82ceecb995eb1d16757f5e692df11fc1af6345976 --cri-socket unix:///run/cri-dockerd.sock

Once that completes, the other nodes show up:

root@km:~# kubectl get no
NAME   STATUS     ROLES           AGE     VERSION
km     NotReady   control-plane   3m37s   v1.30.5
kn1    NotReady   <none>          7s      v1.30.5
kn2    NotReady   <none>          4s      v1.30.5

4. Configure the Calico network (run on the master)

The nodes are not usable yet because the network plugin is missing. Fetch the following URL and save its contents to calico.yaml:
https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico-typha.yaml
Make sure you use the same version as I do.
Changes to make:

Replace every docker.io/calico with registry.cn-beijing.aliyuncs.com/kubesphereio

Uncomment the following setting and change it to the pod subnet from cluster.yaml:
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
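The image replacement lends itself to sed. It is demonstrated below on a one-line sample so the effect is visible; run the same expression with -i against calico.yaml. The CIDR uncommenting is safer to do by hand because of YAML indentation.

```shell
# Same substitution as described above, shown on a sample line.
# Against the real file: sed -i 's@docker.io/calico/@registry.cn-beijing.aliyuncs.com/kubesphereio/@g' calico.yaml
printf 'image: docker.io/calico/node:v3.26.1\n' |
  sed 's@docker.io/calico/@registry.cn-beijing.aliyuncs.com/kubesphereio/@g'
# prints: image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.26.1
```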

Run the following on the master:

kubectl apply -f calico.yaml

Give it a moment; barring surprises everything should come up.
Confirm with the commands below; output like the following means the cluster is healthy.

# STATUS should be Running for every pod
root@km:~# kubectl get po -A -o wide
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE   IP               NODE   NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-65bfdc67f-pmn9p   1/1     Running   0          42s   10.244.30.2      kn2    <none>           <none>
kube-system   calico-node-dnjkq                         1/1     Running   0          42s   192.168.31.101   km     <none>           <none>
kube-system   calico-node-fkgss                         1/1     Running   0          42s   192.168.31.103   kn2    <none>           <none>
kube-system   calico-node-jkmxl                         1/1     Running   0          42s   192.168.31.102   kn1    <none>           <none>
kube-system   calico-typha-554b7dc777-9lbwp             1/1     Running   0          42s   192.168.31.103   kn2    <none>           <none>
kube-system   coredns-cb4864fb5-86w4k                   1/1     Running   0          18m   10.244.30.3      kn2    <none>           <none>
kube-system   coredns-cb4864fb5-cc2bh                   1/1     Running   0          18m   10.244.30.1      kn2    <none>           <none>
kube-system   etcd-km                                   1/1     Running   0          18m   192.168.31.101   km     <none>           <none>
kube-system   kube-apiserver-km                         1/1     Running   0          18m   192.168.31.101   km     <none>           <none>
kube-system   kube-controller-manager-km                1/1     Running   0          18m   192.168.31.101   km     <none>           <none>
kube-system   kube-proxy-5m58m                          1/1     Running   0          18m   192.168.31.101   km     <none>           <none>
kube-system   kube-proxy-dssct                          1/1     Running   0          15m   192.168.31.103   kn2    <none>           <none>
kube-system   kube-proxy-gk8sk                          1/1     Running   0          15m   192.168.31.102   kn1    <none>           <none>
kube-system   kube-scheduler-km                         1/1     Running   0          18m   192.168.31.101   km     <none>           <none>


# STATUS should be Ready for every node
root@km:~# kubectl get no
NAME   STATUS   ROLES           AGE   VERSION
km     Ready    control-plane   21m   v1.30.5
kn1    Ready    <none>          17m   v1.30.5
kn2    Ready    <none>          17m   v1.30.5

If you run into problems, leave a comment; I will answer them as I see them.

Writing this took effort, so a like would be appreciated.
