Deploying Docker and Kubernetes on Alibaba Cloud

Pinned info:
1. Remote login password (uniform): appuser2022@devA
2. Alibaba Cloud public IP: 47.251.81.11 (subject to change)

Install docker-ce (yum repo)

# Step 1: install the required system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: add the repository definition
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: point the repository at the Aliyun mirror
sudo sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo

Install Kubernetes (yum repo)

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Refresh and list the yum repos
yum repolist

Alibaba Cloud open-source mirror site:
https://developer.aliyun.com/mirror/

Install docker-ce-20.10.0 (version pinned)

yum list --help
yum list --showduplicates | grep docker-ce 
yum install docker-ce-20.10.0
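
To confirm that the pinned version was installed, a quick check:

docker --version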

Enable Docker to start on boot (and start it now)
systemctl enable docker --now
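
Before going further it is worth checking which cgroup driver Docker is using, since a mismatch with the kubelet causes the init failure handled below:

docker info | grep -i 'cgroup driver'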

Check Docker: list containers and images

# docker ps
# docker images

Pull the image

# docker pull registry.cn-hangzhou.aliyuncs.com/acr-toolkit/ack-cube:1.0

Run the container

# docker run -itd -P  registry.cn-hangzhou.aliyuncs.com/acr-toolkit/ack-cube:1.0
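
With -P, Docker publishes the container's exposed port on a random high host port; one way to see the mapping (the container id below is a placeholder):

# docker ps --format '{{.Names}} -> {{.Ports}}'
# docker port <container-id>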

Query the repo for available kubeadm versions (to pin a specific one)

# yum list --showduplicates | grep kubeadm

Install kubeadm, kubelet, and kubectl

yum install kubeadm-1.23.0 kubelet-1.23.0 kubectl-1.23.0 -y
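
kubeadm expects the kubelet service to be enabled; per the standard kubeadm install steps:

systemctl enable kubelet --now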

Initialize the cluster with kubeadm (a Pod network CIDR and a version must be specified; the network plugin is usually Calico, whose manifests default to 192.168.0.0/16)
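
On a fresh ECS instance, kubeadm's preflight checks also expect swap to be off and bridged traffic to be visible to iptables; a minimal sketch of those prerequisites, assuming a stock CentOS image:

swapoff -a                                  # turn swap off for the current boot
sed -i '/ swap / s/^/#/' /etc/fstab         # keep it off across reboots
modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system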

kubeadm init --help
kubeadm init --pod-network-cidr 192.168.0.0/16 --kubernetes-version v1.23.0

kubeadm init fails: cgroup driver mismatch

Fix: change Docker's default cgroup driver.
vim /etc/docker/daemon.json

Search Baidu for "cgroupdriver":

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
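
If daemon.json already contains other settings (a registry mirror, for example), merge the key into the existing object rather than overwriting the file; a sketch with a placeholder mirror address:

{
  "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}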

Re-initialize Kubernetes

systemctl daemon-reload
systemctl restart docker
kubeadm reset
kubeadm init --pod-network-cidr 192.168.0.0/16 --kubernetes-version v1.23.0

When the install succeeds, kubeadm prints the following:

Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.25.46.13:6443 --token 0n2fg6.i72jaze8vu0tjpm2 \
        --discovery-token-ca-cert-hash sha256:cd71dc867ce22589783429cb541f42ad56dc293345d5e8e1b9679ff10e6c780b 

Configure kubectl as instructed above:

[root@iZrj90hd0bvgak5zx3clwaZ ~]# mkdir -p $HOME/.kube
[root@iZrj90hd0bvgak5zx3clwaZ ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@iZrj90hd0bvgak5zx3clwaZ ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
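
A quick sanity check that kubectl can now reach the API server:

kubectl cluster-info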

List the Kubernetes namespaces:

[root@iZrj9i7i0z7jcwcck3kq2vZ ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   61s
kube-node-lease   Active   62s
kube-public       Active   62s
kube-system       Active   62s

List the nodes (no is the kubectl short name for nodes):

[root@iZrj9i7i0z7jcwcck3kq2vZ ~]# kubectl get no
NAME                      STATUS     ROLES                  AGE   VERSION
izrj9i7i0z7jcwcck3kq2vz   NotReady   control-plane,master   69s   v1.23.0

Kubernetes official docs:
https://kubernetes.io/docs/home/
Calico official docs (the network plugin used for the cluster):
https://docs.tigera.io/calico/latest/getting-started/kubernetes/quickstart

[root@iZrj9i7i0z7jcwcck3kq2vZ ~]# kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/tigera-operator.yaml

[root@iZrj9i7i0z7jcwcck3kq2vZ ~]# kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/custom-resources.yaml
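
The rollout can be watched until every pod reports Running:

kubectl get po -A --watch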

Pod status changes from Pending to Running:

[root@iZrj9i7i0z7jcwcck3kq2vZ ~]# kubectl get po -A
NAMESPACE         NAME                                              READY   STATUS              RESTARTS   AGE
calico-system     calico-kube-controllers-6f4bf66c57-dfnr9          0/1     Pending             0          23s
calico-system     calico-node-vcbbk                                 0/1     Init:1/2            0          23s
calico-system     calico-typha-7cc8775f9-xgvh5                      1/1     Running             0          23s
calico-system     csi-node-driver-ps7ln                             0/2     ContainerCreating   0          23s
kube-system       coredns-64897985d-6dd8q                           0/1     Pending             0          3m24s
kube-system       coredns-64897985d-gmw7p                           0/1     Pending             0          3m24s
kube-system       etcd-izrj9i7i0z7jcwcck3kq2vz                      1/1     Running             1          3m39s
kube-system       kube-apiserver-izrj9i7i0z7jcwcck3kq2vz            1/1     Running             1          3m39s
kube-system       kube-controller-manager-izrj9i7i0z7jcwcck3kq2vz   1/1     Running             1          3m37s
kube-system       kube-proxy-tblcx                                  1/1     Running             0          3m24s
kube-system       kube-scheduler-izrj9i7i0z7jcwcck3kq2vz            1/1     Running             1          3m37s
tigera-operator   tigera-operator-6fb9964c84-9kx8b                  1/1     Running             0          82s

This changes to:

calico-apiserver   calico-apiserver-fc578d57b-5vm5d                  1/1     Running   0          68s
calico-apiserver   calico-apiserver-fc578d57b-jk4xl                  1/1     Running   0          68s
calico-system      calico-kube-controllers-6f4bf66c57-dfnr9          1/1     Running   0          2m13s
calico-system      calico-node-vcbbk                                 1/1     Running   0          2m13s
calico-system      calico-typha-7cc8775f9-xgvh5                      1/1     Running   0          2m13s
calico-system      csi-node-driver-ps7ln                             2/2     Running   0          2m13s
kube-system        coredns-64897985d-6dd8q                           1/1     Running   0          5m14s
kube-system        coredns-64897985d-gmw7p                           1/1     Running   0          5m14s
kube-system        etcd-izrj9i7i0z7jcwcck3kq2vz                      1/1     Running   1          5m29s
kube-system        kube-apiserver-izrj9i7i0z7jcwcck3kq2vz            1/1     Running   1          5m29s
kube-system        kube-controller-manager-izrj9i7i0z7jcwcck3kq2vz   1/1     Running   1          5m27s
kube-system        kube-proxy-tblcx                                  1/1     Running   0          5m14s
kube-system        kube-scheduler-izrj9i7i0z7jcwcck3kq2vz            1/1     Running   1          5m27s
tigera-operator    tigera-operator-6fb9964c84-9kx8b                  1/1     Running   0          3m12s
[root@iZrj9i7i0z7jcwcck3kq2vZ ~]# kubectl get no
NAME                      STATUS   ROLES                  AGE     VERSION
izrj9i7i0z7jcwcck3kq2vz   Ready    control-plane,master   6m43s   v1.23.0

Command used to join additional nodes to the cluster:

kubeadm join 172.25.46.15:6443 --token kbo3vw.wdwl5q6fa3a4lzes \
        --discovery-token-ca-cert-hash sha256:0abcf9c2335ecde21f97e9db304033dbad61cf609e8c1459a9b56315cd7af2b1 
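
Bootstrap tokens expire after 24 hours by default; a fresh join command can be printed on the control-plane node with:

kubeadm token create --print-join-command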

Deploy an application: the cube demo that was just run with Docker is now deployed with Kubernetes.
Open the Kubernetes docs and search for "Deployment":
https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/deployment/

vim cube.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

// Replace every nginx with cube, then change the image to the cube image
:%s/nginx/cube/g
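
The same edit non-interactively, as a sed sketch (the second command swaps in the cube image after the global rename):

sed -i 's/nginx/cube/g' cube.yaml
sed -i 's#image: cube:1.14.2#image: registry.cn-hangzhou.aliyuncs.com/acr-toolkit/ack-cube:1.0#' cube.yaml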

How it works: a Service will be created next to expose these Pods; the Service finds its backend Pods through a label selector, so the labels set here must match.
[root@iZrj9i7i0z7jcwcck3kq2vZ ~]# cat cube.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cube-deployment
  labels:
    app: cube
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cube
  template:
    metadata:
      labels:
        app: cube
    spec:
      containers:
      - name: cube
        image: registry.cn-hangzhou.aliyuncs.com/acr-toolkit/ack-cube:1.0
        ports:
        - containerPort: 80
[root@iZrj9i7i0z7jcwcck3kq2vZ ~]# kubectl apply -f cube.yaml
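
The rollout can also be tracked with the following (it blocks until all replicas are ready):

kubectl rollout status deployment/cube-deployment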

[root@iZrj9i7i0z7jcwcck3kq2vZ ~]# kubectl get po
NAME                               READY   STATUS    RESTARTS   AGE
cube-deployment-6cd8bf8764-4zj6d   0/1     Pending   0          17s
cube-deployment-6cd8bf8764-6ww8h   0/1     Pending   0          17s
cube-deployment-6cd8bf8764-nxql6   0/1     Pending   0          17s

Diagnosis: the only node is the master, which carries a NoSchedule taint.
Fix: remove the taint.

[root@iZrj9i7i0z7jcwcck3kq2vZ ~]# kubectl describe po cube-deployment-6cd8bf8764-4zj6d

The error events are as follows:

Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  13s (x6 over 5m22s)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
[root@iZrj9i7i0z7jcwcck3kq2vZ ~]# kubectl get no
NAME                      STATUS   ROLES                  AGE   VERSION
izrj9i7i0z7jcwcck3kq2vz   Ready    control-plane,master   54m   v1.23.0
[root@iZrj9i7i0z7jcwcck3kq2vZ ~]# kubectl edit no izrj9i7i0z7jcwcck3kq2vz

Fix: delete the three lines under the taints: field.
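
Equivalently, without hand-editing the node object, the taint can be removed in one command (the trailing minus deletes it):

kubectl taint nodes --all node-role.kubernetes.io/master-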

[root@iZrj9i7i0z7jcwcck3kq2vZ ~]# kubectl get po
NAME                               READY   STATUS    RESTARTS   AGE
cube-deployment-6cd8bf8764-4zj6d   1/1     Running   0          13m
cube-deployment-6cd8bf8764-6ww8h   1/1     Running   0          13m
cube-deployment-6cd8bf8764-nxql6   1/1     Running   0          13m
[root@iZrj9i7i0z7jcwcck3kq2vZ ~]# kubectl get po -owide
NAME                               READY   STATUS    RESTARTS   AGE   IP             NODE                      NOMINATED NODE   READINESS GATES
cube-deployment-6cd8bf8764-4zj6d   1/1     Running   0          14m   192.168.19.8   izrj9i7i0z7jcwcck3kq2vz   <none>           <none>
cube-deployment-6cd8bf8764-6ww8h   1/1     Running   0          14m   192.168.19.7   izrj9i7i0z7jcwcck3kq2vz   <none>           <none>
cube-deployment-6cd8bf8764-nxql6   1/1     Running   0          14m   192.168.19.9   izrj9i7i0z7jcwcck3kq2vz   <none>           <none>

Next, create a Service.
Open the Kubernetes docs and search for "Service":

[root@iZrj9i7i0z7jcwcck3kq2vZ ~]# vim cube-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376

Modified to:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort                            # added
  selector:
    app: cube                               # matches the labels in cube.yaml
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80                        # changed from 9376

Apply the Service manifest:

[root@iZrj9i7i0z7jcwcck3kq2vZ ~]# kubectl apply -f cube-svc.yaml 
service/my-service created
[root@iZrj9i7i0z7jcwcck3kq2vZ ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        71m
my-service   NodePort    10.101.251.27   <none>        80:31775/TCP   39s

Finally, open the public IP plus the NodePort (31775) in a browser.
Note: if the page is unreachable, add port 31775 to the security group (or check the firewall configuration).
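
A quick reachability check from outside the instance, using the public IP pinned at the top (which may change) and the NodePort shown above:

curl -I http://47.251.81.11:31775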

