A hands-on log of a simple k8s deployment
1. Set the hostname on each host
##############################################
hostnamectl set-hostname k8s-master; bash
hostnamectl set-hostname k8s-node1; bash
hostnamectl set-hostname k8s-node2; bash

2. Add hostname resolution on all three servers
###################################
cat >> /etc/hosts << EOF
192.168.0.124 k8s-master
192.168.0.164 k8s-node1
192.168.0.165 k8s-node2
EOF

3. Run the following on all three servers to set up passwordless SSH login
#########################
ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa
for i in 192.168.0.124 192.168.0.164 192.168.0.165 k8s-master k8s-node1 k8s-node2; do
expect -c "
spawn ssh-copy-id -i /root/.ssh/id_rsa.pub root@$i
expect {
  \"*yes/no*\" {send \"yes\r\"; exp_continue}
  \"*password*\" {send \"123456\r\"; exp_continue}
  \"*Password*\" {send \"123456\r\";}
}
"
done
#############################################################

4. Deploy the Etcd cluster
cfssl is used to generate the self-signed certificates, so download the cfssl tools first.

(1) Install cfssl
mkdir -p /opt/{cfssl,etcd-ssl} && cd /opt/cfssl && wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 && wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 && wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 && chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

(2) Create the following three files:

# Create ca-config.json
cd /opt/etcd-ssl
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

# Create ca-csr.json
cat > ca-csr.json << EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF

# Create server-csr.json (its "hosts" list must contain the etcd cluster host IPs),
# then rewrite the template IPs so they match this cluster:
sed -i 's/192.168.135.128/192.168.0.124/g' server-csr.json
sed -i 's/192.168.135.129/192.168.0.164/g' server-csr.json
sed -i 's/192.168.135.130/192.168.0.165/g' server-csr.json
cat server-csr.json

Generate the certificates:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca - && cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

[root@k8s-master etcd-ssl]# ls *pem
ca-key.pem  ca.pem  server-key.pem  server.pem

Install Etcd
Binary package download page: https://github.com/coreos/etcd/releases/tag/v3.2.12
The steps below are identical on all three planned etcd nodes; the only difference is that the etcd configuration file must use the current node's own IP.

Unpack the binary package:
mkdir /opt/etcd/{bin,cfg,ssl,tools} -p
cd /opt/etcd/tools
wget https://github.com/etcd-io/etcd/releases/download/v3.2.12/etcd-v3.2.12-linux-amd64.tar.gz
tar zxvf etcd-v3.2.12-linux-amd64.tar.gz
mv etcd-v3.2.12-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

Create the etcd configuration file /opt/etcd/cfg/etcd (remember to adjust the IP addresses for each node) and the systemd unit /usr/lib/systemd/system/etcd.service to manage etcd.

Copy the certificates generated above into the paths referenced by the configuration:
cd /opt/etcd-ssl
cp ca*pem server*pem /opt/etcd/ssl

yum install -y rsync
rsync -avzP /opt/* k8s-node1:/opt/
rsync -avzP /opt/* k8s-node2:/opt/
rsync -avzP /usr/lib/systemd/system/etcd.service k8s-node1:/usr/lib/systemd/system/
rsync -avzP /usr/lib/systemd/system/etcd.service k8s-node2:/usr/lib/systemd/system/
# After etcd.service has been pushed out, check that the file format is still intact.

Edit ETCD_INITIAL_CLUSTER by hand; everything else can be changed with the commands below.

On the first node run:
sed -i '/ETCD_INITIAL_CLUSTER=/d' /opt/etcd/cfg/etcd
sed -i '/ETCD_ADVERTISE_CLIENT_URLS=/a\ETCD_INITIAL_CLUSTER="etcd01=https://192.168.0.124:2380,etcd02=https://192.168.0.164:2380,etcd03=https://192.168.0.165:2380"' /opt/etcd/cfg/etcd
sed -i '/URLS/{s/192.168.0.196/192.168.0.124/g}' /opt/etcd/cfg/etcd

On the second node run:
sed -i '/ETCD_INITIAL_CLUSTER=/d' /opt/etcd/cfg/etcd
sed -i '/ETCD_ADVERTISE_CLIENT_URLS=/a\ETCD_INITIAL_CLUSTER="etcd01=https://192.168.0.124:2380,etcd02=https://192.168.0.164:2380,etcd03=https://192.168.0.165:2380"' /opt/etcd/cfg/etcd
sed -i '/URLS/{s/192.168.0.196/192.168.0.164/g}' /opt/etcd/cfg/etcd
sed -i '/NAME/{s/etcd01/etcd02/g}' /opt/etcd/cfg/etcd

On the third node run:
sed -i '/ETCD_INITIAL_CLUSTER=/d' /opt/etcd/cfg/etcd
sed -i '/ETCD_ADVERTISE_CLIENT_URLS=/a\ETCD_INITIAL_CLUSTER="etcd01=https://192.168.0.124:2380,etcd02=https://192.168.0.164:2380,etcd03=https://192.168.0.165:2380"' /opt/etcd/cfg/etcd
sed -i '/URLS/{s/192.168.0.196/192.168.0.165/g}' /opt/etcd/cfg/etcd
sed -i '/NAME/{s/etcd01/etcd03/g}' /opt/etcd/cfg/etcd

After editing, cat the file and confirm the configuration is correct:
cat /opt/etcd/cfg/etcd

Start the etcd service on every node:
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
systemctl status etcd

Once all nodes are up, check the etcd cluster health:
[root@k8s-master etcd-ssl]# /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.0.124:2379,https://192.168.0.164:2379,https://192.168.0.165:2379" cluster-health

The following output means the cluster was created successfully. If cluster creation fails with network timeouts or similar errors, check whether the firewall is enabled, turn it off and test again.
member 644c5469087216c8 is healthy: got healthy result from https://192.168.0.124:2379
member 7f51f4cdf2e7f45d is healthy: got healthy result from https://192.168.0.165:2379
member 9e279b14d0a43431 is healthy: got healthy result from https://192.168.0.164:2379
cluster is healthy
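The bodies of /opt/etcd/cfg/etcd and etcd.service are not reproduced in this log. As a rough reference only, a minimal pair of files for this static three-node layout might look like the sketch below; every value is an assumption, not the author's exact files. Note the 192.168.0.196 placeholder, which the sed commands above then rewrite to each node's real IP, and the etcd01 name that the seds rename on the other two nodes.

# Assumed example only - a typical environment file for a static three-node etcd cluster.
cat > /opt/etcd/cfg/etcd << EOF
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.196:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.196:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.196:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.196:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.0.124:2380,etcd02=https://192.168.0.164:2380,etcd03=https://192.168.0.165:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

# Assumed example only - a matching systemd unit passing the TLS material generated above.
cat > /usr/lib/systemd/system/etcd.service << 'EOF'
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd \
  --name=${ETCD_NAME} \
  --data-dir=${ETCD_DATA_DIR} \
  --listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
  --listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
  --advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
  --initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
  --initial-cluster=${ETCD_INITIAL_CLUSTER} \
  --initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
  --initial-cluster-state=new \
  --cert-file=/opt/etcd/ssl/server.pem \
  --key-file=/opt/etcd/ssl/server-key.pem \
  --peer-cert-file=/opt/etcd/ssl/server.pem \
  --peer-key-file=/opt/etcd/ssl/server-key.pem \
  --trusted-ca-file=/opt/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF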
5. Install Docker on the Node machines
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce -y
curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://bc437cce.m.daocloud.io
systemctl start docker
systemctl enable docker

6. Deploy the Flannel network (optional on the master; install it on all node machines)
Flannel stores its own subnet information in etcd, so it must be able to reach the Etcd cluster. Write the predefined subnet first:
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.0.124:2379,https://192.168.0.164:2379,https://192.168.0.165:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'

Download the binary package:
wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
mkdir -pv /opt/kubernetes/bin
mv flanneld mk-docker-opts.sh /opt/kubernetes/bin

Configure Flannel:
mkdir -pv /opt/kubernetes/cfg/
Create /opt/kubernetes/cfg/flanneld, the systemd unit /usr/lib/systemd/system/flanneld.service, and adjust /usr/lib/systemd/system/docker.service so that Docker starts inside the subnet assigned by Flannel.

# Note: flanneld needs the Etcd client certificates, so every node must have them.
# We already pushed /opt with rsync earlier, so the following is not needed again:
# mkdir -pv /opt/etcd/ssl/
# scp /opt/etcd/ssl/* k8s-node2:/opt/etcd/ssl/

Restart flannel and docker:
systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
systemctl status flanneld
systemctl restart docker

Check that it took effect:
[root@k8s-master etcd-ssl]# ps -ef | grep docker
root      8040     1  0 10:47 ?      00:00:00 /usr/bin/dockerd --bip=172.17.47.1/24 --ip-masq=false --mtu=1450
root      8196  5383  0 10:48 pts/0  00:00:00 grep --color=auto docker

Flannel is deployed so that containers end up on the same overlay network and can talk to each other.
Test: check the Flannel IPs of the master and both nodes, then verify with ping that they can all reach each other.
If the pings succeed, Flannel was deployed successfully. If not, check the logs: journalctl -u flannel

Look at the IPs:
[root@k8s-master etcd-ssl]# ssh k8s-master 'hostname -I'
192.168.0.124 172.17.47.1 172.17.47.0
[root@k8s-master etcd-ssl]# ssh k8s-node1 'hostname -I'
192.168.0.164 172.17.100.1 172.17.100.0
[root@k8s-master etcd-ssl]# ssh k8s-node2 'hostname -I'
192.168.0.165 172.17.22.1 172.17.22.0

Ping connectivity test:
[root@k8s-master etcd-ssl]# ping 172.17.47.1
PING 172.17.47.1 (172.17.47.1) 56(84) bytes of data.
[root@k8s-master etcd-ssl]# ping 172.17.100.1
64 bytes from 172.17.100.1: icmp_seq=1 ttl=64 time=0.204 ms
[root@k8s-master etcd-ssl]# ping 172.17.22.1
64 bytes from 172.17.22.1: icmp_seq=2 ttl=64 time=0.131 ms

# The ETCD cluster and the Docker + Flannel network on the Node machines are now fully deployed.
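The bodies of /opt/kubernetes/cfg/flanneld, flanneld.service and the modified docker.service are likewise not reproduced in this log. A minimal sketch of how the three usually fit together in this kind of deployment is below; the paths and option values are assumptions, not the author's exact files.

# Assumed example only - flanneld options pointing at the etcd cluster and its certificates.
cat > /opt/kubernetes/cfg/flanneld << 'EOF'
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.0.124:2379,https://192.168.0.164:2379,https://192.168.0.165:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

# Assumed example only - flanneld writes its assigned subnet to /run/flannel/subnet.env via
# mk-docker-opts.sh; docker.service then sources that file and starts dockerd with
# $DOCKER_NETWORK_OPTIONS, which is where the --bip/--mtu options seen in ps -ef come from.
cat > /usr/lib/systemd/system/flanneld.service << 'EOF'
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# In docker.service the relevant lines would then be roughly:
#   EnvironmentFile=/run/flannel/subnet.env
#   ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS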
Deploy the remaining master components
Before deploying Kubernetes itself, make absolutely sure that etcd, flannel and docker are working properly; fix any problems first.

Generate the certificates. Create the CA:
mkdir -p /opt/kuber-ssl && cd /opt/kuber-ssl
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
echo $?

Create the CSR config used to generate the apiserver certificate:
cat > server-csr.json << EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.236.128",
    "192.168.236.129",
    "192.168.236.130",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
# 10.0.0.1 is the gateway of the virtual service network that DNS will use later; leave it as is.
# Rewrite the template IPs so they match this cluster:
sed -i 's/192.168.236.128/192.168.0.124/g' server-csr.json
sed -i 's/192.168.236.129/192.168.0.164/g' server-csr.json
sed -i 's/192.168.236.130/192.168.0.165/g' server-csr.json

# Generate the api-server certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
echo $?

Create the CSR config for the kube-proxy certificate (kube-proxy-csr.json), then generate the kube-proxy certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
echo $?
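The body of kube-proxy-csr.json is not shown in this log. A typical CSR config for it, following the same pattern as the other CSR files in this walkthrough, would look roughly like the sketch below (assumed, not the author's exact file):

# Assumed example only - CSR config for the kube-proxy client certificate.
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF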
The following certificate files are eventually generated:
ls *pem
ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  server-key.pem  server.pem

7. Deploy the apiserver component
Binary download page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md
Downloading the single kubernetes-server-linux-amd64.tar.gz package is enough; it contains all of the required components.
# wget https://dl.k8s.io/v1.11.10/kubernetes-server-linux-amd64.tar.gz    # requires access to dl.k8s.io
mkdir -pv /opt/kubernetes/{bin,cfg,ssl,tools} && cd /opt/kubernetes/tools/
wget http://resource.bestyunyan.club//server/tgz/kubernetes-server-linux-amd64.tar.gz
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager kubectl /opt/kubernetes/bin

Copy the certificates from the machine where they were generated to the masters:
cd /opt/kuber-ssl && cp server.pem server-key.pem ca.pem ca-key.pem /opt/kubernetes/ssl/
# scp server.pem server-key.pem ca.pem ca-key.pem k8s-master1:/opt/kubernetes/ssl/
# scp server.pem server-key.pem ca.pem ca-key.pem k8s-master2:/opt/kubernetes/ssl/

Create the token file /opt/kubernetes/cfg/token.csv (it is used again later for the kubelet bootstrap), the apiserver configuration file /opt/kubernetes/cfg/kube-apiserver, and the systemd unit /usr/lib/systemd/system/kube-apiserver.service to manage the apiserver.
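The bodies of token.csv and the kube-apiserver configuration file are not included in this log. A rough sketch, consistent with the bootstrap token (674c457d4dcf2eefe4920d7dbb6b0ddc), the etcd endpoints and the 10.0.0.0/24 service network used elsewhere in this walkthrough, might look like the following; every flag value is an assumption, not the author's exact file.

# Assumed example only - token.csv format is: token,user,uid,group
cat > /opt/kubernetes/cfg/token.csv << EOF
674c457d4dcf2eefe4920d7dbb6b0ddc,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

# Assumed example only - the options that kube-apiserver.service would pass to the binary.
cat > /opt/kubernetes/cfg/kube-apiserver << 'EOF'
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.0.124:2379,https://192.168.0.164:2379,https://192.168.0.165:2379 \
--bind-address=192.168.0.124 \
--secure-port=6443 \
--advertise-address=192.168.0.124 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF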
8. Deploy the scheduler component
Create the scheduler configuration file /opt/kubernetes/cfg/kube-scheduler and the systemd unit /usr/lib/systemd/system/kube-scheduler.service to manage it.

9. Deploy the controller-manager component
Create the controller-manager configuration file /opt/kubernetes/cfg/kube-controller-manager and the systemd unit /usr/lib/systemd/system/kube-controller-manager.service, then start it:
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager

Check the state of all the services:
systemctl status etcd | awk '/Active/{print $3}'
systemctl status kube-apiserver | awk '/Active/{print $3}'
systemctl status kube-scheduler | awk '/Active/{print $3}'
systemctl status kube-controller-manager | awk '/Active/{print $3}'

The following output means everything is OK:
/opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}

---------------------- The following operations are done on the master node: ---------------------------
Bind the kubelet-bootstrap user to the system cluster role:
/opt/kubernetes/bin/kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

Create the kubeconfig files.
Run the following in the directory where the kubernetes certificates were generated:
cd /opt/kuber-ssl

# The apiserver address (use the internal load-balancer address if there is one)
KUBE_APISERVER="https://192.168.0.124:6443"
BOOTSTRAP_TOKEN=674c457d4dcf2eefe4920d7dbb6b0ddc

# Set the cluster parameters
/opt/kubernetes/bin/kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set the client authentication parameters
/opt/kubernetes/bin/kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set the context parameters
/opt/kubernetes/bin/kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
/opt/kubernetes/bin/kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# Create the kube-proxy kubeconfig file
/opt/kubernetes/bin/kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

/opt/kubernetes/bin/kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

/opt/kubernetes/bin/kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

/opt/kubernetes/bin/kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

ls
bootstrap.kubeconfig  kube-proxy.kubeconfig

Copy these two files to /opt/kubernetes/cfg on the Node machines:
scp *.kubeconfig k8s-node1:/opt/kubernetes/cfg/
scp *.kubeconfig k8s-node2:/opt/kubernetes/cfg/

---------------------- The following operations are done on the node machines: ---------------------------
10. Deploy the kubelet component (run the rsync push commands below on the master node)
Copy kubelet and kube-proxy from the server package downloaded earlier into /opt/kubernetes/bin on each node:
rsync -avzP /opt/kubernetes/tools/kubernetes/server/bin/* k8s-node1:/opt/kubernetes/bin/
rsync -avzP /opt/kubernetes/tools/kubernetes/server/bin/* k8s-node2:/opt/kubernetes/bin/

On each node create the kubelet configuration file /opt/kubernetes/cfg/kubelet, the parameter file /opt/kubernetes/cfg/kubelet.config and the systemd unit /usr/lib/systemd/system/kubelet.service, then start the kubelet. Once the nodes have joined, they show up as Ready:
/opt/kubernetes/bin/kubectl get nodes
NAME              STATUS    AGE       VERSION
…                 Ready     1h        v1.11.6
192.168.236.130   Ready     2m        v1.11.6
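The kubelet configuration bodies are not included in this log, and neither is the TLS-bootstrap approval step; with a bootstrap token and no auto-approval configured, a node normally only turns Ready after its CSR has been approved on the master. As a rough sketch under the same paths and bootstrap kubeconfig as above (every value is an assumption, not the author's exact file):

# Assumed example only - kubelet options using the bootstrap kubeconfig copied to the node.
cat > /opt/kubernetes/cfg/kubelet << 'EOF'
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.0.164 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF

# After starting the kubelet, list and approve the pending bootstrap CSRs on the master;
# the node only appears as Ready in `kubectl get nodes` once its certificate is signed.
/opt/kubernetes/bin/kubectl get csr
/opt/kubernetes/bin/kubectl certificate approve <csr-name>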
11. Deploy the kube-proxy component
On each node create the kube-proxy configuration file /opt/kubernetes/cfg/kube-proxy and the systemd unit /usr/lib/systemd/system/kube-proxy.service, then start it and check the cluster again:

/opt/kubernetes/bin/kubectl get nodes
NAME              STATUS    AGE       VERSION
…                 Ready     2h        v1.11.6
192.168.236.130   Ready     46m       v1.11.6

/opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}

12. Run a test example
Create an Nginx web deployment to check that the cluster works:
/opt/kubernetes/bin/kubectl run nginx --image=nginx --replicas=3
/opt/kubernetes/bin/kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort

Look at the Pods and the Service:
/opt/kubernetes/bin/kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
nginx-64f497f8fd-fjgt2   1/1       Running   3          28d
nginx-64f497f8fd-gmstq   1/1       Running   3          28d
nginx-64f497f8fd-q6wk9   1/1       Running   3          28d

Show the details of a pod:
/opt/kubernetes/bin/kubectl describe pod nginx-64f497f8fd-fjgt2

/opt/kubernetes/bin/kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        27m
nginx        NodePort    10.0.0.9     <none>        88:37817/TCP   39s

The external port is 37817; the service can be reached on any node IP plus that port. k8s automatically load-balances across the pods on the nodes.

Test it. On node1:
docker ps
CONTAINER ID   IMAGE
209301338ab9   nginx
hostname -I
192.168.0.164 172.17.100.1 172.17.100.0
echo "我是nginx-164-node1" > index.html
docker cp ./index.html 20:/usr/share/nginx/html/    # a container ID prefix is enough

On node2:
[root@k8s-node2 etcd-ssl]# hostname -I
192.168.0.165 172.17.22.1 172.17.22.0
[root@k8s-node2 etcd-ssl]# docker ps
CONTAINER ID   IMAGE
7495e46b89ff   nginx
a2484aa751d5   nginx
echo "我是nginx-165-node2-1" > index.html
docker cp ./index.html 74:/usr/share/nginx/html/
echo "我是nginx-165-node2-2" > index.html
docker cp ./index.html a2:/usr/share/nginx/html/

The result:
[root@k8s-master kuber-ssl]# for i in `seq 10`; do curl 192.168.0.164:37817 >> ./test.txt; sleep 1; done 2>/dev/null && cat test.txt
我是nginx-164-node1
我是nginx-165-node2-2
我是nginx-164-node1
我是nginx-164-node1
我是nginx-165-node2-2
我是nginx-164-node1
我是nginx-165-node2-2
我是nginx-165-node2-1
我是nginx-164-node1
我是nginx-165-node2-1

13. Install and deploy the k8s UI Dashboard
The default image lives on Google's registry, so a Chinese mirror is used here. It is recommended to pull the image on every node first:
docker pull mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.8.3

On the master, download the yaml file (an already-modified version):
mkdir -p /opt/yaml && cd /opt/yaml
wget http://resource.bestyunyan.club//server/yaml/kubernetes-dashboard.yaml
# The modified parts are the container image (switched to the mirror above)
# and the Service section (NodePort access on port 30000).

The full version:
cat > kubernetes-dashboard.yaml << EOF
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.8.
#
# Example usage: kubectl create -f <this_file>

# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.8.3
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard
EOF
# Access is exposed as a NodePort on port 30000.

# Create the rbac authorization yaml
cat > dashboard-admin.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
EOF

Create the dashboard and the rbac binding:
kubectl create -f kubernetes-dashboard.yaml
kubectl create -f dashboard-admin.yaml
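Right after creating the two yamls it is worth confirming that the Service really got the NodePort configured above; a quick check (a standard kubectl command, not part of the original log) is:

kubectl -n kube-system get svc kubernetes-dashboard
# Expect something like 443:30000/TCP in the PORT(S) column, matching the nodePort set in the yaml.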
Look at the pods:
[root@k8s-master key]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                   READY     STATUS    RESTARTS   AGE       IP            NODE            NOMINATED NODE
default       nginx-64f497f8fd-6ksl9                 1/1       Running   0          2h        172.17.1.2    192.168.0.164
default       nginx-64f497f8fd-87scv                 1/1       Running   0          2h        172.17.46.2   192.168.0.165
default       nginx-64f497f8fd-r2pj6                 1/1       Running   0          2h        172.17.46.3   192.168.0.165
kube-system   kubernetes-dashboard-b644d546b-ftpb9   1/1       Running   0          19m       172.17.1.3    192.168.0.164

By default, once the deployment succeeds, the dashboard can be reached directly at https://NODE_IP:30000 (the NodePort configured above). To actually log in and look around, you need either a kubeconfig or an access token; an access token is used here.

Generate the token and copy it:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') | grep token
Name:         admin-user-token-6gk2h
Type:         kubernetes.io/service-account-token
token:        eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTZnazJoIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI2M2JlYzIzYS03YzY5LTExZTktODc4MS0wMDBjMjk0NjFjYjEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.OFm-xaTL4eiRDGP44PUVVEViSNCeDlswboATLfZ3YUW7VACaHqFcZRnr6t-2Wp_jCgeJ6HldBE52KS43LSFISKlV4YfJ62KPKV-D4l9BLM4uXDal3dFn7Xc9cK7fa1S7zbkWCqVs97Q51YWTtf0tOpPCcfIkcBTrnyswmiyP6EUA9qt9vM4qnrqUuLQSeuEqUAzjrPnAYzWt5z_zjinjDv0S3yXiqnHP0mbjkwQFeA8C_4m6jrWm2jxTPDlms1QPQ5WrP3hyWGHKKyDN_CORGoUwG8CW37QD46WI637TB8iyq5-rbGJRuUC17DJ_F5uGFp0ntDABO_1yCPEX1HuTpQ

In the browser, choose the token option and paste the token in to log in.
If the browser warns that the connection is not private, see https://www.jianshu.com/p/40c0405811ee for how to handle it.
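One caveat: the grep above looks for an admin-user secret, but this walkthrough only created the kubernetes-dashboard service account and bound it to cluster-admin, so on a cluster built exactly as described the admin-user secret may not exist. In that case the same kind of token can be read from the kubernetes-dashboard service account's own secret, for example:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kubernetes-dashboard-token | awk '{print $1}') | grep token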