KubeSphere minimal installation

KubeSphere

The default Kubernetes dashboard is of limited use; with KubeSphere we can cover the entire DevOps pipeline.
KubeSphere integrates many components, so its cluster requirements are higher.
https://kubesphere.io/
Kuboard is also quite good, with lower cluster requirements.
https://kuboard.cn/support/

Introduction

KubeSphere is an open-source, cloud-native distributed multi-tenant container management platform built on top of Kubernetes, the mainstream container orchestration platform. It provides an easy-to-use UI with wizard-style workflows, lowering the learning curve of container orchestration while greatly reducing the day-to-day complexity of development, testing, and operations.

Installation

1. Prerequisites
https://kubesphere.io/docs/v2.1/zh-CN/installation/prerequisites

2. Set up the prerequisites

2.1 Install Helm (run on the master node)
Helm is the package manager for Kubernetes. Like apt on Ubuntu, yum on CentOS, or pip for Python, a package manager lets you quickly find, download, and install software. Helm v2 consists of the client component helm and the server component Tiller; it bundles a set of Kubernetes resources into a single managed unit and is the standard way to find, share, and use software built for Kubernetes.
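To make the "package" idea concrete, here is a minimal sketch of what a chart (Helm's package format) looks like on disk. The chart name `demo-chart` and its contents are purely illustrative:

```shell
# Create an illustrative chart skeleton (the name "demo-chart" is hypothetical).
# Every chart has a Chart.yaml with metadata, a values.yaml with default
# configuration, and a templates/ directory of Kubernetes manifests.
mkdir -p demo-chart/templates
cat > demo-chart/Chart.yaml <<'EOF'
apiVersion: v1
name: demo-chart
version: 0.1.0
description: Illustrative chart skeleton
EOF
cat > demo-chart/values.yaml <<'EOF'
replicaCount: 1
EOF
# Show the resulting layout
ls demo-chart
```

When you `helm install` a chart like this, Helm renders the templates with the values and hands the resulting manifests to the cluster as one release.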
1) Install

curl -SLO https://get.helm.sh/helm-v2.16.3-linux-amd64.tar.gz

tar xzvf helm-v2.16.3-linux-amd64.tar.gz

mv linux-amd64/helm /usr/local/bin/

mv linux-amd64/tiller /usr/local/bin/

2) Verify the version

helm version
3) Create RBAC permissions (run on master)
Create helm-rbac.yaml with the following content:

helm-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin  
subjects: 
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

2.2 Install Tiller (run on master)

1) Initialize

helm init --service-account=tiller --tiller-image=registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.16.3 --history-max 300

--tiller-image specifies a mirror for the Tiller image; without it, the pull from the default registry may be blocked. This mirror also works well:
jessestuart/tiller:v2.16.3

Verify helm and tiller:

kubectl -n kube-system get pods|grep tiller

Prepare the node for PV installation
Check whether the node has a taint:

kubectl describe node k8s-node1|grep Taint

Remove the taint (note: add it back after KubeSphere has been installed):

kubectl taint nodes k8s-node1 node-role.kubernetes.io/master:NoSchedule-

Wait until the tiller pod deployed on the node is up.

2.3 Install OpenEBS

# create the OpenEBS namespace
kubectl create ns openebs
# make sure helm is initialized
helm init
# switch the stable repo to a mirror
helm repo remove stable
helm repo add stable http://mirror.azure.cn/kubernetes/charts

helm install --namespace openebs --name openebs stable/openebs --version 1.5.0

# make openebs-hostpath the default StorageClass
kubectl patch storageclass openebs-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
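The patch above merges a default-class annotation into the StorageClass object; afterwards `kubectl get sc` should show openebs-hostpath marked `(default)`. The resulting object carries roughly this metadata (fields abridged; the provisioner name is what OpenEBS hostpath normally uses):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-hostpath
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: openebs.io/local
```

With a default StorageClass in place, the KubeSphere installer can create its PVCs without naming a class explicitly.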


3. Minimal KubeSphere installation
If your cluster has more than 1 CPU core and more than 2 GB of available memory, you can proceed with the following steps.
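A quick pre-flight sketch of that resource check, run on the node itself (thresholds follow the text above; on a real cluster you would also subtract what is already in use):

```shell
# Check the minimal-install requirements: > 1 CPU core and > 2 GB of memory.
cores=$(nproc)
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
echo "CPU cores:   ${cores}"
echo "Memory (kB): ${mem_kb}"
# 2 GB = 2 * 1024 * 1024 kB
[ "${cores}" -gt 1 ] || echo "WARNING: more than 1 CPU core is recommended"
[ "${mem_kb}" -gt 2097152 ] || echo "WARNING: more than 2 GB of memory is recommended"
```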

# put the following content into the yaml file
vi kubesphere-mini.yaml

---
apiVersion: v1
kind: Namespace
metadata:
  name: kubesphere-system

---
apiVersion: v1
data:
  ks-config.yaml: |
    ---
    persistence:
      storageClass: ""
    etcd:
      monitoring: False
      endpointIps: 192.168.0.7,192.168.0.8,192.168.0.9
      port: 2379
      tlsEnable: True
    common:
      mysqlVolumeSize: 20Gi
      minioVolumeSize: 20Gi
      etcdVolumeSize: 20Gi
      openldapVolumeSize: 2Gi
      redisVolumSize: 2Gi
    metrics_server:
      enabled: False
    console:
      enableMultiLogin: False  # enable/disable multi login
      port: 30880
    monitoring:
      prometheusReplicas: 1
      prometheusMemoryRequest: 400Mi
      prometheusVolumeSize: 20Gi
      grafana:
        enabled: False
    logging:
      enabled: False
      elasticsearchMasterReplicas: 1
      elasticsearchDataReplicas: 1
      logsidecarReplicas: 2
      elasticsearchMasterVolumeSize: 4Gi
      elasticsearchDataVolumeSize: 20Gi
      logMaxAge: 7
      elkPrefix: logstash
      containersLogMountedPath: ""
      kibana:
        enabled: False
    openpitrix:
      enabled: False
    devops:
      enabled: False
      jenkinsMemoryLim: 2Gi
      jenkinsMemoryReq: 1500Mi
      jenkinsVolumeSize: 8Gi
      jenkinsJavaOpts_Xms: 512m
      jenkinsJavaOpts_Xmx: 512m
      jenkinsJavaOpts_MaxRAM: 2g
      sonarqube:
        enabled: False
        postgresqlVolumeSize: 8Gi
    servicemesh:
      enabled: False
    notification:
      enabled: False
    alerting:
      enabled: False
kind: ConfigMap
metadata:
  name: ks-installer
  namespace: kubesphere-system

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ks-installer
  namespace: kubesphere-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: null
  name: ks-installer
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - extensions
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - batch
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - tenant.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - certificates.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - devops.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - monitoring.coreos.com
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - logging.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - jaegertracing.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - storage.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ks-installer
subjects:
- kind: ServiceAccount
  name: ks-installer
  namespace: kubesphere-system
roleRef:
  kind: ClusterRole
  name: ks-installer
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    app: ks-install
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ks-install
  template:
    metadata:
      labels:
        app: ks-install
    spec:
      serviceAccountName: ks-installer
      containers:
      - name: installer
        image: kubesphere/ks-installer:v2.1.1
        imagePullPolicy: "Always"

# apply the minimal KubeSphere installation:
kubectl apply -f kubesphere-mini.yaml 

Note: you must wait until the ks-installer pod has been created and is Running before the log below becomes available:

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

The installation is complete when the log ends with a success message.


Check the status of all pods; every pod must be Running, otherwise the installation has not finished:

kubectl get pods --all-namespaces
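As an illustration of what to look for in that output, the snippet below filters a hypothetical sample down to pods that are not yet Running; any line it prints needs attention. On a real cluster you would pipe the live command output into the same awk filter.

```shell
# Hypothetical sample of `kubectl get pods --all-namespaces` output,
# used only to illustrate the filtering.
cat > /tmp/pods-sample.txt <<'EOF'
NAMESPACE           NAME              READY   STATUS             RESTARTS   AGE
kubesphere-system   ks-installer-abc  1/1     Running            0          10m
kubesphere-system   ks-console-def    0/1     ImagePullBackOff   0          10m
openebs             openebs-ndm-xyz   1/1     Running            0          30m
EOF
# STATUS is the 4th column; skip the header row (NR > 1):
awk 'NR > 1 && $4 != "Running"' /tmp/pods-sample.txt
```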

Once all pods are Running, open the KubeSphere UI at http://&lt;node-IP&gt;:30880. The default cluster administrator account is admin / P@88w0rd.

After the installation has succeeded, re-add the taint:

[root@k8s-node1 local]# kubectl taint nodes k8s-node1 node-role.kubernetes.io/master=:NoSchedule

Common problems during installation

1. Pod status is ImagePullBackOff / CrashLoopBackOff / Pending

This means the image pull failed due to network problems, or the pod could not be created, and you can only wait. If it still has not finished after a day, the installation probably cannot recover; the remedy is to reinstall.

2. workspace failed, and the installation stops and cannot continue

Note: this message appears in the final stage of the KubeSphere installation. It means the workspace failed to install; similar failures can occur for the system workspace, the KubeSphere configuration, and other components.
Remedy: rerun the installer (kubectl rollout restart deploy -n kubesphere-system ks-installer)

3. prometheus failed, and all monitoring figures on the console show 0

Note: here the installation succeeds and you can log in to the console, but all monitoring data is 0. If you look carefully at the installation log, you may find messages about the monitoring component failing to install. Inspect the Prometheus resources with:

kubectl get prometheuses -n kubesphere-monitoring-system
# Uninstall
# remove the old helm installation:
helm reset -f

# delete helm's local data
rm -rf /root/.helm

# delete the secrets, serviceaccounts, and clusterrolebindings related to tiller
kubectl get -n kube-system secrets,sa,clusterrolebinding -o name|grep tiller|xargs kubectl -n kube-system delete

# delete the resources related to the helm client
kubectl get all -n kube-system -l app=helm -o name|xargs kubectl delete -n kube-system



User: hhm-hr, password: Hhm123456

User: admin, password: P@88w0rd


SonarQube is missing when creating credentials

SonarQube account: admin, password: hhmadmin

token:3632d7234aee93f5d33863a99aa1309914046735

# check whether SonarQube is already running in the cluster
kubectl get svc -n kubesphere-devops-system | grep sonarqube-sonarqube

helm version
# install SonarQube with Helm:
helm upgrade --install sonarqube sonarqube --repo https://charts.kubesphere.io/main -n kubesphere-devops-system  --create-namespace --set service.type=NodePort
# if the command above fails, it is because Helm 3 is required
# install Helm 3 (if the script cannot be downloaded, create the file by hand):
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash


[root@k8s-node1 vagrant]# helm upgrade --install sonarqube sonarqube --repo https://charts.kubesphere.io/main -n kubesphere-devops-system  --create-namespace --set service.type=NodePort
Release "sonarqube" does not exist. Installing it now.
NAME: sonarqube
LAST DEPLOYED: Sun Oct 10 08:07:24 2021
NAMESPACE: kubesphere-devops-system
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  export NODE_PORT=$(kubectl get --namespace kubesphere-devops-system -o jsonpath="{.spec.ports[0].nodePort}" services sonarqube-sonarqube)
  export NODE_IP=$(kubectl get nodes --namespace kubesphere-devops-system -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT
  
[root@k8s-node1 vagrant]#   export NODE_PORT=$(kubectl get --namespace kubesphere-devops-system -o jsonpath="{.spec.ports[0].nodePort}" services sonarqube-sonarqube)
[root@k8s-node1 vagrant]#   export NODE_IP=$(kubectl get nodes --namespace kubesphere-devops-system -o jsonpath="{.items[0].status.addresses[0].address}")
[root@k8s-node1 vagrant]#   echo http://$NODE_IP:$NODE_PORT
http://192.168.56.100:30276


kubectl get pod -n kubesphere-devops-system | grep sonarqube-sonarqube
