K8S: A One-Master, One-Slave MySQL Setup Backed by Local Storage

To build a simple one-master, one-slave MySQL setup on local storage, we can work through the following steps: configure a PersistentVolume (PV) and a PersistentVolumeClaim (PVC), then deploy MySQL in a master-slave arrangement.

Step 1: Create the PV and PVC

1. Create the PV YAML file: create a file named local-pv.yaml with the following content:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  hostPath:
    path: /mnt/data/mysql
    type: DirectoryOrCreate
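
hostPath is fine for the single-node lab cluster used later in this post, but it does not tie the PV to any particular node, so on a multi-node cluster a Pod could land on a node where /mnt/data/mysql holds nothing. A possible alternative, only a sketch and not part of the setup above, is a local volume with nodeAffinity; unlike DirectoryOrCreate the directory must already exist on the node, and the node name kube-master.local is simply taken from this cluster:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/data/mysql
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - kube-master.local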

2. Create the StorageClass YAML file: create a file named local-storageclass.yaml with the following content:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
allowVolumeExpansion: true

Note that reclaimPolicy is set to Retain here so that it does not conflict with the persistentVolumeReclaimPolicy already declared on the PV.

3. Create the PVC YAML file: create a file named mysql-pvc.yaml with the following content:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
  namespace: mysql
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: local-storage
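
A single ReadWriteOnce claim can only safely back one MySQL instance, so a genuine one-master, one-slave layout needs a second PV/PVC pair for the slave. A sketch of what that could look like (the names local-pv-slave and mysql-pvc-slave and the path /mnt/data/mysql-slave are placeholders, not part of the original files; spec.volumeName on the PVC can pin the claim to a specific PV if a deterministic pairing is needed):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-slave
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  hostPath:
    path: /mnt/data/mysql-slave
    type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc-slave
  namespace: mysql
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: local-storage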

Step 2: Apply the configuration

1. Apply the PV: apply the PV configuration with kubectl.

kubectl apply -f local-pv.yaml

2. Apply the StorageClass: apply the StorageClass configuration with kubectl.

kubectl apply -f local-storageclass.yaml

3. Create the mysql namespace (the PVC and the MySQL resources below live in it):

kubectl create namespace mysql

4. Apply the PVC: apply the PVC configuration with kubectl.

kubectl apply -f mysql-pvc.yaml
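
A quick check that the claim behaves as expected; because the StorageClass uses WaitForFirstConsumer, the PVC may stay Pending until the first Pod that uses it is scheduled in Step 4, after which it should show Bound:

kubectl get pv local-pv
kubectl get pvc mysql-pvc -n mysql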

Step 3: Deploy the MySQL master-slave pair

1. Create the Deployment YAML file: create a file named mysql-deployment.yaml with the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  namespace: mysql
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: MMmm@23$$3##HHhjj$35!HJKGFgjhsswbGFHJ4dfrfrFGHHHG
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: mysql-persistent-storage
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pvc
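
One thing worth noting: the root password sits in the Deployment in plain text, so anyone who can read the Deployment can read it. A common alternative, sketched here with an arbitrarily chosen Secret name mysql-root-password, is to keep the password in a Secret:

kubectl create secret generic mysql-root-password -n mysql \
  --from-literal=password='MMmm@23$$3##HHhjj$35!HJKGFgjhsswbGFHJ4dfrfrFGHHHG'

and reference it from the container's env section instead of a literal value:

        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-root-password
              key: password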

2. Create the Service YAML file: create a file named mysql-service.yaml with the following content:

apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  namespace: mysql
spec:
  ports:
  - port: 3306
    targetPort: 3306
  selector:
    app: mysql
  clusterIP: None
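
Because clusterIP is None, this is a headless Service: there is no virtual ClusterIP, and DNS resolves the Service name directly to the ready Pod IPs, which is the name the replication step further down points MASTER_HOST at. A quick in-cluster connectivity check could look like this (a throwaway client Pod; assumes the default cluster.local DNS suffix):

kubectl run mysql-client --rm -it --restart=Never -n mysql --image=mysql:5.7 -- \
  mysql -h mysql-service.mysql.svc.cluster.local -u root -p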

Step 4: Apply the Deployment and Service

1. Apply the Deployment: apply the Deployment configuration with kubectl.

kubectl apply -f mysql-deployment.yaml

2. Apply the Service: apply the Service configuration with kubectl.

kubectl apply -f mysql-service.yaml
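
Before moving on, a quick sanity check that everything came up is worthwhile (the output will of course differ per cluster):

kubectl get pods -n mysql -o wide
kubectl get svc -n mysql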

BUG 1: suspected Pod / PVC problem

One of the MySQL Pods never comes up:

[root@kube-master ~]# kubectl get pods -A -o wide
NAMESPACE      NAME                                        READY   STATUS             RESTARTS          AGE     IP              NODE                NOMINATED NODE   READINESS GATES
kube-flannel   kube-flannel-ds-6dhkd                       1/1     Running            0                 3d16h   172.19.19.134   kube-master.local   <none>           <none>
kube-system    coredns-7db6d8ff4d-c4ln5                    1/1     Running            0                 3d16h   10.244.0.3      kube-master.local   <none>           <none>
kube-system    coredns-7db6d8ff4d-d4sb7                    1/1     Running            0                 3d16h   10.244.0.2      kube-master.local   <none>           <none>
kube-system    etcd-kube-master.local                      1/1     Running            0                 3d16h   172.19.19.134   kube-master.local   <none>           <none>
kube-system    kube-apiserver-kube-master.local            1/1     Running            0                 3d16h   172.19.19.134   kube-master.local   <none>           <none>
kube-system    kube-controller-manager-kube-master.local   1/1     Running            0                 3d16h   172.19.19.134   kube-master.local   <none>           <none>
kube-system    kube-proxy-d4vhd                            1/1     Running            0                 3d16h   172.19.19.134   kube-master.local   <none>           <none>
kube-system    kube-scheduler-kube-master.local            1/1     Running            0                 3d16h   172.19.19.134   kube-master.local   <none>           <none>
mysql          mysql-deployment-57f94cdc84-nbkwg           1/1     Running            2 (3d15h ago)     3d16h   10.244.0.5      kube-master.local   <none>           <none>
mysql          mysql-deployment-57f94cdc84-wwf8v           0/1     CrashLoopBackOff   779 (4m21s ago)   3d16h   10.244.0.4      kube-master.local   <none>           <none>
[root@kube-master ~]# 

Describe the failing Pod for details:

[root@kube-master ~]# 
[root@kube-master ~]# kubectl describe pod mysql-deployment-57f94cdc84-wwf8v -n mysql
Name:             mysql-deployment-57f94cdc84-wwf8v
Namespace:        mysql
Priority:         0
Service Account:  default
Node:             kube-master.local/172.19.19.134
Start Time:       Wed, 21 Aug 2024 00:47:41 +0800
Labels:           app=mysql
                  pod-template-hash=57f94cdc84
Annotations:      <none>
Status:           Running
IP:               10.244.0.4
IPs:
  IP:           10.244.0.4
Controlled By:  ReplicaSet/mysql-deployment-57f94cdc84
Containers:
  mysql:
    Container ID:   containerd://3815109a74807da3e3ed5126aa35a09d7ca3130082edb7fa00d6be7d0f68d03c
    Image:          mysql:5.7
    Image ID:       docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
    Port:           3306/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sat, 24 Aug 2024 16:50:12 +0800
      Finished:     Sat, 24 Aug 2024 16:51:54 +0800
    Ready:          False
    Restart Count:  780
    Environment:
      MYSQL_ROOT_PASSWORD:  MMmm@23$$3##HHhjj$35!HJKGFgjhsswbGFHJ4dfrfrFGHHHG
    Mounts:
      /var/lib/mysql from mysql-persistent-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vcbft (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mysql-persistent-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mysql-pvc
    ReadOnly:   false
  kube-api-access-vcbft:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason   Age                        From     Message
  ----     ------   ----                       ----     -------
  Warning  BackOff  4m33s (x18217 over 3d16h)  kubelet  Back-off restarting failed container mysql in pod mysql-deployment-57f94cdc84-wwf8v_mysql(9f183d96-3de1-400f-b2c3-6ed70feecfae)
[root@kube-master ~]# 

Check this Pod's logs:

kubectl logs mysql-deployment-57f94cdc84-wwf8v -n mysql -c mysql

2024-08-24T08:51:46.607281Z 0 [ERROR] InnoDB: Unable to lock ./ibdata1 error: 11
2024-08-24T08:51:46.607306Z 0 [Note] InnoDB: Check that you do not already have another mysqld process using the same InnoDB data or log files.
2024-08-24T08:51:47.607409Z 0 [ERROR] InnoDB: Unable to lock ./ibdata1 error: 11
2024-08-24T08:51:47.607433Z 0 [Note] InnoDB: Check that you do not already have another mysqld process using the same InnoDB data or log files.
2024-08-24T08:51:48.607543Z 0 [ERROR] InnoDB: Unable to lock ./ibdata1 error: 11
2024-08-24T08:51:48.607567Z 0 [Note] InnoDB: Check that you do not already have another mysqld process using the same InnoDB data or log files.
2024-08-24T08:51:49.607665Z 0 [ERROR] InnoDB: Unable to lock ./ibdata1 error: 11
2024-08-24T08:51:49.607691Z 0 [Note] InnoDB: Check that you do not already have another mysqld process using the same InnoDB data or log files.
2024-08-24T08:51:50.607789Z 0 [ERROR] InnoDB: Unable to lock ./ibdata1 error: 11
2024-08-24T08:51:50.607811Z 0 [Note] InnoDB: Check that you do not already have another mysqld process using the same InnoDB data or log files.
2024-08-24T08:51:51.607910Z 0 [ERROR] InnoDB: Unable to lock ./ibdata1 error: 11
2024-08-24T08:51:51.607936Z 0 [Note] InnoDB: Check that you do not already have another mysqld process using the same InnoDB data or log files.
2024-08-24T08:51:52.608041Z 0 [ERROR] InnoDB: Unable to lock ./ibdata1 error: 11
2024-08-24T08:51:52.608065Z 0 [Note] InnoDB: Check that you do not already have another mysqld process using the same InnoDB data or log files.
2024-08-24T08:51:53.608163Z 0 [ERROR] InnoDB: Unable to lock ./ibdata1 error: 11
2024-08-24T08:51:53.608185Z 0 [Note] InnoDB: Check that you do not already have another mysqld process using the same InnoDB data or log files.
2024-08-24T08:51:53.608192Z 0 [Note] InnoDB: Unable to open the first data file
2024-08-24T08:51:53.608221Z 0 [ERROR] InnoDB: Operating system error number 11 in a file operation.
2024-08-24T08:51:53.608263Z 0 [ERROR] InnoDB: Error number 11 means 'Resource temporarily unavailable'
2024-08-24T08:51:53.608270Z 0 [Note] InnoDB: Some operating system error numbers are described at http://dev.mysql.com/doc/refman/5.7/en/operating-system-error-codes.html
2024-08-24T08:51:53.608275Z 0 [ERROR] InnoDB: Cannot open datafile './ibdata1'
2024-08-24T08:51:53.608282Z 0 [ERROR] InnoDB: Could not open or create the system tablespace. If you tried to add new data files to the system tablespace, and it failed here, you should now edit innodb_data_file_path in my.cnf back to what it was, and remove the new ibdata files InnoDB created in this failed attempt. InnoDB only wrote those files full of zeros, but did not yet use them in any way. But be careful: do not remove old data files which contain your precious data!
2024-08-24T08:51:53.608295Z 0 [ERROR] InnoDB: Plugin initialization aborted with error Cannot open a file
2024-08-24T08:51:54.208738Z 0 [ERROR] Plugin 'InnoDB' init function returned error.
2024-08-24T08:51:54.208779Z 0 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
2024-08-24T08:51:54.208794Z 0 [ERROR] Failed to initialize builtin plugins.
2024-08-24T08:51:54.208799Z 0 [ERROR] Aborting

2024-08-24T08:51:54.208940Z 0 [Note] Binlog end
2024-08-24T08:51:54.209149Z 0 [Note] Shutting down plugin 'CSV'
2024-08-24T08:51:54.209753Z 0 [Note] mysqld: Shutdown complete

[root@kube-master ~]# 

Check the state of the data volume

Confirm whether the persistent volume claim mysql-pvc looks healthy:

[root@kube-master ~]# kubectl describe pvc mysql-pvc -n mysql
Name:          mysql-pvc
Namespace:     mysql
StorageClass:  local-storage
Status:        Bound
Volume:        local-pv
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      10Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       mysql-deployment-57f94cdc84-nbkwg
               mysql-deployment-57f94cdc84-wwf8v
Events:        <none>
[root@kube-master ~]# 

Analysis

From the describe output of the PVC mysql-pvc, the claim is Bound and is being used by two Pods at the same time:

mysql-deployment-57f94cdc84-nbkwg
mysql-deployment-57f94cdc84-wwf8v

In other words, both replicas of the Deployment mount one and the same hostPath-backed volume. For a database like MySQL this is exactly the failure shown in the logs above: the first mysqld acquires the InnoDB lock on ./ibdata1, and the second one can never obtain it ("Unable to lock ./ibdata1 error: 11") and crash-loops. Each MySQL instance needs its own volume, so a persistent volume like this should be used by only one Pod to avoid concurrent access and inconsistent data.
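
For completeness: the standard way to run several MySQL Pods while still giving each its own volume is a StatefulSet with volumeClaimTemplates, which creates a dedicated PVC per replica (here they would be named mysql-data-mysql-0 and mysql-data-mysql-1) instead of every replica sharing mysql-pvc. This is only a sketch of that alternative, not what is deployed in this post, and it assumes one local PV per replica has been provisioned:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
  namespace: mysql
spec:
  serviceName: mysql-service
  replicas: 2
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: MMmm@23$$3##HHhjj$35!HJKGFgjhsswbGFHJ4dfrfrFGHHHG
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: mysql-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: local-storage
      resources:
        requests:
          storage: 10Gi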

Solution

1. Check the Deployment's replica count and its rolling update strategy. Too many replicas, or an update strategy that keeps an old Pod and a new Pod alive at the same time, both end with several Pods using the same persistent volume:

kubectl describe deployment mysql-deployment -n mysql

2. Scale the Deployment down to one replica so that only a single Pod uses the volume:

kubectl scale deployment/mysql-deployment --replicas=1 -n mysql

3. Clean up the extra Pod and redeploy, making sure only one Pod ends up on the volume.

Delete the extra Pod:

kubectl delete pod mysql-deployment-57f94cdc84-nbkwg -n mysql

Restart the rollout:

kubectl rollout restart deployment/mysql-deployment -n mysql
[root@kube-master ~]# 
[root@kube-master ~]# kubectl describe deployment mysql-deployment -n mysql
Name:                   mysql-deployment
Namespace:              mysql
CreationTimestamp:      Wed, 21 Aug 2024 00:42:38 +0800
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 2
Selector:               app=mysql
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:       app=mysql
  Annotations:  kubectl.kubernetes.io/restartedAt: 2024-08-24T17:14:32+08:00
  Containers:
   mysql:
    Image:      mysql:5.7
    Port:       3306/TCP
    Host Port:  0/TCP
    Environment:
      MYSQL_ROOT_PASSWORD:  MMmm@23$$3##HHhjj$35!HJKGFgjhsswbGFHJ4dfrfrFGHHHG
    Mounts:
      /var/lib/mysql from mysql-persistent-storage (rw)
  Volumes:
   mysql-persistent-storage:
    Type:          PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:     mysql-pvc
    ReadOnly:      false
  Node-Selectors:  <none>
  Tolerations:     <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  mysql-deployment-57f94cdc84 (0/0 replicas created)
NewReplicaSet:   mysql-deployment-7fc6f5694b (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  16m   deployment-controller  Scaled down replica set mysql-deployment-57f94cdc84 to 1 from 2
  Normal  ScalingReplicaSet  15m   deployment-controller  Scaled up replica set mysql-deployment-7fc6f5694b to 1
  Normal  ScalingReplicaSet  15m   deployment-controller  Scaled down replica set mysql-deployment-57f94cdc84 to 0 from 1
[root@kube-master ~]# 
[root@kube-master ~]# 
[root@kube-master ~]# 
When connecting to the database with Navicat, this error comes back:

1130 - Host '10.244.0.1' is not allowed to connect to this MySQL server

1. Connect to the MySQL Pod: first, exec into the MySQL Pod:

kubectl exec -it $(kubectl get pods -n mysql -l app=mysql -o jsonpath='{.items[0].metadata.name}') -n mysql -- mysql -u root -p

2. Grant Permissions: Once connected to the MySQL server, grant the necessary permissions to the user. Replace root with the appropriate username if needed.

-- Grant permissions to the root user from any host
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'MMmm@23$$3##HHhjj$35!HJKGFgjhsswbGFHJ4dfrfrFGHHHG' WITH GRANT OPTION;

-- Flush the privileges to ensure that they are applied
FLUSH PRIVILEGES;
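
With a headless Service there is no external IP for Navicat to reach, so a client outside the cluster still needs some route to port 3306. For quick testing, one option is a local port-forward from a machine with kubectl access, after which Navicat connects to 127.0.0.1:3306 (a NodePort or LoadBalancer Service would be the longer-lived alternative):

kubectl port-forward -n mysql deployment/mysql-deployment 3306:3306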

Step 5: Configure master-slave replication (with only one Pod left after the fix above there is nothing to replicate to; the steps below assume two separate MySQL instances and are kept for reference)

1. Log in to the master database container: exec into the master's container with kubectl.

kubectl exec -it $(kubectl get pods -n mysql -l app=mysql -o jsonpath='{.items[0].metadata.name}') -n mysql -- mysql -u root -p

kubectl exec -it mysql-deployment-57f94cdc84-nbkwg -n mysql -- mysql -u root -p

MMmm@23$$3##HHhjj$35!HJKGFgjhsswbGFHJ4dfrfrFGHHHG

2. Configure the master: run the following in the MySQL shell:

STOP SLAVE;
RESET MASTER;
SHOW MASTER STATUS;

Note the File and Position values in the output; they are needed later when configuring the slave.
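
The replication commands below reuse the root account for simplicity. In practice a dedicated, minimally privileged replication account is the usual choice; a sketch to run on the master (the user name repl and its password are placeholders):

CREATE USER 'repl'@'%' IDENTIFIED BY 'choose-a-strong-password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
FLUSH PRIVILEGES;

The CHANGE MASTER TO statement on the slave would then use MASTER_USER='repl' and that password instead of root.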

3. Exit the MySQL shell: type \q to quit.

4. Log in to the slave database container: exec into the slave's container with kubectl.

kubectl exec -it $(kubectl get pods -n mysql -l app=mysql -o jsonpath='{.items[1].metadata.name}') -n mysql -- mysql -u root -p
kubectl exec -it mysql-deployment-57f94cdc84-wwf8v -n mysql -- mysql -u root -p

MMmm@23$$3##HHhjj$35!HJKGFgjhsswbGFHJ4dfrfrFGHHHG

5. Configure the slave: run the following in the MySQL shell:

STOP SLAVE;
CHANGE MASTER TO MASTER_HOST='mysql-service', MASTER_USER='root', MASTER_PASSWORD='MMmm@23$$3##HHhjj$35!HJKGFgjhsswbGFHJ4dfrfrFGHHHG', MASTER_LOG_FILE='binlog.000001', MASTER_LOG_POS=4;
START SLAVE;
SHOW SLAVE STATUS \G

This assumes the master reported binlog.000001 for File and 4 for Position; replace these values with the ones recorded in step 2.
6. Verify the slave's status: check the slave's state and make sure replication is working; in the output, Slave_IO_Running and Slave_SQL_Running should both be Yes.

SHOW SLAVE STATUS \G

With the steps above you should be able to bring up a one-master, one-slave MySQL setup on local storage. The key points are configuring the PV and PVC correctly for local storage, giving each MySQL instance its own volume, and getting the replication settings between master and slave right.
