Deploying a MySQL Cluster on Kubernetes (k8s)

Steps

I. Image Versions

  • MySQL:8.0.18
  • xtrabackup:8.0.9

The MySQL and xtrabackup versions must match!

See the official documentation for the compatibility details.

Build the xtrabackup image (optional)

FROM centos:7
 
ADD percona-xtrabackup-80-8.0.9-1.el7.x86_64.rpm /
 
RUN rpm --rebuilddb && \
    yum -y install wget hostname mariadb && \
    yum -y install nmap-ncat.x86_64 && \
    yum -y localinstall percona-xtrabackup-80-8.0.9-1.el7.x86_64.rpm
RUN rm -rf percona-xtrabackup-80-8.0.9-1.el7.x86_64.rpm
 
EXPOSE 3307
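
If you build this image yourself, tag it and push it to a registry your cluster can pull from. The registry path and tag below are placeholders, not values from this article:

# Build and push the custom xtrabackup image (registry/tag are examples)
docker build -t registry.example.com/tools/xtrabackup:8.0.9 .
docker push registry.example.com/tools/xtrabackup:8.0.9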

II. Build the MySQL Cluster (One Primary, Two Replicas)

1. Deploy the NFS server

# Install NFS on the node that will act as the storage server
yum -y install nfs-utils

# Create the NFS export directory
mkdir -p /nfs/mysql

# Add the export entry
echo '/nfs/mysql *(rw,sync,no_root_squash)' >> /etc/exports

# Restart the NFS services
systemctl restart rpcbind.service
systemctl restart nfs-utils.service
systemctl restart nfs-server.service

# Enable the NFS server on boot
systemctl enable nfs-server.service

# Verify that the NFS server can be reached
showmount -e 10.10.10.90
# Output like the following means it works:
# Export list for 10.10.10.90:
# /nfs/mysql *
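
Every Kubernetes node that will mount these volumes also needs the NFS client utilities, and a manual mount is a quick sanity check. A minimal sketch (the mount point is just an example):

# On every k8s node (CentOS assumed, as in the rest of this article)
yum -y install nfs-utils

# Optional: mount the export by hand, then unmount
mkdir -p /mnt/nfs-test
mount -t nfs 10.10.10.90:/nfs/mysql /mnt/nfs-test
umount /mnt/nfs-test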

2. Create a StorageClass for dynamic storage (PVs are provisioned automatically)

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: kube-system

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-provisioner-01
  namespace: kube-system
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-provisioner-01
  template:
    metadata:
      labels:
        app: nfs-provisioner-01
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: vbouchaud/nfs-client-provisioner:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-provisioner-01  # provisioner name, referenced by the StorageClass below
            - name: NFS_SERVER
              value: 10.10.10.90   # NFS server address
            - name: NFS_PATH
              value: /nfs/mysql   # NFS export path
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.10.10.90   # NFS server address
            path: /nfs/mysql   # NFS export path

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs
provisioner: nfs-provisioner-01
# Supported reclaim policies: Delete, Retain; the default is Delete
reclaimPolicy: Retain
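
Apply the manifest and confirm that the provisioner Pod is running and the StorageClass exists (the file name is an assumption):

kubectl apply -f nfs-storageclass.yaml

kubectl get pods -n kube-system -l app=nfs-provisioner-01
kubectl get storageclass nfs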

3. Create the ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  labels:
    app: mysql
data:
  primary.cnf: |
    # Apply this config only on the primary node
    [mysqld]
    log-bin
    default_authentication_plugin=mysql_native_password
  replica.cnf: |
    # Apply this config only on replica nodes
    [mysqld]
    super-read-only
    default_authentication_plugin=mysql_native_password

4. Create a Secret with the cluster password

# Secret holding the MySQL cluster password
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
  labels:
    app: mysql
type: Opaque # for the Opaque type, values under data must be base64-encoded
data:
  password: Y1dAY3do # base64-encoded password: echo -n 'cW@cwh' | base64
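
After applying the Secret you can check that the stored value decodes back to the intended password (the manifest file name is an assumption):

kubectl apply -f mysql-secret.yaml

# Should print cW@cwh
kubectl get secret mysql-secret -o jsonpath='{.data.password}' | base64 -d; echo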

5. Create the Services

A headless Service is needed so the StatefulSet members get stable network identities (DNS names).

# Headless service for stable DNS entries of StatefulSet members.
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  clusterIP: None
  ports:
  - port: 3306
  selector:
    app: mysql
---
# Client service for connecting to any MySQL instance for reads.
# For writes, you must instead connect to the primary: mysql-0.mysql.
apiVersion: v1
kind: Service
metadata:
  name: mysql-read
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
    nodePort: 30036
  selector:
    app: mysql
  type: NodePort
---
# Exposes the primary node for external connections
apiVersion: v1
kind: Service
metadata:
  name: mysql-readwrite
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
    nodePort: 30306
  selector:
    statefulset.kubernetes.io/pod-name: mysql-0 
  type: NodePort
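
Once the StatefulSet from step 6 is running, these NodePorts give you external entry points: 30306 always lands on the primary (mysql-0) for writes, and 30036 load-balances reads across all members. A quick connectivity check, with <node-ip> standing in for any node's address:

# Writes: the primary, via the mysql-readwrite Service
mysql -h <node-ip> -P 30306 -uroot -p

# Reads: any member, via the mysql-read Service
mysql -h <node-ip> -P 30036 -uroot -p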


6. Create the StatefulSet that builds the MySQL cluster

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
    spec:
      initContainers:
        - name: init-mysql
          image: mysql:8.0.18
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: password
          command:
            - bash
            - "-c"
            - |
              set -ex
              # Generate the server-id from the Pod's ordinal index
              [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
              ordinal=${BASH_REMATCH[1]}
              echo [mysqld] > /mnt/conf.d/server-id.cnf
              # server-id must not be 0, so offset the ordinal by 100
              echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
              # Ordinal 0 is the primary: copy the primary config from the ConfigMap into /mnt/conf.d;
              # otherwise copy the replica config
              if [[ $ordinal -eq 0 ]]; then
                cp /mnt/config-map/primary.cnf /mnt/conf.d/
              else
                cp /mnt/config-map/replica.cnf /mnt/conf.d/
              fi
          volumeMounts:
            - name: conf
              mountPath: /mnt/conf.d
            - name: config-map
              mountPath: /mnt/config-map
        - name: clone-mysql
          image: jstang/xtrabackup:2.3  # xtrabackup 2.3 cannot back up MySQL 8.0; use the xtrabackup 8.0 image built in section I so the versions match
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: password
          command:
            - bash
            - "-c"
            - |
              set -ex
              # Cloning is only needed on first start, so skip it if data already exists
              [[ -d /var/lib/mysql/mysql ]] && exit 0
              # The primary (ordinal 0) does not need to clone anything
              [[ $(hostname) =~ -([0-9]+)$ ]] || exit 1
              ordinal=${BASH_REMATCH[1]}
              [[ $ordinal == 0 ]] && exit 0
              # Use ncat to stream a copy of the data from the previous Pod
              ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
              # Run --prepare so the cloned data can be used for recovery
              xtrabackup --prepare --target-dir=/var/lib/mysql
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
              subPath: mysql
            - name: conf
              mountPath: /etc/mysql/conf.d
      containers:
        - name: mysql
          image: mysql:8.0.18
          env:
            # - name: MYSQL_ALLOW_EMPTY_PASSWORD
            # value: "1"
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: password
          ports:
            - name: mysql
              containerPort: 3306
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
              subPath: mysql
            - name: conf
              mountPath: /etc/mysql/conf.d
          resources:
            requests:
              cpu: 500m
              memory: 1Gi
          livenessProbe:
            exec:
              command:
                - /bin/sh
                - "-c"
                - MYSQL_PWD="${MYSQL_ROOT_PASSWORD}" mysqladmin ping
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
          readinessProbe:
            exec:
              # Check we can execute queries over TCP (skip-networking is off).
              command:
                - /bin/sh
                - "-c"
                - MYSQL_PWD="${MYSQL_ROOT_PASSWORD}" mysql -h 127.0.0.1 -u root -e "SELECT 1"
            initialDelaySeconds: 5
            periodSeconds: 2
            timeoutSeconds: 1
        - name: xtrabackup
          image: jstang/xtrabackup:2.3  # see the note above: use an xtrabackup 8.0 image for MySQL 8.0
          ports:
            - name: xtrabackup
              containerPort: 3307
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: password
          command:
            - bash
            - "-c"
            - |
              set -ex
              cd /var/lib/mysql

              # Read MASTER_LOG_FILE and MASTER_LOG_POS from the backup info files to build the replication-init SQL
              if [[ -f xtrabackup_slave_info ]]; then
                # If xtrabackup_slave_info exists, this backup came from another replica.
                # In that case XtraBackup already wrote the "CHANGE MASTER TO" statement into this file,
                # so just rename it to change_master_to.sql.in and use it later.
                mv xtrabackup_slave_info change_master_to.sql.in
                # xtrabackup_binlog_info is not needed in this case
                rm -f xtrabackup_binlog_info
              elif [[ -f xtrabackup_binlog_info ]]; then
                # If only xtrabackup_binlog_info exists, the backup came from the primary; parse it to extract the two fields
                [[ $(cat xtrabackup_binlog_info) =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
                rm xtrabackup_binlog_info
                # Assemble the two values into SQL and write it to change_master_to.sql.in
                echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                      MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
              fi
              # If change_master_to.sql.in exists, replication still needs to be initialized
              if [[ -f change_master_to.sql.in ]]; then
                # Wait until the MySQL container is up before connecting to it
                echo "Waiting for mysqld to be ready (accepting connections)"
                until mysql -h 127.0.0.1 -uroot -p${MYSQL_ROOT_PASSWORD} -e "SELECT 1"; do sleep 1; done
                echo "Initializing replication from clone position"
                # Rename change_master_to.sql.in so that a container restart
                # does not find it again and re-run the initialization
                mv change_master_to.sql.in change_master_to.sql.orig
                # Combine the SQL assembled above (now in change_master_to.sql.orig) into a full statement
                # that configures and starts the replica
                mysql -h 127.0.0.1 -uroot -p${MYSQL_ROOT_PASSWORD} << EOF
              $(< change_master_to.sql.orig),
                MASTER_HOST='mysql-0.mysql',
                MASTER_USER='root',
                MASTER_PASSWORD='${MYSQL_ROOT_PASSWORD}',
                MASTER_CONNECT_RETRY=10;
              START SLAVE;
              EOF
              fi
              # Listen on port 3307 with ncat.
              # When a transfer request comes in, run xtrabackup --backup and stream the data to the requester
              exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
                "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root --password=${MYSQL_ROOT_PASSWORD}"
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
              subPath: mysql
            - name: conf
              mountPath: /etc/mysql/conf.d
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
      volumes:
        - name: conf
          emptyDir: {}
        - name: config-map
          configMap:
            name: mysql
  volumeClaimTemplates:
    - metadata:
        name: data
        annotations:
          volume.beta.kubernetes.io/storage-class: nfs # must match the StorageClass name created in step 2
      spec:
        accessModes: ["ReadWriteOnce"]
        # storageClassName: "nfs" did not work in this setup; the class had to be passed via the annotation instead (reason unknown)
        resources:
          requests:
            storage: 1Gi
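
Apply the StatefulSet and watch the Pods come up. Because of the StatefulSet ordering guarantees, mysql-0 must be Running and Ready before mysql-1 starts cloning from it (the manifest file name is an assumption):

kubectl apply -f mysql-statefulset.yaml

# Pods are created one at a time: mysql-0, then mysql-1, then mysql-2
kubectl get pods -l app=mysql --watch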

7. Verify the deployment

kubectl get pod,svc -l app=mysql

[screenshot: mysql Pods and Services running]
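
As an extra check, the PVCs created from the volumeClaimTemplates should all be Bound to PVs provisioned on the NFS share:

# Expect data-mysql-0, data-mysql-1 and data-mysql-2, all Bound
kubectl get pvc
kubectl get pv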

III. Set Up Primary-Replica Replication

Because the Pods are created by a StatefulSet, each Pod has a stable, fixed domain name.

A StatefulSet can use a headless Service to control the DNS domain of its Pods. The domain managed by that Service has the form $(service name).$(namespace).svc.cluster.local, where cluster.local is the cluster domain. As each Pod is created, it gets a matching DNS subdomain of the form $(pod name).$(governing service domain), where the governing service is set by the serviceName field of the StatefulSet.

So the primary node, mysql-0, has the domain name mysql-0.mysql.default.svc.cluster.local (replace default with your own namespace, e.g. mysql-test, if the cluster is not deployed in the default namespace).

1. Connect to the primary (master) node

[root@master ~]# kubectl exec -it mysql-0 -c mysql -n mysql-test /bin/bash
root@mysql-0:/# mysql -uroot -p

# You can create a dedicated replication user here (see the example after this block), or simply replicate with the root user.

# Check the status of the primary node
mysql> show master status \G;
*************************** 1. row ***************************
             File: mysql-0-bin.000004
         Position: 1550
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 
1 row in set (0.00 sec)
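
If you prefer a dedicated replication account over root, something along these lines can be created on the primary first; the user name and password here are placeholders, not values from this article:

mysql> CREATE USER 'repl'@'%' IDENTIFIED WITH mysql_native_password BY 'Repl@123';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
mysql> FLUSH PRIVILEGES;

Then use that user and password in the change master to statement on the replicas instead of root.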

2. Connect to a replica (slave) node

[root@master ~]# kubectl exec -it mysql-1 -c mysql -n mysql-test /bin/bash
root@mysql-1:/# mysql -uroot -p


### Once the replica is ready, point it at the primary ###

# Configure the connection to the primary (connect by DNS name: Pod IPs change on restart, which would break replication)
mysql> change master to master_host='mysql-0.mysql.default.svc.cluster.local',master_user='root',master_password='cW@cwh',master_log_file='mysql-0-bin.000004',master_log_pos=0,master_port=3306;
Query OK, 0 rows affected, 2 warnings (15.15 sec)

# Start replication on the replica
mysql> start slave;
Query OK, 0 rows affected (0.14 sec)

# Check the replica status
mysql> show slave status \G;
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: mysql-0.mysql-headless.mysql-test.svc.cluster.local
                  Master_User: root
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-0-bin.000004
          Read_Master_Log_Pos: 1722
               Relay_Log_File: mysql-1-relay-bin.000002
                Relay_Log_Pos: 373
        Relay_Master_Log_File: mysql-0-bin.000004
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
 
# When both Slave_IO_Running and Slave_SQL_Running show Yes, replication is working.
# If it does not come up, try the following commands and check again until both values are Yes:
mysql> stop slave;
Query OK, 0 rows affected, 1 warning (2.13 sec)
 
mysql> reset slave;
Query OK, 0 rows affected, 1 warning (13.27 sec)
 
mysql> start slave;
Query OK, 0 rows affected, 1 warning (2.87 sec)
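
As a final check, write something on the primary and confirm it shows up on a replica. This sketch reuses the password and namespace used elsewhere in this article:

# Create a throwaway database on the primary
kubectl exec mysql-0 -c mysql -n mysql-test -- mysql -uroot -p'cW@cwh' -e "CREATE DATABASE repl_test;"

# It should appear on the replica almost immediately
kubectl exec mysql-1 -c mysql -n mysql-test -- mysql -uroot -p'cW@cwh' -e "SHOW DATABASES LIKE 'repl_test';"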

References

Deploying a primary-replica MySQL cluster (8.0) on K8S with StatefulSet

StatefulSet

Troubleshooting

The other two master nodes cannot use the Kubernetes API

[screenshot: kubectl failing to reach the API server from the other master nodes]

Solution

Sync the kubeconfig file to the other two machines.

On Linux the config file lives at /root/.kube/config.

rsync -av /root/.kube/config root@10.10.10.92:/root/.kube/

Then SSH into each of those machines and change the server field in the config to that machine's own IP.
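
For example, on 10.10.10.92 something like the following rewrites the API server address; the addresses and the 6443 port are assumptions, adjust them to your cluster:

# Point kubectl at the local API server
sed -i 's#server: https://10.10.10.90:6443#server: https://10.10.10.92:6443#' /root/.kube/config
kubectl get nodes   # should now work from this node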

DNS resolution fails inside Pods

Inside the Pod, the nameserver in /etc/resolv.conf does not match the kube-dns Service IP.

[screenshots: nameserver in the Pod's /etc/resolv.conf vs. the kube-dns Service CLUSTER-IP]

Cause

This can happen when the k8s cluster was reinstalled but the kubelet on this node was still started with the old cluster's configuration, i.e. /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the k8s components are managed by systemd, so the unit files live under /etc/systemd/system). The kubelet's --cluster-dns argument still pointed at the old cluster's DNS IP, so DNS resolution inside Pods failed.

Solution

Perform this on every master node!

The kubelet parameters are in /var/lib/kubelet/config.yaml.

  1. Edit /var/lib/kubelet/config.yaml with vim and set clusterDNS to the CLUSTER-IP of the kube-dns Service (see the sketch after this list for how to look it up).

    [screenshot: clusterDNS setting in /var/lib/kubelet/config.yaml]

  2. Restart kubelet

    systemctl stop kubelet.service

    systemctl daemon-reload

    systemctl start kubelet.service
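
A minimal sketch of the lookup and the resulting setting; the 10.96.0.10 address is only an example, use the CLUSTER-IP your cluster actually reports:

# Find the kube-dns Service IP
kubectl get svc -n kube-system kube-dns

# /var/lib/kubelet/config.yaml should then contain, for example:
# clusterDNS:
# - 10.96.0.10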

Reference

k8s DNS resolution error: the nameserver in the Pod's /etc/resolv.conf does not match the kube-dns IP

MySQL error: Authentication plugin 'caching_sha2_password' cannot be loaded

Cause

MySQL 8.0's default authentication plugin is caching_sha2_password.

The error means caching_sha2_password was used for authentication but a required file for that plugin could not be found.

Solution

Add the following setting to the ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  labels:
    app: mysql
data:
  primary.cnf: |
    # Apply this config only on the primary.
    [mysqld]
    log-bin
    default_authentication_plugin=mysql_native_password # add this line!
  replica.cnf: |
    # Apply this config only on replicas.
    [mysqld]
    super-read-only
    default_authentication_plugin=mysql_native_password # add this line!
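
Re-apply the ConfigMap and restart the Pods so the setting takes effect, since the init container only copies the config when a Pod starts. The manifest file name is an assumption:

kubectl apply -f mysql-configmap.yaml

# Restart the StatefulSet so each Pod re-reads the new config
kubectl rollout restart statefulset mysql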

Reference

Tips to use MySQL 8.0 on Kubernetes
