Configuring a highly available kube-apiserver with haproxy + keepalived
Note 1:
The kubernetes master nodes run the following components:
kube-apiserver
kube-scheduler
kube-controller-manager
kube-scheduler and kube-controller-manager can run in cluster mode: leader election picks one working process while the other processes stay blocked.
kube-apiserver itself can run as multiple instances (three in this document), but the other components need a single, highly available address to reach it. This document uses keepalived and haproxy to provide a VIP for kube-apiserver, giving both high availability and load balancing.
What is the main difference between plain cluster mode and haproxy + keepalived? With haproxy + keepalived a VIP is configured, providing a single API access address plus load balancing; cluster mode alone has no VIP.
Note 2:
keepalived provides the VIP through which kube-apiserver is exposed.
haproxy listens on the VIP and connects to all kube-apiserver instances on the back end, providing health checks and load balancing.
Nodes running keepalived and haproxy are called LB nodes. Because keepalived runs in a one-master, multiple-backup mode, at least two LB nodes are required.
This document reuses the three master machines, so the port haproxy listens on (8443) must differ from the kube-apiserver port (6443) to avoid a conflict.
While running, keepalived periodically checks the local haproxy process. If haproxy is found to be abnormal, a new master election is triggered and the VIP floats to the newly elected master node, keeping the VIP highly available.
All components (kubectl, apiserver, controller-manager, scheduler, etc.) access the kube-apiserver service through the VIP and the haproxy port 8443.
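To make "access through the VIP" concrete, a kubeconfig cluster entry would point at the VIP and haproxy port rather than at any single apiserver node. This is only an illustrative fragment; the certificate path is a hypothetical placeholder for this cluster's CA file:

```yaml
# Hypothetical kubeconfig cluster entry; /etc/kubernetes/ssl/ca.pem is a
# placeholder path for the cluster CA certificate.
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.pem
    server: https://192.168.174.127:8443   # VIP + haproxy port, not a node IP
  name: kubernetes
```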
1. Install haproxy and keepalived. Do this on all three nodes.
yum -y install keepalived haproxy
2. Configure haproxy
Edit the configuration file
/etc/haproxy/haproxy.cfg
First back up the original file; do this on all three nodes:
[root@k8s-node1 haproxy]# mv haproxy.cfg haproxy.cfg.bk
[root@k8s-node1 haproxy]# ls
haproxy.cfg.bk
Then create a new haproxy.cfg; a reference version follows.
haproxy exposes status information on port 1080.
haproxy listens on port 8443 on all interfaces; this port must match the one in the ${KUBE_APISERVER} environment variable.
The server lines list the IP and port of every kube-apiserver instance.
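The three server lines follow a fixed pattern, so they can be generated from the node list. A small sketch using the node names and IPs from this document:

```shell
# Generate the backend "server" lines from the node list used in this document.
nodes="k8s-node1:192.168.174.128 k8s-node2:192.168.174.129 k8s-node3:192.168.174.130"
for n in $nodes; do
    name=${n%%:*}   # part before the colon: node name
    ip=${n##*:}     # part after the colon: node IP
    echo "    server $name $ip:6443 check inter 2000 fall 2"
done
```

`check inter 2000 fall 2` means haproxy probes each backend every 2000 ms and marks it down after 2 consecutive failures.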
[root@k8s-node1 haproxy]# cat haproxy.cfg
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode            tcp
    log             global
    timeout connect 10s
    timeout client  1m
    timeout server  1m
    timeout check   10s
    maxconn         3000

frontend kube-api
    bind 0.0.0.0:8443
    mode tcp
    log global
    default_backend kube-client

backend kube-client
    balance source
    server k8s-node1 192.168.174.128:6443 check inter 2000 fall 2
    server k8s-node2 192.168.174.129:6443 check inter 2000 fall 2
    server k8s-node3 192.168.174.130:6443 check inter 2000 fall 2

listen stats
    mode http
    bind 0.0.0.0:1080
    stats enable
    stats hide-version
    stats uri /haproxyadmin?stats
    stats realm Haproxy\ Statistics
    stats auth admin:admin
    stats admin if TRUE
[root@k8s-node1 haproxy]#
Distribute the configuration file to the other nodes:
[root@k8s-node1 haproxy]# scp haproxy.cfg root@k8s-node2:/etc/haproxy/
haproxy.cfg 100% 1025 942.0KB/s 00:00
[root@k8s-node1 haproxy]# scp haproxy.cfg root@k8s-node3:/etc/haproxy/
haproxy.cfg 100% 1025 820.4KB/s 00:00
[root@k8s-node1 haproxy]#
Start the haproxy service:
systemctl enable haproxy && systemctl start haproxy
Check the status:
[root@k8s-node1 haproxy]# systemctl status haproxy
● haproxy.service - HAProxy Load Balancer
Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-11-04 02:29:27 EST; 7s ago
Main PID: 3827 (haproxy-systemd)
Tasks: 3
Memory: 1.7M
CGroup: /system.slice/haproxy.service
├─3827 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
├─3828 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
└─3829 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
Nov 04 02:29:27 k8s-node1 systemd[1]: Started HAProxy Load Balancer.
Nov 04 02:29:27 k8s-node1 haproxy-systemd-wrapper[3827]: haproxy-systemd-wrapper: executing /usr/sbin/haproxy -f /etc/haproxy/ha...d -Ds
Hint: Some lines were ellipsized, use -l to show in full.
[root@k8s-node1 haproxy]#
Check the listening ports:
[root@k8s-node1 haproxy]# netstat -tlnp |grep haproxy
tcp 0 0 0.0.0.0:1080 0.0.0.0:* LISTEN 3829/haproxy
tcp 0 0 0.0.0.0:8443 0.0.0.0:* LISTEN 3829/haproxy
3. Configure keepalived
keepalived runs in a one-master (MASTER), multiple-backup (BACKUP) mode, so there are two kinds of configuration file.
There is exactly one master configuration; the number of backup configurations depends on the node count. The plan for this document:
master: 192.168.174.128
backup: 192.168.174.129, 192.168.174.130
Back up the original configuration file; do this on all three nodes.
[root@k8s-node1 keepalived]# pwd
/etc/keepalived
[root@k8s-node1 keepalived]# mv keepalived.conf keepalived.conf.bk
Master node configuration file; a reference version follows.
[root@k8s-node1 keepalived]# cat keepalived.conf
global_defs {
    router_id NodeA
}
vrrp_script chk_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 5
    weight 2
}
vrrp_instance VI_1 {
    state MASTER            # this node is the master
    interface ens33         # network interface to monitor
    virtual_router_id 51    # must be identical on master and backups
    priority 100            # master gets the highest value; higher wins
    advert_int 1            # VRRP multicast advertisement interval (seconds)
    authentication {
        auth_type PASS      # VRRP auth type; must match on all nodes
        auth_pass 1111      # password
    }
    virtual_ipaddress {
        192.168.174.127/24  # the VRRP HA virtual address (VIP)
    }
    track_script {
        chk_haproxy
    }
}
Backup node configuration files; reference versions follow:
[root@k8s-node2 keepalived]# cat keepalived.conf
global_defs {
    router_id NodeA
}
vrrp_script chk_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 5
    weight 2
}
vrrp_instance VI_1 {
    state BACKUP            # this node is a backup
    interface ens33         # network interface to monitor
    virtual_router_id 51    # must be identical on master and backups
    priority 90             # lower than the master's 100
    advert_int 1            # VRRP multicast advertisement interval (seconds)
    authentication {
        auth_type PASS      # VRRP auth type; must match on all nodes
        auth_pass 1111      # password
    }
    virtual_ipaddress {
        192.168.174.127/24  # the VRRP HA virtual address (VIP)
    }
    track_script {
        chk_haproxy
    }
}
[root@k8s-node3 keepalived]# cat keepalived.conf
global_defs {
    router_id NodeA
}
vrrp_script chk_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 5
    weight 2
}
vrrp_instance VI_1 {
    state BACKUP            # this node is a backup
    interface ens33         # network interface to monitor
    virtual_router_id 51    # must be identical on master and backups
    priority 80             # lower than node2's 90
    advert_int 1            # VRRP multicast advertisement interval (seconds)
    authentication {
        auth_type PASS      # VRRP auth type; must match on all nodes
        auth_pass 1111      # password
    }
    virtual_ipaddress {
        192.168.174.127/24  # the VRRP HA virtual address (VIP)
    }
    track_script {
        chk_haproxy
    }
}
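The three keepalived files are identical except for the state and priority lines, so the backup files can be derived from the master file. A minimal demonstration of the substitution, run on a stand-in scratch file so it works anywhere (the /tmp paths are demo files, not part of the cluster setup):

```shell
# Demonstrate deriving a backup config from the master config: only the
# 'state' and 'priority' lines change. /tmp files are scratch files for
# this demo only.
printf 'state MASTER\npriority 100\n' > /tmp/ka_master_demo.conf
sed -e 's/state MASTER/state BACKUP/' \
    -e 's/priority 100/priority 90/' \
    /tmp/ka_master_demo.conf > /tmp/ka_backup_demo.conf
cat /tmp/ka_backup_demo.conf   # prints: state BACKUP / priority 90
```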
The check script. Note its name and location: they must match exactly what the keepalived configuration specifies (/etc/keepalived/check_haproxy.sh).
The script must have execute permission (chmod +x /etc/keepalived/check_haproxy.sh).
[root@k8s-node1 keepalived]# cat check_haproxy.sh
#!/bin/bash
# If haproxy is not running, try to restart it first.
A=$(ps -C haproxy --no-header | wc -l)
if [ "$A" -eq 0 ]; then
    systemctl start haproxy.service
fi
sleep 3
# If haproxy is still not running, kill keepalived so the
# VIP fails over to another node.
if [ "$(ps -C haproxy --no-header | wc -l)" -eq 0 ]; then
    pkill keepalived
fi
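The script's health test is simply a process count: ps -C matches processes whose command name is exactly haproxy, and wc -l counts the matches, so 0 means haproxy is down. The same check, shown here with a deliberately nonexistent process name so it is runnable anywhere:

```shell
# Count processes with an exact command name; 0 means "not running".
# no_such_daemon_xyz is a deliberately nonexistent name for this demo.
count=$(ps -C no_such_daemon_xyz --no-headers 2>/dev/null | wc -l)
echo "$count"   # prints 0
```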
Copy the script to the other nodes:
[root@k8s-node1 keepalived]# scp check_haproxy.sh root@k8s-node2:/etc/keepalived/
check_haproxy.sh 100% 186 152.3KB/s 00:00
[root@k8s-node1 keepalived]# scp check_haproxy.sh root@k8s-node3:/etc/keepalived/
check_haproxy.sh
Start keepalived:
[root@k8s-node1 keepalived]# systemctl enable keepalived && systemctl start keepalived
[root@k8s-node1 keepalived]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-11-04 02:57:19 EST; 19s ago
Process: 4720 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
Tasks: 3
Memory: 1.6M
CGroup: /system.slice/keepalived.service
├─4721 /usr/sbin/keepalived -D
├─4722 /usr/sbin/keepalived -D
└─4723 /usr/sbin/keepalived -D
Nov 04 02:57:21 k8s-node1 Keepalived_vrrp[4723]: Sending gratuitous ARP on ens33 for 192.168.174.127
Nov 04 02:57:21 k8s-node1 Keepalived_vrrp[4723]: Sending gratuitous ARP on ens33 for 192.168.174.127
Nov 04 02:57:21 k8s-node1 Keepalived_vrrp[4723]: Sending gratuitous ARP on ens33 for 192.168.174.127
Nov 04 02:57:21 k8s-node1 Keepalived_vrrp[4723]: Sending gratuitous ARP on ens33 for 192.168.174.127
Nov 04 02:57:26 k8s-node1 Keepalived_vrrp[4723]: Sending gratuitous ARP on ens33 for 192.168.174.127
Nov 04 02:57:26 k8s-node1 Keepalived_vrrp[4723]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on ens33 for 192.168.174.127
Nov 04 02:57:26 k8s-node1 Keepalived_vrrp[4723]: Sending gratuitous ARP on ens33 for 192.168.174.127
Nov 04 02:57:26 k8s-node1 Keepalived_vrrp[4723]: Sending gratuitous ARP on ens33 for 192.168.174.127
Nov 04 02:57:26 k8s-node1 Keepalived_vrrp[4723]: Sending gratuitous ARP on ens33 for 192.168.174.127
Nov 04 02:57:26 k8s-node1 Keepalived_vrrp[4723]: Sending gratuitous ARP on ens33 for 192.168.174.127
Check the VIP:
[root@k8s-node1 keepalived]# ip a |grep -A 3 ens33
2: ens33: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:08:66:a8 brd ff:ff:ff:ff:ff:ff
inet 192.168.174.128/24 brd 192.168.174.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet 192.168.174.127/24 scope global secondary ens33
valid_lft forever preferred_lft forever
inet6 fe80::7c9d:8cfc:d487:6a38/64 scope link noprefixroute
valid_lft forever preferred_lft forever
A problem encountered
The check script was not executed. Cause:
Note carefully: the track_script block must be placed after the virtual_ipaddress block, otherwise the script is never executed.
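For reference, the ordering that worked here, matching the configuration files above (the "..." stands for the state/interface/priority/authentication lines, unchanged):

```
vrrp_instance VI_1 {
    ...                     # state, interface, priority, etc. as above
    virtual_ipaddress {
        192.168.174.127/24
    }
    track_script {          # placed after virtual_ipaddress;
        chk_haproxy         # before it, the script never ran
    }
}
```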