確保網(wǎng)站無縫運行:Keepalived高可用與Nginx集成實戰(zhàn)

马哥Linux运维 · Source: 马哥Linux运维 · 2024-11-27 09:08
Contents

keepalived high availability (nginx)
keepalived overview
Key functions of keepalived
keepalived high-availability architecture diagram
How keepalived works
Using keepalived to make an nginx load balancer highly available
Split brain
Causes of split brain
Common split-brain mitigations
Monitoring for split brain

keepalived overview

Keepalived official site
Keepalived was originally written for the LVS load balancer, to manage and monitor the state of the service nodes in an LVS cluster; high availability via VRRP was added later. Besides managing LVS, Keepalived can therefore serve as a high-availability solution for other services such as Nginx, HAProxy and MySQL.

Keepalived implements high availability mainly through VRRP, the Virtual Router Redundancy Protocol. VRRP was created to eliminate the single point of failure of static routing: it keeps the network running without interruption when an individual node goes down.

So Keepalived does two things: on one hand it configures and manages LVS and health-checks the nodes behind it, and on the other it provides high availability (failover) for network services in general.

Key functions of keepalived

Keepalived has three important functions:

- Managing the LVS load balancer
- Health-checking the nodes of an LVS cluster
- Providing high availability (failover) for network services

keepalived高可用架構(gòu)圖

9ab6b53e-a3fd-11ef-93f3-92fbcf53809c.png

How keepalived works

Keepalived high-availability pairs communicate over VRRP, so let's start with VRRP:

VRRP, the Virtual Router Redundancy Protocol, was created to eliminate the single point of failure of static routing.

VRRP hands the routing role to one of the participating routers through an election mechanism.

VRRP uses IP multicast (default multicast address 224.0.0.18) for communication between the members of a high-availability pair.

In operation the master node sends advertisements and the backup nodes listen; when a backup stops receiving the master's packets, it runs its takeover procedure and takes over the master's resources. There can be several backups, ranked by priority in the election, but in day-to-day Keepalived operations the setup is usually a pair.

VRRP can authenticate its packets, but the Keepalived project currently still recommends configuring the authentication type and password in plain text.

With VRRP covered, here is how the Keepalived service itself works:

A Keepalived pair communicates over VRRP, and VRRP picks the master by election: the master has the higher priority, so while it is up it holds all the resources and the backup waits. When the master dies, the backup takes over its resources and serves in its place.

Between Keepalived instances, only the master keeps sending VRRP advertisements to announce that it is still alive, and the backup will not preempt it. When the master becomes unavailable, i.e. the backup no longer hears its advertisements, the backup starts the relevant services and takes over the resources to keep the business running. Takeover can complete in under a second.

keepalived實現(xiàn)nginx負載均衡機高可用

環(huán)境說明:

系統(tǒng)信息 主機名 IP
centos 8.5 master 192.168.222.138
centos 8.5 backup 192.168.222.139

本次高可用虛擬IP(VIP)地址暫定為192.168.222.133
keepalived安裝
阿里云官網(wǎng)
配置主keepalived

關(guān)閉防火墻:
[root@master ~]# systemctl stop firewalld.service 
[root@master ~]# vim /etc/selinux/config 
SELINUX=disabled
[root@master ~]# setenforce 0
[root@master ~]# systemctl disable --now firewalld.service 
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
配置網(wǎng)絡(luò)源:
[root@master ~]# dnf -y install wget
[root@master ~]# cd /etc/yum.repos.d/
[root@master yum.repos.d]# wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
[root@master yum.repos.d]#sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
Install the EPEL repo:
[root@master yum.repos.d]#dnf install -y https://mirrors.aliyun.com/epel/epel-release-latest-8.noarch.rpm
[root@master yum.repos.d]#sed -i 's|^#baseurl=https://download.example/pub|baseurl=https://mirrors.aliyun.com|' /etc/yum.repos.d/epel*
[root@master yum.repos.d]#sed -i 's|^metalink|#metalink|' /etc/yum.repos.d/epel*
[root@master yum.repos.d]# ls
CentOS-Base.repo   epel-next-testing.repo  epel-playground.repo       epel-testing.repo
epel-modular.repo  epel-next.repo          epel-testing-modular.repo  epel.repo
Search for keepalived:
[root@master yum.repos.d]# cd
[root@master ~]# dnf list all |grep keepalived
Failed to set locale, defaulting to C.UTF-8
Module yaml error: Unexpected key in data: static_context [line 9 col 3]
Module yaml error: Unexpected key in data: static_context [line 9 col 3]
Module yaml error: Unexpected key in data: static_context [line 9 col 3]
Module yaml error: Unexpected key in data: static_context [line 9 col 3]
Module yaml error: Unexpected key in data: static_context [line 9 col 3]
Module yaml error: Unexpected key in data: static_context [line 9 col 3]
Module yaml error: Unexpected key in data: static_context [line 9 col 3]
Module yaml error: Unexpected key in data: static_context [line 9 col 3]
keepalived.x86_64                                                 2.1.5-6.el8                                            AppStream   
Install keepalived:
[root@master ~]# dnf -y install keepalived
Check the configuration file:
[root@master ~]# ls /etc/keepalived/
keepalived.conf
List the files installed by the package:
[root@master ~]# rpm -ql keepalived 
/etc/keepalived     // configuration directory
/etc/keepalived/keepalived.conf   // main configuration file
/etc/sysconfig/keepalived
/usr/bin/genhash
/usr/lib/.build-id
/usr/lib/.build-id/0a
/usr/lib/.build-id/0a/410997e11c666114ca6d785e58ff0cc248744e
/usr/lib/.build-id/6f
/usr/lib/.build-id/6f/ba0d6bad6cb5ff7b074e703849ed93bebf4a0f
/usr/lib/systemd/system/keepalived.service  // systemd service unit
/usr/libexec/keepalived
/usr/sbin/keepalived
/usr/share/doc/keepalived
/usr/share/doc/keepalived/AUTHOR
/usr/share/doc/keepalived/CONTRIBUTORS
/usr/share/doc/keepalived/COPYING
/usr/share/doc/keepalived/ChangeLog
/usr/share/doc/keepalived/README
/usr/share/doc/keepalived/TODO
/usr/share/doc/keepalived/keepalived.conf.HTTP_GET.port
/usr/share/doc/keepalived/keepalived.conf.IPv6
/usr/share/doc/keepalived/keepalived.conf.PING_CHECK
/usr/share/doc/keepalived/keepalived.conf.SMTP_CHECK
/usr/share/doc/keepalived/keepalived.conf.SSL_GET
/usr/share/doc/keepalived/keepalived.conf.SYNOPSIS
/usr/share/doc/keepalived/keepalived.conf.UDP_CHECK
/usr/share/doc/keepalived/keepalived.conf.conditional_conf
/usr/share/doc/keepalived/keepalived.conf.fwmark
/usr/share/doc/keepalived/keepalived.conf.inhibit
/usr/share/doc/keepalived/keepalived.conf.misc_check
/usr/share/doc/keepalived/keepalived.conf.misc_check_arg
/usr/share/doc/keepalived/keepalived.conf.quorum
/usr/share/doc/keepalived/keepalived.conf.sample
/usr/share/doc/keepalived/keepalived.conf.status_code
/usr/share/doc/keepalived/keepalived.conf.track_interface
/usr/share/doc/keepalived/keepalived.conf.virtual_server_group
/usr/share/doc/keepalived/keepalived.conf.virtualhost
/usr/share/doc/keepalived/keepalived.conf.vrrp
/usr/share/doc/keepalived/keepalived.conf.vrrp.localcheck
/usr/share/doc/keepalived/keepalived.conf.vrrp.lvs_syncd
/usr/share/doc/keepalived/keepalived.conf.vrrp.routes
/usr/share/doc/keepalived/keepalived.conf.vrrp.rules
/usr/share/doc/keepalived/keepalived.conf.vrrp.scripts
/usr/share/doc/keepalived/keepalived.conf.vrrp.static_ipaddress
/usr/share/doc/keepalived/keepalived.conf.vrrp.sync
/usr/share/man/man1/genhash.1.gz
/usr/share/man/man5/keepalived.conf.5.gz
/usr/share/man/man8/keepalived.8.gz
/usr/share/snmp/mibs/KEEPALIVED-MIB.txt
/usr/share/snmp/mibs/VRRP-MIB.txt
/usr/share/snmp/mibs/VRRPv3-MIB.txt


Install keepalived on the backup server in the same way

關(guān)閉防火墻:
[root@backup ~]# systemctl stop firewalld.service 
[root@backup ~]# vim /etc/selinux/config 
SELINUX=disabled
[root@backup ~]# setenforce 0
[root@backup ~]# systemctl disable --now firewalld.service 
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
配置網(wǎng)絡(luò)源:
[root@backup ~]# dnf -y install wget
[root@backup ~]# cd /etc/yum.repos.d/
[root@backup yum.repos.d]# wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
[root@backup yum.repos.d]#sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
Install the EPEL repo:
[root@backup yum.repos.d]#dnf install -y https://mirrors.aliyun.com/epel/epel-release-latest-8.noarch.rpm
[root@backup yum.repos.d]#sed -i 's|^#baseurl=https://download.example/pub|baseurl=https://mirrors.aliyun.com|' /etc/yum.repos.d/epel*
[root@backup yum.repos.d]#sed -i 's|^metalink|#metalink|' /etc/yum.repos.d/epel*
[root@backup yum.repos.d]# ls
CentOS-Base.repo   epel-next-testing.repo  epel-playground.repo       epel-testing.repo
epel-modular.repo  epel-next.repo          epel-testing-modular.repo  epel.repo
Search for keepalived:
[root@backup yum.repos.d]# cd
[root@backup ~]# dnf list all |grep keepalived
Failed to set locale, defaulting to C.UTF-8
Module yaml error: Unexpected key in data: static_context [line 9 col 3]
Module yaml error: Unexpected key in data: static_context [line 9 col 3]
Module yaml error: Unexpected key in data: static_context [line 9 col 3]
Module yaml error: Unexpected key in data: static_context [line 9 col 3]
Module yaml error: Unexpected key in data: static_context [line 9 col 3]
Module yaml error: Unexpected key in data: static_context [line 9 col 3]
Module yaml error: Unexpected key in data: static_context [line 9 col 3]
Module yaml error: Unexpected key in data: static_context [line 9 col 3]
keepalived.x86_64                                                 2.1.5-6.el8                                            AppStream   
Install keepalived:
[root@backup ~]# dnf -y install keepalived
Check the configuration file:
[root@backup ~]# ls /etc/keepalived/
keepalived.conf
List the files installed by the package:
[root@backup ~]# rpm -ql keepalived 
/etc/keepalived     // configuration directory
/etc/keepalived/keepalived.conf   // main configuration file
/etc/sysconfig/keepalived
/usr/bin/genhash
/usr/lib/.build-id
/usr/lib/.build-id/0a
/usr/lib/.build-id/0a/410997e11c666114ca6d785e58ff0cc248744e
/usr/lib/.build-id/6f
/usr/lib/.build-id/6f/ba0d6bad6cb5ff7b074e703849ed93bebf4a0f
/usr/lib/systemd/system/keepalived.service  // systemd service unit
/usr/libexec/keepalived
/usr/sbin/keepalived
/usr/share/doc/keepalived
/usr/share/doc/keepalived/AUTHOR
/usr/share/doc/keepalived/CONTRIBUTORS
/usr/share/doc/keepalived/COPYING
/usr/share/doc/keepalived/ChangeLog
/usr/share/doc/keepalived/README
/usr/share/doc/keepalived/TODO
/usr/share/doc/keepalived/keepalived.conf.HTTP_GET.port
/usr/share/doc/keepalived/keepalived.conf.IPv6
/usr/share/doc/keepalived/keepalived.conf.PING_CHECK
/usr/share/doc/keepalived/keepalived.conf.SMTP_CHECK
/usr/share/doc/keepalived/keepalived.conf.SSL_GET
/usr/share/doc/keepalived/keepalived.conf.SYNOPSIS
/usr/share/doc/keepalived/keepalived.conf.UDP_CHECK
/usr/share/doc/keepalived/keepalived.conf.conditional_conf
/usr/share/doc/keepalived/keepalived.conf.fwmark
/usr/share/doc/keepalived/keepalived.conf.inhibit
/usr/share/doc/keepalived/keepalived.conf.misc_check
/usr/share/doc/keepalived/keepalived.conf.misc_check_arg
/usr/share/doc/keepalived/keepalived.conf.quorum
/usr/share/doc/keepalived/keepalived.conf.sample
/usr/share/doc/keepalived/keepalived.conf.status_code
/usr/share/doc/keepalived/keepalived.conf.track_interface
/usr/share/doc/keepalived/keepalived.conf.virtual_server_group
/usr/share/doc/keepalived/keepalived.conf.virtualhost
/usr/share/doc/keepalived/keepalived.conf.vrrp
/usr/share/doc/keepalived/keepalived.conf.vrrp.localcheck
/usr/share/doc/keepalived/keepalived.conf.vrrp.lvs_syncd
/usr/share/doc/keepalived/keepalived.conf.vrrp.routes
/usr/share/doc/keepalived/keepalived.conf.vrrp.rules
/usr/share/doc/keepalived/keepalived.conf.vrrp.scripts
/usr/share/doc/keepalived/keepalived.conf.vrrp.static_ipaddress
/usr/share/doc/keepalived/keepalived.conf.vrrp.sync
/usr/share/man/man1/genhash.1.gz
/usr/share/man/man5/keepalived.conf.5.gz
/usr/share/man/man8/keepalived.8.gz
/usr/share/snmp/mibs/KEEPALIVED-MIB.txt
/usr/share/snmp/mibs/VRRP-MIB.txt
/usr/share/snmp/mibs/VRRPv3-MIB.txt

Install nginx on the master and backup nodes
Install nginx on master

[root@master ~]# dnf -y install nginx
[root@master ~]# cd /usr/share/nginx/html/
[root@master html]# ls
404.html  50x.html  index.html  nginx-logo.png  poweredby.png
[root@master html]# echo 'master' > index.html
[root@master html]# systemctl start nginx
[root@master html]# ss -antl
State      Recv-Q     Send-Q          Local Address:Port           Peer Address:Port     Process     
LISTEN     0          128                   0.0.0.0:111                 0.0.0.0:*                    
LISTEN     0          128                   0.0.0.0:80                  0.0.0.0:*                    
LISTEN     0          32              192.168.122.1:53                  0.0.0.0:*                    
LISTEN     0          128                   0.0.0.0:22                  0.0.0.0:*                    
LISTEN     0          128                      [::]:111                    [::]:*                    
LISTEN     0          128                      [::]:80                     [::]:*                    
LISTEN     0          128                      [::]:22                     [::]:*                    
[root@master html]# systemctl enable nginx
Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service → /usr/lib/systemd/system/nginx.service.
//在主節(jié)點這邊需要設(shè)置開機自啟

Install nginx on backup

[root@backup ~]# dnf -y install nginx
[root@backup ~]# cd /usr/share/nginx/html/
[root@backup html]# ls
404.html  50x.html  index.html  nginx-logo.png  poweredby.png
[root@backup html]# echo 'backup' > index.html
[root@backup html]# systemctl start nginx
[root@backup html]# ss -antl
State      Recv-Q     Send-Q          Local Address:Port           Peer Address:Port     Process     
LISTEN     0          128                   0.0.0.0:22                  0.0.0.0:*                    
LISTEN     0          128                   0.0.0.0:80                  0.0.0.0:*                    
LISTEN     0          128                      [::]:22                     [::]:*                    
LISTEN     0          128                      [::]:80                     [::]:*                    
//在備節(jié)點這邊不需要設(shè)置開機自啟

Try it in a browser to confirm that the nginx service on master is reachable

(screenshots: browser showing the "master" and "backup" pages)

keepalived configuration
Configure the master keepalived

[root@master html]# cd /etc/keepalived/
[root@master keepalived]# ls
keepalived.conf
[root@master keepalived]# mv keepalived.conf{,-bak}
[root@master keepalived]# ls
keepalived.conf-bak                 // back up the original config file
[root@master keepalived]# dnf -y install vim
[root@master keepalived]# vim keepalived.conf  // create a new config file
[root@master keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id lb01
}

vrrp_instance VI_1 {        // must be identical on master and backup
    state BACKUP
    interface ens33      // network interface
    virtual_router_id 51
    priority 100     // higher than the backup node's
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass tushanbu   // password (can be randomly generated)
    }
    virtual_ipaddress {
        192.168.222.133    // the virtual IP (VIP)
    }
}

virtual_server 192.168.222.133 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.222.138 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.222.139 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@master keepalived]# ls
keepalived.conf  keepalived.conf-bak
[root@master keepalived]# systemctl enable --now keepalived
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.
[root@master keepalived]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 002983:57 brd ffffff:ff
    inet 192.168.222.138/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff8357/64 scope link 
       valid_lft forever preferred_lft forever
3: virbr0:  mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:0051:2f brd ffffff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic:  mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:0051:2f brd ffffff:ff
//此時備節(jié)點的keepalived還沒有啟動
[root@master keepalived]# scp keepalived.conf 192.168.222.139:/etc/keepalived
The authenticity of host '192.168.222.139 (192.168.222.139)' can't be established.
ECDSA key fingerprint is SHA256:anVVbTlEIzA1E8rB7IbLzaf7t9oQjB0qFP6Dd/ijnJI.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.222.139' (ECDSA) to the list of known hosts.
root@192.168.222.139's password: 
keepalived.conf                                                    100%  875   905.2KB/s   00:00    
//將創(chuàng)建的這個配置文件傳到備節(jié)點上去,因為主,備節(jié)點的這個配置文件基本上都是一樣的只需要改一點點

Configure the backup keepalived

[root@backup html]# cd /etc/keepalived/
[root@backup keepalived]# ls
keepalived.conf
[root@backup keepalived]# mv keepalived.conf{,-bak}
[root@backup keepalived]# ls
keepalived.conf-bak               // back up the original config file
[root@backup keepalived]# dnf -y install vim
[root@backup keepalived]# ls     // the config file copied from the master has arrived
keepalived.conf  keepalived.conf-bak
[root@backup keepalived]# vim keepalived.conf    // adjust it slightly
[root@backup keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id lb02    
}

vrrp_instance VI_1 {       // must be identical on master and backup
    state BACKUP
    interface ens33      // network interface
    virtual_router_id 51
    priority 90     // lower than the master node's
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass tushanbu   // password (can be randomly generated)
    }
    virtual_ipaddress {
        192.168.222.133    // the virtual IP (VIP)
    }
}

virtual_server 192.168.222.133 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.222.138 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.222.139 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@backup keepalived]# systemctl enable --now keepalived
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.

Check which node holds the VIP
On MASTER

[root@master keepalived]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:2983:57 brd ffffff:ff
    inet 192.168.222.138/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff8357/64 scope link 
       valid_lft forever preferred_lft forever
3: virbr0:  mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:0051:2f brd ffffff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic:  mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:0051:2f brd ffffff:ff
//主節(jié)點上面有vip

On BACKUP

[root@backup keepalived]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:31f9 brd ffffff:ff
    inet 192.168.222.139/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ffaff9/64 scope link 
       valid_lft forever preferred_lft forever
//備節(jié)點上面沒有vip

Testing
Stop the keepalived service on master, start the nginx and keepalived services on backup, then check which node is master
master

[root@master keepalived]# systemctl stop keepalived.service 

backup:

[root@backup keepalived]# systemctl start nginx.service
[root@backup keepalived]# systemctl start keepalived.service
[root@backup keepalived]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:31f9 brd ffffff:ff
    inet 192.168.222.139/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ffaff9/64 scope link 
       valid_lft forever preferred_lft forever

(screenshot: browser showing the "backup" page at the VIP)

// backup is now the master
Then start the keepalived service on master again and check which node is master
master

[root@master keepalived]# systemctl start keepalived.service 
[root@master keepalived]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:2983:57 brd ffffff:ff
    inet 192.168.222.138/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff8357/64 scope link 
       valid_lft forever preferred_lft forever
3: virbr0:  mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:0051:2f brd ffffff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic:  mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:0051:2f brd ffffff:ff

backup

[root@backup keepalived]# systemctl stop nginx.service 
// for this test, nginx on backup must be stopped
[root@backup keepalived]# ss -antl
State      Recv-Q     Send-Q          Local Address:Port           Peer Address:Port     Process     
LISTEN     0          128                   0.0.0.0:22                  0.0.0.0:*                    
LISTEN     0          128                      [::]:22                     [::]:*                    
[root@backup keepalived]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:31f9 brd ffffff:ff
    inet 192.168.222.139/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ffaff9/64 scope link 
       valid_lft forever preferred_lft forever

(screenshot: browser showing the "master" page at the VIP)

// master is the master again
Have keepalived monitor the nginx load balancer
keepalived monitors the state of the nginx load balancer through a script
Write the scripts on master

[root@master keepalived]# cd
[root@master ~]# mkdir /scripts
[root@master ~]# cd /scripts/
[root@master scripts]# vim check_nginx.sh
[root@master scripts]# cat check_nginx.sh
#!/bin/bash
nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep 'nginx'|wc -l)
if [ $nginx_status -lt 1 ];then
    systemctl stop keepalived
fi
[root@master scripts]# chmod +x check_nginx.sh 
[root@master scripts]# ll
total 4
-rwxr-xr-x. 1 root root 142 Oct  8 23:21 check_nginx.sh
[root@master scripts]# vim notify.sh
[root@master scripts]# cat notify.sh
#!/bin/bash
case "$1" in
    master)
        nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep 'nginx'|wc -l)
        if [ $nginx_status -lt 1 ];then
            systemctl start nginx
        fi
    ;;
    backup)
        nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep 'nginx'|wc -l)
        if [ $nginx_status -gt 0 ];then
            systemctl stop nginx
        fi
    ;;
    *)
        echo "Usage:$0 master|backup VIP"
    ;;
esac
[root@master scripts]# chmod +x notify.sh 
[root@master scripts]# ll
total 8
-rwxr-xr-x. 1 root root 142 Oct  8 23:21 check_nginx.sh
-rwxr-xr-x. 1 root root 383 Oct  8 23:31 notify.sh
[root@master scripts]# scp check_nginx.sh 192.168.222.139:/scripts/
root@192.168.222.139's password: 
check_nginx.sh                                                     100%  142   113.6KB/s   00:00    
[root@master scripts]# scp notify.sh 192.168.222.139:/scripts/
root@192.168.222.139's password: 
notify.sh                                                          100%  383   244.7KB/s   00:00    
// copy the scripts to the /scripts directory on the backup node
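check_nginx.sh above keys off the process count, which can pass even while nginx is up but not actually serving. A stricter variant, sketched here under the assumption that curl is installed (dnf -y install curl), probes the HTTP port instead; the decision is split into a small http_ok helper (a name invented for this sketch) so the rule is visible on its own:

```shell
#!/bin/bash
# Stricter health check: probe HTTP rather than count nginx processes.
# http_ok CODE -- returns 0 for a 2xx/3xx status code, 1 otherwise.
http_ok() {
    [ "$1" -ge 200 ] && [ "$1" -lt 400 ]
}

# Probe the local nginx; the URL and timeout are illustrative values.
if command -v curl >/dev/null; then
    code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 2 http://127.0.0.1:80/)
else
    code=000   # curl absent: treat as a failed probe
fi

if ! http_ok "$code"; then
    # check_nginx.sh would run here: systemctl stop keepalived
    echo "nginx unhealthy (HTTP $code)"
fi
```

To use it, replace the body of check_nginx.sh with this probe; the `systemctl stop keepalived` action stays the same as in the original script.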

The scripts on backup

[root@backup keepalived]# cd
[root@backup ~]# mkdir /scripts
[root@backup ~]# cd /scripts/
[root@backup scripts]# ll
total 8
-rwxr-xr-x. 1 root root 142 Oct  8 23:39 check_nginx.sh
-rwxr-xr-x. 1 root root 383 Oct  8 23:36 notify.sh

Add the monitoring scripts to the keepalived configuration
Configure the master keepalived

[root@master scripts]# cd
[root@master ~]# vim /etc/keepalived/keepalived.conf
[root@master ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id lb01
}

vrrp_script nginx_check {   // add this block
    script "/scripts/check_nginx.sh"
    interval 5
    weight -20
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33      
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass tushanbu
    }
    virtual_ipaddress {
        192.168.222.133
    }
    track_script {    // add this block
        nginx_check
    }
    notify_master "/scripts/notify.sh master 192.168.222.133"   
    notify_backup "/scripts/notify.sh backup 192.168.222.133"
}

virtual_server 192.168.222.133 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.222.138 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.222.139 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@master ~]# systemctl restart keepalived.service
[root@master ~]# systemctl restart nginx.service

Configure the backup keepalived
backup does not need to check nginx itself: it starts nginx when promoted to MASTER and stops it when demoted to BACKUP

[root@backup scripts]# cd
[root@backup ~]# vim /etc/keepalived/keepalived.conf
[root@backup ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id lb02
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33      
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass tushanbu
    }
    virtual_ipaddress {
        192.168.222.133
    }
    notify_master "/scripts/notify.sh master 192.168.222.133" // added
    notify_backup "/scripts/notify.sh backup 192.168.222.133" // added
}

virtual_server 192.168.222.133 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.222.138 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.222.139 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@backup ~]# systemctl restart keepalived.service 
[root@backup ~]# systemctl restart nginx.service 

Testing
Check the status under normal operation

master:
[root@master ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:2983:57 brd ffffff:ff
    inet 192.168.222.138/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff8357/64 scope link 
       valid_lft forever preferred_lft forever
3: virbr0:  mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:0051:2f brd ffffff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic:  mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:0051:2f brd ffffff:ff
[root@master ~]# curl 192.168.222.133
master
backup:
[root@backup ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:31f9 brd ffffff:ff
    inet 192.168.222.139/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ffaff9/64 scope link 
       valid_lft forever preferred_lft forever
Stop nginx on master:
[root@master ~]# systemctl stop nginx.service 
[root@master ~]# ss -antl
State      Recv-Q     Send-Q          Local Address:Port           Peer Address:Port     Process     
LISTEN     0          128                   0.0.0.0:111                 0.0.0.0:*                    
LISTEN     0          32              192.168.122.1:53                  0.0.0.0:*                    
LISTEN     0          128                   0.0.0.0:22                  0.0.0.0:*                    
LISTEN     0          128                      [::]:111                    [::]:*                    
LISTEN     0          128                      [::]:22                     [::]:*                    

Status after nginx is stopped on master

master:
[root@master ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:2983:57 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.138/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff8357/64 scope link 
       valid_lft forever preferred_lft forever
3: virbr0:  mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:db:51:2f brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic:  mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:db:51:2f brd ff:ff:ff:ff:ff:ff
backup:
[root@backup ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:31f9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.139/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ffaff9/64 scope link 
       valid_lft forever preferred_lft forever
[root@backup ~]# curl 192.168.222.133
backup

Split-brain

In a high-availability (HA) system, when the heartbeat link between the two nodes breaks, a system that used to act as one coordinated whole splits into two independent nodes. Having lost contact, each assumes the other has failed, and the HA software on both nodes, like a "split-brain patient", fights over the shared resources and the application services. The consequences are severe: either the shared resources get carved up and the service starts on neither side, or the service starts on both sides and they read and write the shared storage at the same time, corrupting data (a common example is a database's online redo logs getting damaged).

The generally agreed countermeasures against HA split-brain are roughly these:

Add redundant heartbeat links, e.g. two separate lines (making the heartbeat itself HA), to reduce the chance of split-brain as far as possible;
Use a disk lock: the serving side locks the shared disk, so that when split-brain occurs the other side simply cannot "grab" the disk. The catch is that if the side holding the disk never actively "unlocks" it, the other side can never obtain it; in practice, if the serving node suddenly dies or crashes, it cannot run the unlock command, and the standby cannot take over the shared resources and application services. For this reason some HA designs use a "smart" lock: the serving side engages the disk lock only when it sees all heartbeat links down (i.e. it cannot perceive its peer), and leaves the disk unlocked the rest of the time.
Set up an arbitration mechanism, e.g. a reference IP (such as the gateway IP): when the heartbeat is completely down, both nodes ping the reference IP, and a failed ping means the break is on the local side. Since the local link carries not only the "heartbeat" but also the outward-facing "service", starting (or keeping) the application there would be useless anyway, so that node voluntarily gives up the contest and lets the side that can reach the reference IP run the service. To be safer still, the side that cannot reach the reference IP can simply reboot itself, thoroughly releasing any shared resources it may still be holding.
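The arbitration idea above can be sketched as a small script. This is a hypothetical helper, not part of the article's setup; REF_IP is an assumed reference address such as the gateway:

```shell
#!/bin/bash
# Gateway-arbitration sketch: when all heartbeat links are down, each node
# pings a reference IP and gives up the contest if the reference is unreachable.
# REF_IP is an assumption -- substitute your own gateway address.
REF_IP=${REF_IP:-192.168.222.2}

# Decide based on the ping result ($1 is the ping exit status, 0 = reachable).
arbitrate() {
    if [ "$1" -eq 0 ]; then
        echo "reference reachable: keep competing for the VIP"
    else
        echo "reference unreachable: local link broken, stand down"
    fi
}

# A production probe would be:
#   ping -c 3 -W 1 "$REF_IP" >/dev/null 2>&1; arbitrate $?
# Demonstrate both outcomes:
arbitrate 0
arbitrate 1
```

The "stand down" branch is where a cautious design would also reboot the node to release any shared resources, as described above.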

Causes of split-brain

In general, split-brain arises for the following reasons:

The heartbeat link between the HA server pair fails, so the nodes cannot communicate normally:
the heartbeat cable itself is broken (severed or aged);
the NIC or its driver is broken, or there are IP misconfiguration or conflict problems (with directly connected NICs);
a device on the heartbeat path fails (a NIC or a switch);
the arbitration machine fails (when an arbitration scheme is used).

An iptables firewall on the HA servers blocks the heartbeat traffic.

The heartbeat NIC address or other settings are misconfigured, so sending heartbeats fails.

Other services are misconfigured, e.g. mismatched heartbeat methods, heartbeat broadcast conflicts, software bugs, and so on.

Note:
In a Keepalived configuration, if the two ends of the same VRRP instance set different virtual_router_id values, split-brain will also occur.
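For reference, the value that must match on both ends sits in the vrrp_instance block of keepalived.conf. A minimal sketch using this article's lab values (the interface name, IDs and password are assumptions):

```
vrrp_instance VI_1 {
    state MASTER            # BACKUP on the standby node
    interface ens33
    virtual_router_id 51    # must be identical on master and backup
    priority 100            # lower on the standby, e.g. 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.222.133
    }
}
```

Only state and priority should differ between the two nodes; virtual_router_id, the interface, and the authentication block must agree.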

Common solutions to split-brain

In real production environments, we can prevent split-brain from the following angles:

Connect the nodes with both a serial cable and an Ethernet cable, i.e. two heartbeat paths at once, so that if one line breaks the other still carries heartbeat messages.
When split-brain is detected, forcibly shut down one node (this requires special hardware support, such as Stonith/fencing devices): in effect, when the standby stops receiving heartbeat messages, it sends a shutdown command over a separate channel to cut the master's power.
Set up good monitoring and alerting for split-brain (email, SMS, on-call staff, etc.) so that a human can step in and arbitrate the moment the problem occurs, limiting the damage. For example, Baidu's alerting SMS distinguishes uplink and downlink: the alert is sent to the administrator's phone, and the administrator can reply with a digit or a short string that is passed back to the server, which then handles the corresponding fault automatically according to the instruction, shortening the time to resolution.

Of course, when implementing an HA solution you must decide, according to the actual business requirements, whether such losses are tolerable. For ordinary web site workloads, the loss is usually tolerable.

Monitoring for split-brain

Split-brain monitoring should be done on the backup server, by adding a custom zabbix monitoring item.
What do we monitor? Whether the VIP address is present on the backup.

The VIP appears on the backup in two cases:

split-brain has occurred
a normal master-to-backup failover
So the monitor only detects the possibility of split-brain; it cannot guarantee one has occurred, because a normal failover also moves the VIP to the backup.
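One way to narrow down that ambiguity is to also probe whether the master is still alive: the VIP on the backup together with a master that still answers ping suggests split-brain, while the VIP with a dead master is a normal failover. This is a hypothetical refinement, not part of the article's zabbix setup; VIP, NIC and MASTER_IP are this lab's values:

```shell
#!/bin/bash
# Hypothetical split-brain classifier (adjust the defaults for your network).
VIP=${VIP:-192.168.222.133}
NIC=${NIC:-ens33}
MASTER_IP=${MASTER_IP:-192.168.222.138}

classify() {
    # $1: 1 if this backup holds the VIP; $2: 1 if the master answers ping
    if [ "$1" -eq 1 ] && [ "$2" -eq 1 ]; then
        echo "suspected split-brain"
    elif [ "$1" -eq 1 ]; then
        echo "normal failover"
    else
        echo "ok"
    fi
}

# Real probes would look like:
#   have_vip=$(ip a show "$NIC" | grep -c "$VIP")
#   ping -c 2 -W 1 "$MASTER_IP" >/dev/null 2>&1 && master_up=1 || master_up=0
#   classify "$have_vip" "$master_up"
# Demonstrate the three cases:
classify 1 1
classify 1 0
classify 0 0
```

Even this cannot be fully conclusive (the ping path may fail independently of the heartbeat), so human arbitration on alert remains the safety net.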

The monitoring script is as follows:

[root@backup ~]# mkdir -p /scripts && cd /scripts
[root@backup scripts]# vim check_keepalived.sh
#!/bin/bash

if [ `ip a show ens33 |grep 192.168.222.133|wc -l` -ne 0 ]
then
    echo "keepalived is error!"
else
    echo "keepalived is OK !"
fi

When writing the script, change the NIC name and the VIP to your own values; finally, do not forget to make the script executable and to change the owner and group of the /scripts directory to zabbix.

Environment

Host     Services installed                 IP
master   keepalived, nginx                  192.168.222.138
backup   keepalived, nginx, zabbix agent    192.168.222.139
zabbix   zabbix server                      192.168.222.250

VIP: 192.168.222.133
For installing and using zabbix, see my earlier posts on deploying the zabbix monitoring service, basic zabbix usage, and zabbix monitoring in detail; they walk through the installation step by step.

Install the zabbix agent on the backup host, and the zabbix server on 192.168.222.250 to manage the monitoring through the web UI.

The two situations the monitor must distinguish:

Normally, nginx and keepalived are running on master, while backup has keepalived running and nginx stopped.
When master fails, backup takes over the VIP through keepalived's scripts.
When split-brain occurs, both master and backup hold the VIP (virtual IP).
Write the monitoring script
on the backup host (the zabbix agent side):

[root@backup ~]# cd /scripts/
[root@backup scripts]# ls
check_nginx.sh  notify.sh
[root@backup scripts]# vim check_keepalived.sh 
[root@backup scripts]# cat check_keepalived.sh 
#!/bin/bash

if [ `ip a show ens33 |grep 192.168.222.133|wc -l` -ne 0 ]
then
    echo "1"   //有問題
else 
    echo "0"   //沒問題
fi
[root@backup scripts]# chmod +x check_keepalived.sh 
[root@backup scripts]# ls
check_keepalived.sh  check_nginx.sh  notify.sh
[root@backup scripts]# chown -R zabbix.zabbix /scripts/
[root@backup scripts]# ll
total 12
-rwxr-xr-x. 1 zabbix zabbix 148 Oct  9 21:05 check_keepalived.sh
-rwxr-xr-x. 1 zabbix zabbix 142 Oct  8 23:39 check_nginx.sh
-rwxr-xr-x. 1 zabbix zabbix 383 Oct  8 23:36 notify.sh
[root@backup scripts]# systemctl stop nginx.service 
[root@backup scripts]# ss -antl
State      Recv-Q     Send-Q         Local Address:Port            Peer Address:Port     Process     
LISTEN     0          128                  0.0.0.0:22                   0.0.0.0:*                    
LISTEN     0          128                  0.0.0.0:10050                0.0.0.0:*                    
LISTEN     0          128                     [::]:22                      [::]:*                    
[root@backup scripts]# ./check_keepalived.sh 
0  
// test the script
// normal state: nginx and keepalived running on master; on backup keepalived running, nginx stopped
[root@backup scripts]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:31f9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.139/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ffaff9/64 scope link 
       valid_lft forever preferred_lft forever

Modify the zabbix agent configuration on backup

[root@backup scripts]# cd
[root@backup ~]# cd /usr/local/etc/
[root@backup etc]# pwd
/usr/local/etc
[root@backup etc]# vim zabbix_agentd.conf
UserParameter=check_keepalived,/bin/bash /scripts/check_keepalived.sh       // add this line
UnsafeUserParameters=1       // set to 1
[root@backup ~]# pkill zabbix_agentd 
[root@backup ~]# zabbix_agentd 
// restart the zabbix agent

Test from the zabbix server

[root@zabbix ~]# ss -antl
State      Recv-Q     Send-Q         Local Address:Port            Peer Address:Port     Process     
LISTEN     0          128                  0.0.0.0:80                   0.0.0.0:*                    
LISTEN     0          128                  0.0.0.0:22                   0.0.0.0:*                    
LISTEN     0          128                  0.0.0.0:10050                0.0.0.0:*                    
LISTEN     0          128                  0.0.0.0:10051                0.0.0.0:*                    
LISTEN     0          128                127.0.0.1:9000                 0.0.0.0:*                    
LISTEN     0          128                     [::]:22                      [::]:*                    
LISTEN     0          70                         *:33060                      *:*                    
LISTEN     0          128                        *:3306                       *:*                   
[root@zabbix ~]#  zabbix_get -s 192.168.222.139 -k check_keepalived
0

Check the state of master

[root@master ~]# ss -antl
State      Recv-Q     Send-Q          Local Address:Port           Peer Address:Port     Process     
LISTEN     0          128                   0.0.0.0:111                 0.0.0.0:*                    
LISTEN     0          128                   0.0.0.0:80                  0.0.0.0:*                    
LISTEN     0          32              192.168.122.1:53                  0.0.0.0:*                    
LISTEN     0          128                   0.0.0.0:22                  0.0.0.0:*                    
LISTEN     0          128                      [::]:111                    [::]:*                    
LISTEN     0          128                      [::]:80                     [::]:*                    
LISTEN     0          128                      [::]:22                     [::]:*                    
[root@master ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:2983:57 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.138/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff8357/64 scope link 
       valid_lft forever preferred_lft forever
3: virbr0:  mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:db:51:2f brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic:  mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:db:51:2f brd ff:ff:ff:ff:ff:ff

Create the monitored host

9b2904a4-a3fd-11ef-93f3-92fbcf53809c.png

9b334522-a3fd-11ef-93f3-92fbcf53809c.png

9b4b9762-a3fd-11ef-93f3-92fbcf53809c.png

9b56984c-a3fd-11ef-93f3-92fbcf53809c.png

9b61de14-a3fd-11ef-93f3-92fbcf53809c.png

9b7cd016-a3fd-11ef-93f3-92fbcf53809c.png

9b9682e0-a3fd-11ef-93f3-92fbcf53809c.png

9ba383dc-a3fd-11ef-93f3-92fbcf53809c.png

9bbc6c94-a3fd-11ef-93f3-92fbcf53809c.png


Add the monitoring item

9beeeb6a-a3fd-11ef-93f3-92fbcf53809c.png

9c15c6ea-a3fd-11ef-93f3-92fbcf53809c.png

9c21524e-a3fd-11ef-93f3-92fbcf53809c.png


View the data

9c37fbfc-a3fd-11ef-93f3-92fbcf53809c.png

9c531630-a3fd-11ef-93f3-92fbcf53809c.png


Add the trigger

9c644bd0-a3fd-11ef-93f3-92fbcf53809c.png

9c74ef9e-a3fd-11ef-93f3-92fbcf53809c.png

9c8d75aa-a3fd-11ef-93f3-92fbcf53809c.png


Test
On master, stop nginx and start keepalived; on backup, start both nginx and keepalived.
Simulate a failover.
master

[root@master ~]# systemctl stop nginx.service 
[root@master ~]# systemctl restart keepalived.service 

backup

[root@backup ~]# systemctl start nginx
[root@backup ~]# systemctl restart keepalived.service 

Check the state

master:
[root@master ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:2983:57 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.138/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff8357/64 scope link 
       valid_lft forever preferred_lft forever
3: virbr0:  mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:db:51:2f brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic:  mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:db:51:2f brd ff:ff:ff:ff:ff:ff
backup:
[root@backup ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:31f9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.139/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ffaff9/64 scope link 
       valid_lft forever preferred_lft forever

Check that the alert fired

9c97beca-a3fd-11ef-93f3-92fbcf53809c.png


Restart nginx and keepalived on master

[root@master ~]# systemctl restart nginx.service 
[root@master ~]# systemctl restart keepalived.service 
[root@master ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:2983:57 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.138/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff8357/64 scope link 
       valid_lft forever preferred_lft forever
3: virbr0:  mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:db:51:2f brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic:  mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:db:51:2f brd ff:ff:ff:ff:ff:ff

// no alert at this point

9ca85fc8-a3fd-11ef-93f3-92fbcf53809c.png


Simulate split-brain
Edit the keepalived configuration on master and change virtual_router_id so that it no longer matches the value on backup.
master

[root@master ~]# vim /etc/keepalived/keepalived.conf
virtual_router_id 55    // changed from 51 to 55 here
[root@master ~]# systemctl restart keepalived.service 
// restart keepalived
[root@master ~]# ip a    // the VIP is still here
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:2983:57 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.138/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff8357/64 scope link 
       valid_lft forever preferred_lft forever
3: virbr0:  mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:db:51:2f brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic:  mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:db:51:2f brd ff:ff:ff:ff:ff:ff
[root@master ~]# ss -antl    // nginx is still listening
State      Recv-Q     Send-Q          Local Address:Port           Peer Address:Port     Process     
LISTEN     0          128                   0.0.0.0:111                 0.0.0.0:*                    
LISTEN     0          128                   0.0.0.0:80                  0.0.0.0:*                    
LISTEN     0          32              192.168.122.1:53                  0.0.0.0:*                    
LISTEN     0          128                   0.0.0.0:22                  0.0.0.0:*                    
LISTEN     0          128                      [::]:111                    [::]:*                    
LISTEN     0          128                      [::]:80                     [::]:*                    
LISTEN     0          128                      [::]:22                     [::]:*                  

backup

[root@backup ~]# ip a   // the backup has the VIP too
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:31f9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.139/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ffaff9/64 scope link 
       valid_lft forever preferred_lft forever
[root@backup ~]# ss -antl   // nginx is listening here as well
State      Recv-Q     Send-Q         Local Address:Port            Peer Address:Port     Process     
LISTEN     0          128                  0.0.0.0:22                   0.0.0.0:*                    
LISTEN     0          128                  0.0.0.0:10050                0.0.0.0:*                    
LISTEN     0          128                  0.0.0.0:80                   0.0.0.0:*                    
LISTEN     0          128                     [::]:22                      [::]:*                    
LISTEN     0          128                     [::]:80                      [::]:*                   

An alert has fired

9cb488c0-a3fd-11ef-93f3-92fbcf53809c.png

Source: https://www.cnblogs.com/tushanbu/p/16770767.html


Original title: 確保網(wǎng)站無縫運行:Keepalived 高可用與Nginx 集成實戰(zhàn)

Source: WeChat official account 馬哥Linux運維 (ID: magedu-Linux). Please credit the source when republishing.
