

Introduction to nginx load balancing configuration

馬哥Linux運維 · Source: 馬哥Linux運維 · 2024-11-10 13:39

Contents

nginx load balancing

Introduction to nginx load balancing

Reverse proxy and load balancing

nginx load balancing configuration

Highly available nginx load balancer with Keepalived

Changing the Web servers' default home pages

Enabling nginx load balancing and reverse proxying

Installing Keepalived

Configuring Keepalived

Writing scripts to monitor the state of Keepalived and nginx

Adding the monitoring scripts to the keepalived configuration

Introduction to nginx load balancing

Load balancing is one of nginx's main use cases. When traffic is heavy, load balancing spreads incoming requests across several servers, so the work a single server would otherwise carry is shared by many machines, which raises overall throughput. If one of those servers goes down, the others keep serving requests, which also improves the scalability and reliability of the system.

The figure below illustrates the idea: a user request first reaches the load-balancing server, which then forwards it to one of the web servers according to the configured rules.
[Figure: load balancing topology — requests flow from clients to the load balancer and on to several web servers]

Reverse proxy and load balancing

nginx is typically used as a reverse proxy in front of backend servers. This makes it straightforward to separate static from dynamic content and to balance load, which greatly increases the capacity of the whole deployment.

Static/dynamic separation with nginx simply means that, while reverse proxying, static resources are read directly from a path published by nginx instead of being fetched from the backend servers.

Note that in this case the static content on the proxy must stay in sync with the backend application. You can use rsync for server-side synchronization, or shared storage such as NFS or the MFS distributed file system. A rough sketch of such a split is shown below.
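As an illustration only (the upstream name and the local path /data/static are assumptions, not part of the original article), a static/dynamic split inside the proxy could look like this:

upstream idfsoft.com {
    server 127.0.0.1:8080;
}

server {
    listen 80;

    # static resources are served directly from a path published by nginx
    location ~* \.(css|js|png|jpg|gif|ico)$ {
        root /data/static;
        expires 7d;
    }

    # everything else is proxied to the backend servers
    location / {
        proxy_pass http://idfsoft.com;
    }
}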

The http proxy module provides many directives; the ones used most often are proxy_pass and proxy_cache.

If you want to purge entries from proxy_cache, you need to compile in the third-party ngx_cache_purge module, which clears the cache for a given URL. This has to be done when nginx is built, for example:

./configure --add-module=../ngx_cache_purge-1.0 ......
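Once the module is compiled in, a minimal cache-plus-purge setup could look like the sketch below; the zone name, cache path and /purge location are illustrative assumptions rather than this article's configuration:

# defined in the http block: 10 MB of cache keys, up to 1 GB of cached responses
proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=static_cache:10m max_size=1g inactive=60m;

server {
    location / {
        proxy_cache static_cache;
        proxy_cache_valid 200 302 10m;
        proxy_pass http://idfsoft.com;
    }

    # ngx_cache_purge: requesting /purge/some/uri evicts that URI from the cache
    location ~ /purge(/.*) {
        proxy_cache_purge static_cache $host$1$is_args$args;
    }
}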

nginx implements simple load balancing through the upstream module; the upstream block has to be defined inside the http context.

Inside the upstream block you list the backend servers. The default scheduling is round-robin; if requests from the same visitor should always be handled by the same backend server, enable ip_hash, for example:

upstream idfsoft.com {
  ip_hash;
  server 127.0.0.1:9080 weight=5;
  server 127.0.0.1:8080 weight=5;
  server 127.0.0.1:1111;
}

Note: this method is still essentially round-robin, and because a client's IP address can change (dynamic IPs, proxies, VPNs and the like), ip_hash cannot fully guarantee that the same client is always handled by the same backend server.
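If IP-based stickiness is too coarse, stock nginx also provides the generic hash directive; the sketch below hashes on a session cookie instead (the cookie name is only an example, not part of the original article):

upstream idfsoft.com {
    # map requests to backends by a session cookie; "consistent" enables ketama hashing,
    # which limits how many keys are remapped when servers are added or removed
    hash $cookie_sessionid consistent;
    server 127.0.0.1:9080;
    server 127.0.0.1:8080;
}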

Once the upstream is defined, add the following inside the server block:

server {
  location / {
    proxy_pass http://idfsoft.com;
  }
}
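In practice the proxied location usually also forwards the original host name and client address to the backends; the proxy_set_header lines below are a common addition rather than part of the original example:

server {
  location / {
    proxy_pass http://idfsoft.com;
    # pass the original Host header and the real client address on to the backends
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }
}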

nginx load balancing configuration

Environment

System    IP               Role                 Service
centos8   192.168.222.250  nginx load balancer  nginx
centos8   192.168.222.137  Web1 server          apache
centos8   192.168.222.138  Web2 server          nginx

The load balancer has nginx installed from source, while the two web servers have nginx and apache installed with yum respectively.

For a detailed source installation of nginx, see my separate nginx blog post.
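If you do not want to follow that post, the rough shape of a source build is sketched below; the version number and configure options are assumptions, so adapt them to your environment:

dnf -y install gcc make pcre-devel zlib-devel openssl-devel
wget http://nginx.org/download/nginx-1.24.0.tar.gz
tar xf nginx-1.24.0.tar.gz && cd nginx-1.24.0
./configure --prefix=/usr/local/nginx
make && make install

The systemctl commands used for the load balancer later in this article assume a systemd unit exists for the source-built nginx; a minimal one (path and contents are likewise an assumption) could look like this:

# /usr/lib/systemd/system/nginx.service
[Unit]
Description=nginx server daemon
After=network.target

[Service]
Type=forking
ExecStart=/usr/local/nginx/sbin/nginx
ExecReload=/usr/local/nginx/sbin/nginx -s reload
ExecStop=/usr/local/nginx/sbin/nginx -s stop

[Install]
WantedBy=multi-user.target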

修改Web服務(wù)器的默認主頁
Web1:

[root@Web1 ~]# yum -y install httpd   // install the service
[root@Web1 ~]# systemctl stop firewalld.service  // stop the firewall
[root@Web1 ~]# vim /etc/selinux/config 
SELINUX=disabled
[root@Web1 ~]# setenforce 0
[root@Web1 ~]# systemctl disable --now firewalld.service 
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@Web1 ~]# cd /var/www/html/
[root@Web1 html]# ls
[root@Web1 html]# echo "apache" > index.html  // write the page content
[root@Web1 html]# cat index.html 
apache
[root@Web1 html]# systemctl enable --now httpd
Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service → /usr/lib/systemd/system/httpd.service.
[root@Web1 html]# ss -antl
State      Recv-Q     Send-Q          Local Address:Port           Peer Address:Port     Process     
LISTEN     0          128                   0.0.0.0:22                  0.0.0.0:*                    
LISTEN     0          128                      [::]:22                     [::]:*                    
LISTEN     0          128                         *:80                        *:*                    

Access it in a browser:
[Screenshot: the page returns "apache"]
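If you prefer the command line, the same check can be done with curl; given the index.html written above, the request should return the word apache:

[root@Web1 html]# curl http://192.168.222.137
apache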

Web2:

[root@Web2 ~]# yum -y install nginx  // install the service
[root@Web2 ~]# systemctl stop firewalld.service // stop the firewall
[root@Web2 ~]# vim /etc/selinux/config 
SELINUX=disabled
[root@Web2 ~]# setenforce 0
[root@Web2 ~]# systemctl disable --now firewalld.service 
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@Web2 ~]# cd /usr/share/nginx/html/
[root@Web2 html]# ls
404.html  50x.html  index.html  nginx-logo.png  poweredby.png
[root@Web2 html]# echo "nginx" > index.html  // write the page content
[root@Web2 html]# cat index.html 
nginx
[root@Web2 html]# systemctl enable --now nginx.service 
Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service → /usr/lib/systemd/system/nginx.service.
[root@Web2 html]# ss -antl
State      Recv-Q     Send-Q          Local Address:Port           Peer Address:Port     Process     
LISTEN     0          128                   0.0.0.0:111                 0.0.0.0:*                    
LISTEN     0          128                   0.0.0.0:80                  0.0.0.0:*                    
LISTEN     0          32              192.168.122.1:53                  0.0.0.0:*                    
LISTEN     0          128                   0.0.0.0:22                  0.0.0.0:*                    
LISTEN     0          128                      [::]:111                    [::]:*                    
LISTEN     0          128                      [::]:80                     [::]:*                    
LISTEN     0          128                      [::]:22                     [::]:*                    

Access it in a browser:
[Screenshot: the page returns "nginx"]

Enabling nginx load balancing and reverse proxying

[root@nginx ~]# vim /usr/local/nginx/conf/nginx.conf
...

upstream webserver {              // add inside the http block
    server 192.168.222.137;
    server 192.168.222.138;
}
...

 location / {               // modify inside the server block
            root   html;
            proxy_pass http://webserver;
        }

[root@nginx ~]# systemctl reload nginx.service 
// reload the configuration
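Whenever you edit the configuration, it is worth validating it before reloading; with this article's install prefix the check is:

[root@nginx ~]# /usr/local/nginx/sbin/nginx -t   // reports "syntax is ok" / "test is successful" when the file is valid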

Test:
Enter the load balancer's IP address in a browser.
[Screenshots: successive requests are answered alternately by the apache and nginx backends]
Edit the nginx configuration on the load balancer again:

[root@nginx ~]# vim /usr/local/nginx/conf/nginx.conf
 upstream webserver {      // modify inside the http block
    server 192.168.222.137 weight=3;
    server 192.168.222.138;
}
[root@nginx ~]# systemctl reload nginx.service 
// reload the configuration
[root@nginx ~]# curl 192.168.222.250
apache
[root@nginx ~]# curl 192.168.222.250
apache
[root@nginx ~]# curl 192.168.222.250
apache
[root@nginx ~]# curl 192.168.222.250
nginx
[root@nginx ~]# curl 192.168.222.250
apache
[root@nginx ~]# curl 192.168.222.250
apache
[root@nginx ~]# curl 192.168.222.250
apache
[root@nginx ~]# curl 192.168.222.250
nginx
// Roughly three out of every four requests now go to apache and one goes to nginx, matching the 3:1 weights. Weighting like this lets older or lower-spec servers in the pool receive less traffic and so relieves their load.
[root@nginx ~]# vim /usr/local/nginx/conf/nginx.conf
 upstream webserver {    // modify inside the http block
     ip_hash; 
    server 192.168.222.137 weight=3;
    server 192.168.222.138;
}
[root@nginx ~]# systemctl reload nginx.service 
// reload the configuration
[root@nginx ~]# curl 192.168.222.250
nginx
[root@nginx ~]# curl 192.168.222.250
nginx
[root@nginx ~]# curl 192.168.222.250
nginx
[root@nginx ~]# curl 192.168.222.250
nginx
[root@nginx ~]# curl 192.168.222.250
nginx
[root@nginx ~]# curl 192.168.222.250
nginx
[root@nginx ~]# curl 192.168.222.250
nginx
// Every request is now answered by nginx: because of ip_hash, once this client was mapped to that backend it keeps being served by it. As noted earlier the method is still hash-based round-robin, so it cannot absolutely guarantee that one client is always handled by the same server.
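Weights and ip_hash are not the only scheduling options; nginx also ships a least_conn directive that hands each new request to the backend with the fewest active connections. A sketch (not used in this article's tests) would be:

upstream webserver {
    least_conn;              # prefer the backend with the fewest active connections
    server 192.168.222.137;
    server 192.168.222.138;
}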

Highly available nginx load balancer with Keepalived

實驗環(huán)境

系統(tǒng) 角色 服務(wù) IP
centos8 nginx負載均衡器,master nginx,keepalived 192.168.222.250
centos8 nginx負載均衡器,backup nginx,keepalived 192.168.222.139
centos8 Web1服務(wù)器 apache 192.168.222.137
centos8 Web2服務(wù)器 nginx 192.168.222.138

For a detailed source installation of nginx, see my separate nginx blog post.
VIP: 192.168.222.133

修改Web服務(wù)器的默認主頁

Web1:

[root@Web1 ~]# yum -y install httpd   // install the service
[root@Web1 ~]# systemctl stop firewalld.service  // stop the firewall
[root@Web1 ~]# vim /etc/selinux/config 
SELINUX=disabled
[root@Web1 ~]# setenforce 0
[root@Web1 ~]# systemctl disable --now firewalld.service 
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@Web1 ~]# cd /var/www/html/
[root@Web1 html]# ls
[root@Web1 html]# echo "apache" > index.html  // write the page content
[root@Web1 html]# cat index.html 
apache
[root@Web1 html]# systemctl enable --now httpd
Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service → /usr/lib/systemd/system/httpd.service.
[root@Web1 html]# ss -antl
State      Recv-Q     Send-Q          Local Address:Port           Peer Address:Port     Process     
LISTEN     0          128                   0.0.0.0:22                  0.0.0.0:*                    
LISTEN     0          128                      [::]:22                     [::]:*                    
LISTEN     0          128                         *:80                        *:*                    

Access it in a browser:
[Screenshot: the page returns "apache"]

Web2:

[root@Web2 ~]# yum -y install nginx  // install the service
[root@Web2 ~]# systemctl stop firewalld.service // stop the firewall
[root@Web2 ~]# vim /etc/selinux/config 
SELINUX=disabled
[root@Web2 ~]# setenforce 0
[root@Web2 ~]# systemctl disable --now firewalld.service 
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@Web2 ~]# cd /usr/share/nginx/html/
[root@Web2 html]# ls
404.html  50x.html  index.html  nginx-logo.png  poweredby.png
[root@Web2 html]# echo "nginx" > index.html  // write the page content
[root@Web2 html]# cat index.html 
nginx
[root@Web2 html]# systemctl enable --now nginx.service 
Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service → /usr/lib/systemd/system/nginx.service.
[root@Web2 html]# ss -antl
State      Recv-Q     Send-Q          Local Address:Port           Peer Address:Port     Process     
LISTEN     0          128                   0.0.0.0:111                 0.0.0.0:*                    
LISTEN     0          128                   0.0.0.0:80                  0.0.0.0:*                    
LISTEN     0          32              192.168.122.1:53                  0.0.0.0:*                    
LISTEN     0          128                   0.0.0.0:22                  0.0.0.0:*                    
LISTEN     0          128                      [::]:111                    [::]:*                    
LISTEN     0          128                      [::]:80                     [::]:*                    
LISTEN     0          128                      [::]:22                     [::]:*                    

Access it in a browser:
[Screenshot: the page returns "nginx"]

Enabling nginx load balancing and reverse proxying

On the Keepalived master node, nginx should be enabled to start at boot.
master:

[root@master ~]# systemctl status nginx.service 
● nginx.service - nginx server daemon
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2022-10-18 21:27:54 CST; 1h 1min ago
  Process: 46768 ExecStart=/usr/local/nginx/sbin/nginx (code=exited, status=0/SUCCESS)
 Main PID: 46769 (nginx)
    Tasks: 2 (limit: 12221)
   Memory: 2.6M
   CGroup: /system.slice/nginx.service
           ├─46769 nginx: master process /usr/local/nginx/sbin/nginx
           └─46770 nginx: worker process

Oct 18 21:27:54 nginx systemd[1]: Starting nginx server daemon...
Oct 18 21:27:54 nginx systemd[1]: Started nginx server daemon.
[root@master ~]# vim /usr/local/nginx/conf/nginx.conf
...

upstream webserver {              // add inside the http block
    server 192.168.222.137;
    server 192.168.222.138;
}
...

 location / {               // modify inside the server block
            root   html;
            proxy_pass http://webserver;
        }

[root@master ~]# systemctl reload nginx.service 
// reload the configuration

Test:
Enter the load balancer's IP address in a browser.
[Screenshots: requests are answered alternately by the apache and nginx backends]

backup:
On the Keepalived backup node nginx is not enabled at boot; if it were enabled, requests to the VIP might fail later on, so start it only when you need to test.

[root@backup ~]# systemctl status nginx.service 
● nginx.service - nginx server daemon
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2022-10-18 22:25:31 CST; 1s ago
  Process: 73641 ExecStart=/usr/local/nginx/sbin/nginx (code=exited, status=0/SUCCESS)
 Main PID: 73642 (nginx)
    Tasks: 2 (limit: 12221)
   Memory: 2.7M
   CGroup: /system.slice/nginx.service
           ├─73642 nginx: master process /usr/local/nginx/sbin/nginx
           └─73643 nginx: worker process

Oct 18 22:25:31 backup systemd[1]: Starting nginx server daemon...
Oct 18 22:25:31 backup systemd[1]: Started nginx server daemon.
[root@backup ~]# vim /usr/local/nginx/conf/nginx.conf
...

upstream webserver {              // add inside the http block
    server 192.168.222.137;
    server 192.168.222.138;
}
...

 location / {               // modify inside the server block
            root   html;
            proxy_pass http://webserver;
        }
[root@backup ~]# systemctl reload nginx.service 
// reload the configuration

Access it:
Enter the load balancer's IP address in a browser.
[Screenshots: requests are answered alternately by the apache and nginx backends]

Installing Keepalived

master:

[root@master ~]# dnf list all |grep keepalived  // check that the package is available
Failed to set locale, defaulting to C.UTF-8
keepalived.x86_64                                      2.1.5-6.el8                                            AppStream 
[root@master ~]# dnf -y install keepalived

backup:

[root@backup ~]# dnf list all |grep keepalived // check that the package is available
Failed to set locale, defaulting to C.UTF-8
Module yaml error: Unexpected key in data: static_context [line 9 col 3]   (message repeated several times)
keepalived.x86_64                                                 2.1.5-6.el8                                            AppStream   
[root@backup ~]# dnf -y install keepalived

Configuring Keepalived

master:

[root@master ~]# cd /etc/keepalived/
[root@master keepalived]# ls
keepalived.conf
[root@master keepalived]# mv keepalived.conf{,-bak}  // back up the original configuration file
[root@master keepalived]# ls
keepalived.conf-bak
[root@master keepalived]# vim keepalived.conf  // write a new configuration file
[root@master keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id lb01
}

vrrp_instance VI_1 {        // instance name must be identical on master and backup
    state BACKUP
    interface ens33      // network interface
    virtual_router_id 51
    priority 100     // higher than the backup node's priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass tushanbu   // password (any string, can be randomly generated)
    }
    virtual_ipaddress {
        192.168.222.133    // the highly available virtual IP (VIP)
    }
}

virtual_server 192.168.222.133 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.222.250 80 {  // master node IP
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.222.139 80 {   // backup node IP
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@master keepalived]# systemctl enable --now keepalived.service 
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.
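To confirm that the instance is actually advertising, you can watch for VRRP traffic on the configured interface; this optional check is not part of the original article:

[root@master ~]# tcpdump -nn -i ens33 vrrp
// the node currently holding the VIP should send an advertisement roughly once per
// second (advert_int 1), carrying virtual router id 51 and its priority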

backup:

[root@backup ~]# cd /etc/keepalived/
[root@backup keepalived]# ls
keepalived.conf
[root@backup keepalived]# mv keepalived.conf{,-bak} // back up the original configuration file
[root@backup keepalived]# ls
keepalived.conf-bak
[root@backup keepalived]# vim keepalived.conf // write a new configuration file
[root@backup keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id lb02    
}

vrrp_instance VI_1 {       // instance name must be identical on master and backup
    state BACKUP
    interface ens33      // network interface
    virtual_router_id 51
    priority 90     // lower than the master node's priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass tushanbu   // password (must match the master's)
    }
    virtual_ipaddress {
        192.168.222.133    // the highly available virtual IP (VIP)
    }
}

virtual_server 192.168.222.133 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.222.250 80 {   // master node IP
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.222.139 80 {   // backup node IP
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@backup keepalived]# systemctl enable --now keepalived.service 
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.
[root@backup keepalived]# systemctl start nginx
// nginx can be started now for testing

Check the VIP
master:

[root@master keepalived]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:0528 brd ffffff:ff
    inet 192.168.222.250/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever

backup:

[root@backup keepalived]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:31f9 brd ffffff:ff
    inet 192.168.222.139/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ffaff9/64 scope link 
       valid_lft forever preferred_lft forever

// The VIP sits on the master host: the Keepalived configuration gives master a higher priority than backup, so this is the expected state.

Access the VIP in a browser:
[Screenshots: requests to the VIP are answered alternately by the apache and nginx backends]

master:

[root@master keepalived]# curl 192.168.222.133
apache
[root@master keepalived]# curl 192.168.222.133
nginx

此是關(guān)閉master上面的nginx和keepalived的

[root@master keepalived]# systemctl stop nginx.service 
[root@master keepalived]# systemctl stop keepalived.service 
[root@master keepalived]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:0528 brd ffffff:ff
    inet 192.168.222.250/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
// the VIP is no longer on the master

backup:

[root@backup keepalived]# systemctl enable --now keepalived
[root@backup keepalived]# systemctl start nginx.service 
[root@backup keepalived]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:31f9 brd ffffff:ff
    inet 192.168.222.139/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ffaff9/64 scope link 
       valid_lft forever preferred_lft forever
//此時backup上面出現(xiàn)VIP,備節(jié)點變成了主節(jié)點

[root@backup keepalived]# curl 192.168.222.133
apache
[root@backup keepalived]# curl 192.168.222.133
nginx

Access the VIP in a browser:
[Screenshots: the site is still reachable through the VIP and alternates between the two backends]

As you can see, even with one of the nginx load balancers down, the site remains reachable; this is what the highly available load-balancer configuration buys you.

Restart nginx and keepalived on the master:

[root@master keepalived]# systemctl enable --now keepalived
[root@master keepalived]# systemctl enable --now nginx
[root@master keepalived]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:0528 brd ffffff:ff
    inet 192.168.222.250/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
//可以發(fā)現(xiàn)VIP出現(xiàn)在master節(jié)點上面

編寫腳本監(jiān)控Keepalived和nginx的狀態(tài)

master:

[root@master keepalived]# cd
[root@master ~]# mkdir /scripts
[root@master ~]# cd /scripts/
[root@master scripts]# vim check_nginx.sh
[root@master scripts]# cat check_nginx.sh
#!/bin/bash
# count running nginx processes; if none are left, stop keepalived so the VIP fails over
nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep 'nginx'|wc -l)
    if [ $nginx_status -lt 1 ];then
            systemctl stop keepalived
    fi
[root@master scripts]# chmod +x check_nginx.sh 
[root@master scripts]# ll
total 4
-rwxr-xr-x. 1 root root 151 Oct 19 00:32 check_nginx.sh
[root@master scripts]# vim notify.sh
[root@master scripts]# cat notify.sh 
#!/bin/bash
case "$1" in
    master)
        nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep 'nginx'|wc -l)
        if [ $nginx_status -lt 1 ];then
            systemctl start nginx
        fi
    ;;
    backup)
        nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep 'nginx'|wc -l)
        if [ $nginx_status -gt 0 ];then
            systemctl stop nginx
        fi
    ;;
    *)
         echo "Usage:$0 master|backup VIP"
    ;;
esac

[root@master scripts]# chmod +x notify.sh 
[root@master scripts]# ll
total 8
-rwxr-xr-x. 1 root root 151 Oct 19 00:32 check_nginx.sh
-rwxr-xr-x. 1 root root 399 Oct 19 00:35 notify.sh
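The check above only counts nginx processes; a variant that verifies nginx actually answers HTTP on the local machine is sketched below as an alternative (it is not used in the rest of the article):

#!/bin/bash
# release the VIP by stopping keepalived if nginx no longer answers on port 80
if ! curl -s -o /dev/null --max-time 2 http://127.0.0.1; then
    systemctl stop keepalived
fi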

backup:
Create the directory that will hold the scripts ahead of time:

[root@backup keepalived]# cd
[root@backup ~]# mkdir  /scripts
[root@backup ~]# cd /scripts/

從主節(jié)點上面將腳本到備節(jié)點提前創(chuàng)建好的存放目錄里面

[root@master scripts]# scp notify.sh 192.168.222.139:/scripts/
root@192.168.222.139's password: 
notify.sh                                                          100%  399   216.0KB/s   00:00    
[root@backup scripts]# ls
notify.sh
[root@backup scripts]# cat notify.sh 
#!/bin/bash
case "$1" in
    master)
        nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep 'nginx'|wc -l)
        if [ $nginx_status -lt 1 ];then
            systemctl start nginx
        fi
    ;;
    backup)
        nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep 'nginx'|wc -l)
        if [ $nginx_status -gt 0 ];then
            systemctl stop nginx
        fi
    ;;
    *)
         echo "Usage:$0 master|backup VIP"
    ;;
esac

配置keepalived加入監(jiān)控腳本的配置

master:

[root@master scripts]# cd
[root@master ~]# vim /etc/keepalived/keepalived.conf
[root@master ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
 
global_defs {
   router_id lb01
}
 
vrrp_script nginx_check {                               // added
    script "/scripts/check_nginx.sh"                    // added
    interval 1                                          // added
    weight -20                                          // added
}                                                       // added
 
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass tushanbu
    }
    virtual_ipaddress {
        192.168.222.133
    }
     track_script {                     // added
        nginx_check                     // added
    }                                   // added
    notify_master "/scripts/notify.sh master"  // added
}
virtual_server 192.168.222.133 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP
 
    real_server 192.168.222.250 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
 
    real_server 192.168.222.139 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@master ~]# systemctl restart keepalived.service 

backup:
The backup node does not need to check whether nginx is healthy itself; it simply starts nginx when it is promoted to MASTER and stops it when it drops back to BACKUP.

[root@backup scripts]# cd
[root@backup ~]# vim /etc/keepalived/keepalived.conf
[root@backup ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
 
global_defs {
   router_id lb02
}
 
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass tushanbu
    }
    virtual_ipaddress {
        192.168.222.133
    }
    notify_master "/scripts/notify.sh master"           // added
    notify_backup "/scripts/notify.sh backup"           // added
}
virtual_server 192.168.222.133 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP
 
    real_server 192.168.222.250 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
 
    real_server 192.168.222.139 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@backup ~]# systemctl restart keepalived.service 

Test
Check the state during normal operation:

[root@master ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:2983:57 brd ffffff:ff
    inet 192.168.222.250/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff8357/64 scope link 
       valid_lft forever preferred_lft forever
[root@master]# curl 192.168.222.133
apache
[root@master]# curl 192.168.222.133
nginx
//此時VIP在主節(jié)點上面

關(guān)閉master的nginx

[root@master ~]# systemctl stop nginx.service 
[root@master ~]# ss -antl
State      Recv-Q     Send-Q          Local Address:Port           Peer Address:Port     Process     
LISTEN     0          128                   0.0.0.0:22                  0.0.0.0:*                    
LISTEN     0          128                      [::]:22                     [::]:*                    
[root@master ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:0528 brd ffffff:ff
    inet 192.168.222.250/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
// the VIP is gone from the master

backup:

[root@backup ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:31f9 brd ffffff:ff
    inet 192.168.222.139/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ffaff9/64 scope link 
       valid_lft forever preferred_lft forever
[root@backup ~]# curl 192.168.222.133
apache
[root@backup ~]# curl 192.168.222.133
nginx
//備節(jié)點變成主機節(jié)點

Start nginx on the master again:

[root@master ~]# systemctl restart keepalived.service 
[root@master ~]# systemctl restart nginx.service 
[root@master ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:2983:57 brd ffffff:ff
    inet 192.168.222.250/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff8357/64 scope link 
       valid_lft forever preferred_lft forever
[root@master]# curl 192.168.222.133
apache
[root@master]# curl 192.168.222.133
nginx
// the VIP has returned to the master

Reviewing editor: 彭菁

聲明:本文內(nèi)容及配圖由入駐作者撰寫或者入駐合作網(wǎng)站授權(quán)轉(zhuǎn)載。文章觀點僅代表作者本人,不代表電子發(fā)燒友網(wǎng)立場。文章及其配圖僅供工程師學習之用,如有內(nèi)容侵權(quán)或者其他違規(guī)問題,請聯(lián)系本站處理。 舉報投訴
  • 監(jiān)控
    +關(guān)注

    關(guān)注

    6

    文章

    2284

    瀏覽量

    55855
  • 服務(wù)器
    +關(guān)注

    關(guān)注

    12

    文章

    9596

    瀏覽量

    86968
  • Nginx負載均衡
    +關(guān)注

    關(guān)注

    0

    文章

    2

    瀏覽量

    1722

Original title: Keepalived高可用nginx負載均衡器

Source: WeChat official account 馬哥Linux運維 (magedu-Linux)
