As the number of users and the amount of traffic grows, a single instance of the application generally cannot handle all requests, and we have to run several instances of it on multiple servers. We then need a web server that distributes incoming requests among the available instances, so the load is divided between them. nginx can do this: it has built-in load balancing. To implement load balancing in nginx, you must create an upstream group in the nginx.conf file, inside the http context.
http {
    upstream backend {
        server backend1.example.com;
        server backend2.example.com;
        server backend3.example.com;
    }
}
In the configuration above we have created a group called backend consisting of three servers. This means three instances of the application are running, and nginx can forward incoming requests to these servers. Next we need to create a location block in the server context (this context also lives inside the http context). After the word location you write the path for which requests should be handled; in the example below this is the root of the website (/). Writing / after location matches all requests. You can also handle only the requests sent to your APIs by using /api instead of /; then every request whose URL starts with /api enters that location block (a sketch of this variant appears after the full configuration below).
Then we use proxy_pass to forward requests to the running instances. After the word proxy_pass we write the name of the upstream group defined above. This distributes all requests sent to / among the backend servers.
http {
    upstream backend {
        server backend1.example.com;
        server backend2.example.com;
        server backend3.example.com;
    }

    server {
        location / {
            proxy_pass http://backend;
        }
    }
}
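For the /api filtering mentioned earlier, a minimal sketch of the location block could look like this (the /api prefix is just an illustration; the upstream group is the same one defined above):
server {
    # Only requests whose URL starts with /api are forwarded to the upstream group
    location /api {
        proxy_pass http://backend;
    }
}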
By default, the nginx web server uses the round-robin algorithm to distribute requests among the servers. We can change this behavior, for example by specifying how many requests should go to each server.
By adding weight next to the server names, we specify, for example, that out of every 7 requests, 5 are sent to backend1 and 2 to backend2. We can also mark a server as a backup server, as backend4 is in the configuration below.
If a server is not available, you can stop requests from being sent to it by adding the word down after its name, as with backend3, or by commenting out its server line with a # sign.
upstream backend {
    server backend1.example.com weight=5;
    server backend2.example.com weight=2;
    server backend3.example.com down;
    server backend4.example.com backup;
}
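Beyond weights, the distribution algorithm itself can be changed. As one example, open-source nginx also ships with the least_conn method, which sends each new request to the server with the fewest active connections; a minimal sketch:
upstream backend {
    # Choose the server with the fewest active connections instead of round robin
    least_conn;
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}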
With the backup parameter in the weighted configuration above, backend4 is designated as the backup server, and no requests are sent to it as long as the other servers are available. If backend1 and backend2 both become unreachable, nginx then sends requests to the backup server backend4. By default, however, nginx does not check the health of the servers, so we have to add the settings related to health_check. By adding the health_check directive below proxy_pass, nginx sends a request to each backend server every 5 seconds, and if a server responds with a status code outside the range 200 to 399, nginx stops sending subsequent requests to it.
server {
    location / {
        proxy_pass http://backend;
        health_check;
    }
}
In addition to the health_check directive itself, we can also pass it a series of parameters. For example, we can specify the URI to which the health-check request is sent, or the port it is sent to.
server {
    location / {
        proxy_pass http://backend;
        health_check port=8080;
        #health_check uri=/healthcheck;
    }
}
(The # sign is used to comment out a directive.)
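health_check accepts other parameters as well. For example (based on the NGINX Plus documentation), interval changes how often checks are sent, fails is the number of consecutive failed checks after which a server is considered unhealthy, and passes is the number of consecutive successful checks needed to consider it healthy again:
server {
    location / {
        proxy_pass http://backend;
        # Check every 10 seconds; 3 failures mark a server unhealthy, 2 successes restore it
        health_check interval=10 fails=3 passes=2;
    }
}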
We can also define custom conditions that a response must satisfy for the health_check to pass, using a match block:
http {
    #...
    match welcome {
        status 200;
        header Content-Type = text/html;
        body ~ "Welcome to nginx!";
    }

    server {
        #...
        location / {
            proxy_pass http://backend;
            health_check match=welcome;
        }
    }
}
In the configuration above, we have specified that for a server to be considered available, it must return status code 200 with a Content-Type of text/html, and the response body must contain "Welcome to nginx!".
Note: the match and health_check directives are only available in the commercial version of nginx (NGINX Plus).
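In the open-source version, a rough substitute is passive health checking through the max_fails and fail_timeout parameters on each server line; a minimal sketch:
upstream backend {
    # After 3 failed attempts, stop sending requests to this server for 30 seconds
    server backend1.example.com max_fails=3 fail_timeout=30s;
    server backend2.example.com max_fails=3 fail_timeout=30s;
}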