Learn how to load balance your applications following best practices with NGINX and NGINX Plus.
On-Demand Recording: https://www.nginx.com/resources/webinars/high-performance-load-balancing/
Join this webinar to learn:
* How to configure basic HTTP load balancing features
* The essential elements of load balancing: session persistence, health checks, and SSL termination
* How to load balance MySQL, DNS, and other common TCP/UDP applications
* How to have NGINX Plus automatically discover new service instances in an auto-scaling or microservices environment
About the webinar
You’ve built a great application and it’s gaining in popularity. Or maybe you already have a hardware load balancer and you’re looking to replace it with a software solution. In this webinar we’ll share the latest information on how to scale out and load balance your applications with NGINX and NGINX Plus.
2. MORE INFORMATION AT NGINX.COM
Who Are We?
Floyd Smith
Director, Content Marketing, NGINX
Formerly:
• Sr. Technical Writer, Apple
• Group Channel Manager, Altavista
• Author of best-selling technology books
Faisal Memon
Product Marketer, NGINX
Formerly:
• Sr. Technical Marketing Engineer, Riverbed
• Technical Marketing Engineer, Cisco
• Software Engineer, Cisco
3. Agenda
1. Introducing NGINX
2. Load Balancing History
3. Basic NGINX HTTP load balancing
4. Essentials: health checks, persistence, SSL termination, etc.
5. Improving performance
6. TCP/UDP load balancing
7. DNS service discovery
4. NGINX powers 50% of the top 100,000 busiest websites
Source: W3Techs Web Technology Survey
5. Where NGINX Plus fits
[Diagram: HTTP traffic flows from the Internet to NGINX Plus, which serves three roles:]
• Web Server – serve content from disk
• Application Gateway – FastCGI, uWSGI, Passenger…
• Reverse Proxy – caching, load balancing…
6. NGINX and Load Balancing
• Survey says: large companies are interested in global load balancing, smaller ones in CDNs and public cloud
• Load balancing is a hot topic on our website
• Gartner report shows NGINX as an ADC/load balancing leader
• Ebook on 5 Reasons to Switch to Load Balancing
• Load balancing training
• NGINX Professional Services – architectural experts
…and much more; contact Sales for a free evaluation
8. NGINX Plus works in all environments
Public/Private/Hybrid Cloud • Bare Metal • Containers
10. Load Balancing Overview
[Diagram: clients connect to a virtual server, which distributes traffic across a pool of servers]
Why a load balancer?
• Availability
• Performance
• Security
• Control
11. Basic Load Balancing Configuration
upstream my_upstream {
    server server1.example.com;
    server server2.example.com;
}
server {
    listen 80;
    location / {
        proxy_set_header Host $host;
        proxy_pass http://my_upstream;
    }
}
• upstream defines the pool
• server defines the virtual server
• listen defines the IP address and port the virtual server listens on. The default is to bind to port 80 on all IPs on the system.
• proxy_pass tells the virtual server which pool to use
• proxy_set_header passes through the original client Host header. The default is to rewrite the Host header to the name and port of the proxied server.
• location defines which URIs the enclosed configuration applies to
12. Key Files and Directories
• /etc/nginx/ -- Where all NGINX configuration is stored
• /etc/nginx/nginx.conf -- Top-level NGINX configuration; should not require much modification
• /etc/nginx/conf.d/*.conf -- Where your configuration for virtual servers and upstreams goes, e.g. www.example.com.conf
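As a sketch, a hypothetical /etc/nginx/conf.d/www.example.com.conf could hold a complete virtual server together with its pool (the server names here are placeholders, not part of any default install):

```nginx
# /etc/nginx/conf.d/www.example.com.conf (hypothetical example)
upstream my_upstream {
    server server1.example.com;
    server server2.example.com;
}

server {
    listen 80;
    server_name www.example.com;

    location / {
        # Preserve the client's original Host header
        proxy_set_header Host $host;
        proxy_pass http://my_upstream;
    }
}
```

Keeping one file per site in conf.d/ makes it easy to add or remove sites without touching the top-level nginx.conf.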
14. Multiplexing Multiple Sites on One IP
server {
    listen 80 default_server;
    server_name www.example.com;
    # ...
}
server {
    listen 80;
    server_name www.example2.com;
    # ...
}
server {
    listen 80;
    server_name www.example3.com;
    # ...
}
• server_name defines the hostname the virtual server is responsible for.
• default_server defines the virtual server to use if the Host header is empty or does not match any server_name.
15. Layer 7 Request Routing
server {
    # ...
    location /service1 {
        proxy_pass http://upstream1;
    }
    location /service2 {
        proxy_pass http://upstream2;
    }
    location /service3 {
        proxy_pass http://upstream3;
    }
}
• location blocks are used to do Layer 7 routing based on the URL
16. Active Health Checks
upstream my_upstream {
    zone my_upstream 64k;
    server server1.example.com slow_start=30s;
}
server {
    # ...
    location /health {
        internal;
        health_check interval=5s uri=/test.php match=statusok;
        proxy_set_header Host www.example.com;
        proxy_pass http://my_upstream;
    }
}
match statusok {
    # Used for /test.php health check
    status 200;
    header Content-Type = text/html;
    body ~ "Server[0-9]+ is alive";
}
• Polls /test.php every 5 seconds
• If the response is not 200, the server is marked as failed
• If the response body does not contain “ServerN is alive”, the server is marked as failed
• Recovered/new servers slowly ramp up traffic over 30 seconds (slow_start)
• Exclusive to NGINX Plus
17. Hash Session Persistence
upstream my_upstream {
    server server1.example.com;
    server server2.example.com;
    hash $binary_remote_addr consistent;
}
• Always pins the same client IP address to the same server
Potential pitfalls:
• Uneven distribution – there could be hundreds of users sharing one IP address behind NAT
• Mobile – if the client changes IP address, the session is lost
18. Sticky Cookie Session Persistence
upstream my_upstream {
    server server1.example.com;
    server server2.example.com;
    sticky cookie name expires=1h
           domain=.example.com path=/;
}
• NGINX Plus inserts a cookie with the specified name
• expires defines how long the cookie is valid. The default is for the cookie to expire at the end of the browser session.
• domain specifies the domain the cookie is valid for. If not specified, the domain field of the cookie is left blank.
• path specifies the path the cookie is set for. If not specified, the path field of the cookie is left blank.
• Exclusive to NGINX Plus
19. Basic SSL Configuration
server {
    listen 80 default_server;
    server_name www.example.com;
    return 301 https://$server_name$request_uri;
}
server {
    listen 443 ssl default_server;
    server_name www.example.com;
    ssl_certificate cert.crt;
    ssl_certificate_key cert.key;
    # ...
}
• return 301 forces all traffic to SSL, which is also good for SEO
• The ssl parameter needs to be added to the listen directive
• ssl_certificate specifies where the public certificate is located
• ssl_certificate_key specifies where the private key is located
20. Using SSL to Upstream Servers
upstream my_upstream {
    server server1.example.com;
    server server2.example.com;
}
server {
    listen 443 ssl;
    # ...
    location / {
        proxy_set_header Host $host;
        proxy_pass https://my_upstream;
    }
}
• Use https instead of http in the proxy_pass directive
22. Modifications to main nginx.conf
user nginx;
worker_processes auto;
# ...
http {
    # ...
    keepalive_timeout 300s;
    keepalive_requests 100000;
}
• Set in the main nginx.conf file
• The default value for worker_processes varies by system and installation source
• auto means one worker process per core; recommended for most deployments
• keepalive_timeout controls how long idle client connections are kept open. Default: 75s
• keepalive_requests sets the maximum number of requests on a single client connection before it is closed
• keepalive_* can also be set per virtual server
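To sketch that last point, the keepalive settings can be overridden inside an individual server block, which then takes precedence over the http-level values for that virtual server only (the numbers here are illustrative, not recommendations):

```nginx
server {
    listen 80;
    server_name www.example.com;

    # Override the http-level keepalive defaults for this virtual server
    keepalive_timeout 120s;
    keepalive_requests 1000;
    # ...
}
```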
23. HTTP/1.1 Keepalive to Upstreams
upstream my_upstream {
    server server1.example.com;
    keepalive 32;
}
server {
    location / {
        proxy_set_header Host $host;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://my_upstream;
    }
}
• keepalive enables a TCP connection cache to the upstream servers
• By default NGINX uses HTTP/1.0 with Connection: close
• proxy_http_version upgrades the connection to HTTP/1.1
• proxy_set_header Connection "" enables keepalive by clearing the Connection: close header
24. Dual-stack RSA/ECC SSL Configuration
server {
    listen 443 ssl default_server;
    server_name www.example.com;
    ssl_certificate cert_rsa.crt;
    ssl_certificate_key cert_rsa.key;
    ssl_certificate cert_ecdsa.crt;
    ssl_certificate_key cert_ecdsa.key;
    # ...
}
• Specify two sets of ssl_certificate and ssl_certificate_key directives
• NGINX serves the ECDSA certificate to clients that support it and the RSA certificate to those that don’t
• Most modern browsers and OSs support ECC
• ECC has 2–3x better performance than RSA, based on our testing
25. SSL Session Caching and HTTP/2
server {
    listen 443 ssl http2 default_server;
    server_name www.example.com;
    ssl_certificate cert.crt;
    ssl_certificate_key cert.key;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
}
• Improves SSL/TLS performance
• A 1 MB session cache can store about 4,000 sessions
• The cache is shared across all NGINX workers
• HTTP/2 improves performance
• Note: HTTP/2 requires OpenSSL 1.0.2 or later to work properly
27. Basic TCP/UDP Load Balancing Configuration
stream {
    upstream my_upstream {
        server server1.example.com:1234;
        server server2.example.com:2345;
    }
    server {
        listen 1123 [udp];  # the udp parameter is optional
        proxy_pass my_upstream;
    }
}
• All TCP/UDP load balancing configuration goes within the stream block
• The port number is mandatory when configuring upstream TCP/UDP servers
• The udp parameter to the listen directive enables UDP load balancing
Note: stream configuration cannot go into the conf.d/ folder, which is included inside the http block. It is recommended to put a stream block in the main nginx.conf file and use the include directive to include a separate folder such as stream.conf.d/, which contains the virtual server and pool configuration.
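A minimal sketch of what that layout could look like in the main nginx.conf follows; the stream.conf.d/ path is just the convention suggested in the note above, not a required name:

```nginx
# /etc/nginx/nginx.conf (sketch)
user nginx;
worker_processes auto;

events {
    worker_connections 1024;
}

http {
    # HTTP virtual servers and upstreams
    include /etc/nginx/conf.d/*.conf;
}

stream {
    # TCP/UDP virtual servers and upstreams
    include /etc/nginx/stream.conf.d/*.conf;
}
```

Because stream is a top-level block like http, its included files can contain only stream-context directives; mixing them into conf.d/ would fail the configuration test.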
28. MySQL Load Balancing Configuration
stream {
    upstream galera {
        server db1.example.com:3306;
        server db2.example.com:3306 backup;
        server db3.example.com:3306 down;
    }
    server {
        listen 3306;
        proxy_pass galera;
        proxy_connect_timeout 1s;
    }
}
• TCP load balancing across a MySQL Galera cluster
• Uses a single master for all reads/writes
• Having multiple write masters can lead to collisions and potentially erroneous data
29. MySQL Load Balancing with Split Read/Writes
stream {
    upstream galera_write {
        server db1.example.com:3306;
        server db2.example.com:3306 backup;
        server db3.example.com:3306 down;
    }
    upstream galera_read {
        server db2.example.com:3306;
        server db3.example.com:3306;
    }
    server {
        listen 3308;
        proxy_pass galera_read;
        proxy_connect_timeout 1s;
    }
    server {
        listen 3309;
        proxy_pass galera_write;
        proxy_connect_timeout 1s;
    }
}
• Use separate virtual servers on different ports to separate reads and writes
• All writes go to the designated master, with the other servers as backups
• Reads are load balanced across the other servers
• Requires the application code to be more database-aware
30. DNS Load Balancing Configuration
stream {
    upstream dns_servers {
        server 192.168.136.130:53;
        server 192.168.136.131:53;
    }
    server {
        listen 53 udp;
        listen 53;  # tcp
        proxy_pass dns_servers;
        proxy_responses 1;
        proxy_timeout 1s;
        error_log /var/log/nginx/dns.log info;
    }
}
• Multiple listen directives because DNS uses TCP for responses greater than 512 bytes
• proxy_responses lets NGINX know there will be only a single response from the upstream server
• error_log instructs NGINX to output proxy events to the specified log file. There is no access log for TCP/UDP traffic because NGINX does not inspect the payload.
31. TCP/UDP Health Checks
stream {
    server {
        listen 12345;
        proxy_pass tcp;  # "tcp" and "dns_upstream" are upstream groups defined elsewhere
        health_check;
    }
    server {
        listen 53 udp;
        proxy_pass dns_upstream;
        health_check udp;
    }
}
• For TCP applications, NGINX establishes a TCP connection every 5 seconds – a standard “TCP connect” health check
• For UDP, it sends “nginx health check” every 5 seconds and expects the absence of an ICMP “Destination Unreachable” response
• Exclusive to NGINX Plus
33. Service Discovery with Consul
• A special “registrator” container watches for other containers starting or stopping
• When a container goes up or down, registrator updates the service registry
• NGINX Plus polls the service registry’s DNS interface to get an updated list of container IPs/ports
34. DNS Service Discovery with Consul
resolver consul:53 valid=10s;
upstream service1 {
    zone service1 64k;
    server service1.service.consul service=http resolve;
}
• NGINX Plus looks up consul in the /etc/hosts file if you are using links or the Docker embedded DNS server
• By default Consul uses this format for services: [tag.]<service>.service[.datacenter].<domain>
• Exclusive to NGINX Plus
35. DNS Service Discovery with Docker
resolver 127.0.0.11 valid=10s;
upstream service1 {
    zone service1 64k;
    server service1 service=http resolve;
}
• resolver is the IP address of the DNS server. For the Docker embedded DNS server this is always 127.0.0.11.
• The optional valid parameter overrides the DNS TTL value
• service=http tells NGINX Plus to look for DNS SRV records, which contain the port number
• The resolve parameter tells NGINX Plus to periodically re-resolve this hostname
• Exclusive to NGINX Plus
36. Summary
• The server directive defines a virtual server
• The upstream directive defines the pool
• The proxy_pass directive links the virtual server to the pool
• location blocks are used to do Layer 7 request routing based on the URL
• Multiple ssl_certificate and ssl_certificate_key directives can be used for dual-stack RSA/ECC
• The stream block is used for TCP/UDP load balancing
• NGINX Plus can integrate with the DNS interface of Docker and Consul
37. Q & A
Try NGINX Plus free for 30 days: nginx.com/free-trial-request
38. nginx.conf 2017
Sep 6–8, 2017 | Portland, OR
nginx.com/nginxconf
Sign up now, get 50% off. Use code: WEBINAR
Learn from industry veterans at the world’s top companies
Special guest: Jimmy Yang from HBO’s Silicon Valley
Architect The Future
Editor's Notes
Half of the top 10,000 busiest websites run NGINX.
We’re now the number one web server for the top 100,000 as well, and climbing fast in every category.
NGINX Plus gives you all the tools you need to deliver your application reliably.
Web Server
NGINX is a fully featured web server that can directly serve static content. NGINX Plus can scale to handle hundreds of thousands of clients simultaneously, and serve hundreds of thousands of content resources per second.
Application Gateway
NGINX handles all HTTP traffic, and forwards requests in a smooth, controlled manner to PHP, Ruby, Java, and other application types, using FastCGI, uWSGI, and Linux sockets.
Reverse Proxy
NGINX is a reverse proxy that you can put in front of your applications. NGINX can cache both static and dynamic content to improve overall performance, as well as load balance traffic, enabling you to scale out.
Being software, NGINX Plus can operate in any environment, from bare metal to VMs to containers.
We don’t need to QA and qualify every environment: if you can run Linux, you can run NGINX, and it will just work.
Not just across infrastructure – the same NGINX software that runs in production can also run in staging and development environments without incurring additional capital costs.
Keeping the different environments in sync as much as possible is an industry best practice and helps to reduce issues where something worked in dev but broke in production.
With NGINX Plus, enterprises can easily eliminate this potential gap in the deployment process.
Load balancing is an application architecture where multiple copies of the application run behind a load balancing device that spreads traffic among the servers.
The application being load balanced can be a turnkey app like Microsoft Exchange, a custom web app, or even a database.
Two main benefits of load balancing:
- Scale out to handle more load than a single server could on its own
- Redundancy to handle error conditions
There are two main concepts in load balancing: the virtual server and the pool.
The pool is the set of servers being load balanced.
The virtual server is the front end of the load balancer and hosts the IP and port that clients connect to.
- We recommend configuration to be put into conf.d directory, not sites-enabled or sites-available