Presentations by Liam Crilly, Owen Garrett and Ed English of NGINX at the ‘Architecting for now & the future with NGINX’ Lunch and Learn at the Shangri-La Hotel, The Shard, London. The presentations offer tips and insight into how NGINX helps maximize the performance and flexibility of cloud environments by laying the foundational building blocks for cloud-based microservices applications, API management, and service mesh initiatives.
Architecting for now & the future with NGINX London April 19
1. An Introduction to the NGINX
Application Platform
Ed English
16th April 2019
2. Where It All Began
“... when I started NGINX, I focused on a very specific problem – how to handle more customers per a single server.”
– Igor Sysoev, NGINX creator and founder
3. In 2002 … a high-performance web server and reverse proxy
MORE INFORMATION AT NGINX.COM
4. 16 years later…
• 350 million total sites running on NGINX
• 66.7% of the top 10,000 most visited websites
• 58% of all instances on Amazon Web Services
• 1 billion+ pulls: the most-pulled image on Docker Hub
• 78% of all sites using HTTP/2
• 1 million+ pulls of the NGINX Kubernetes Ingress Controller
6. Infrastructure Shifts Closer to Apps
From: Infrastructure & Ops teams; hardware, scale-up; one infrastructure for every app.
To: Application & DevOps teams; software, scale-out; every app gets multiple infrastructures.
7. Tools Sprawl Adds Complexity
• Legacy doesn’t go away
• Hardware doesn’t adapt to new apps and cloud
• Open source doesn’t accommodate standardization
8. A Lightweight Approach Combats Complexity
• PaaS, ESB, & HW LBs: cloud-only or inflexible
• Containers, Kubernetes: production ready? Not a silver bullet
9. Modernization Success Is An Evolution
Axes: app type (legacy → modern) vs. app architecture (simple → complex)
Architecture stages: Monolithic (ERP, CRM?) → Hybrid services (Mobile app?) → Microservices (Digital services?)
Benefits along the way: ↓ costs (“software-defined”, N/S performance); ↑ agility (“reusable”, E/W performance); ↑ scale (“refactored”, API and K8s traffic)
Enabling technology: 1. SW load balancer → 2. API gateway → 3. Service mesh
12. Today: Dynamic Application Gateway
• A single, clustered ingress/egress tier in front of apps.
• Optimizes north/south traffic delivery for apps and APIs.
• Combines load balancing, proxying, SSL, caching, WAF (web app firewall), and API management.
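A minimal sketch of such a gateway tier as NGINX configuration, assuming hypothetical upstream addresses and certificate paths (the WAF and API-management layers are separate products and omitted here):

```nginx
# Shared on-disk cache for proxied responses
proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m;

# Load-balanced pool of app servers (addresses are placeholders)
upstream app_servers {
    least_conn;
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}

server {
    # SSL/TLS termination at the gateway
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        proxy_cache app_cache;           # caching
        proxy_set_header Host $host;     # proxying
        proxy_pass http://app_servers;   # load balancing
    }
}
```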
13. Future: Dynamic Application Infrastructure
• A single app platform for monoliths and microservices.
• Optimizes east/west app traffic and app serving.
• Combines web server, app servers, KIC (Kubernetes Ingress Controller), and service mesh.
14. NGINX Application Platform
The industry’s only solution that drives 10x simplification and 80% cost savings by combining load balancers, API gateway, and service mesh into a single, modular platform.
Load balancer · API gateway · Service mesh
15. Embraces A Multitude Of Use Cases
Reverse proxy · Load balancer · WAF · Cache · API gateway · Ingress controller · Sidecar proxy · Web server · App server
20. NGINX + F5: Complementary Approaches
Open source-driven: 375M websites powered worldwide; 66% of the 10,000 busiest sites; 90M downloads per year.
Enterprise-driven: 25,000 customers worldwide; 49 of the Fortune 50; 10 of the world’s top 10 brands.
25. RETURN ON INVESTMENT
• 80% CAPEX and OPEX savings
• Consolidation: 10 solutions to 1
• Software on commodity hardware
• Free up budget for new projects
• Fund innovation, not the status quo
26. RETURN ON INVESTMENT
“Moving to the next generation of F5 hardware was going to cost more than $1M per data center. NGINX Plus gave us 50% more transactions per server, for one-sixth the price. We’re now 100% hardware free.”
– Senior Networking Leader, AppNexus
27. RETURN ON INVESTMENT
Goal: Improve performance, reduce costs, and go “hardware free” to improve agility.
NGINX Plus performs all load balancing; runs on Dell hardware with 50% more transactions at 83% less cost.
Deployed by the network team to replace F5 hardware that was too expensive and too slow.
28. Can software deliver at the scale of hardware?
“On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10 KB of memory per connection and less than 2% of network overhead. Many people believe that SSL/TLS takes a lot of CPU time and we hope the preceding numbers will help to dispel that.”
– Adam Langley, Google
“We have deployed TLS at a large scale using both hardware and software load balancers. We have found that modern software-based TLS implementations running on commodity CPUs are fast enough to handle heavy HTTPS traffic load without needing to resort to dedicated cryptographic hardware.”
– Doug Beaver, Facebook
“In practical deployment, we found that enabling and prioritizing ECDHE cipher suites caused negligible increase in CPU usage. HTTP keepalives and session resumption mean that most requests do not require a full handshake, so handshake operations do not dominate our CPU usage.”
– Jacob Hoffman-Andrews, Twitter
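The practices these quotes describe (ECDHE key exchange, session resumption, HTTP keepalives) map onto a handful of NGINX directives; a hedged sketch, with placeholder certificate paths:

```nginx
server {
    listen 443 ssl http2;
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    # Prefer ECDHE key exchange, which adds negligible CPU cost
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;

    # Session resumption: repeat connections skip the full handshake
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1h;

    # HTTP keepalives amortize handshake cost across many requests
    keepalive_timeout 65s;
}
```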
36. AGILITY
• App-centric infrastructure
• Software-defined, composable
• Automated for DevOps, CI/CD
“It used to take us 2 weeks to make a change in our F5 infrastructure. With NGINX, it takes 30 seconds to load the image and 20 seconds to run the Ansible script. Tada! Like magic it’s in production.”
– Software Development Director, Comcast
37. AGILITY
Goal: Reduce incident impacts, maximize availability, and make changes during business hours.
NGINX Plus frontends microservices for app routing, load balancing, and security; reduced errors from 0.35% to 0.025%.
Deployed by an apps team as part of the customer support app stack (18M account loads/month).
40. CUSTOMER EXPERIENCE
• Increase adoption, reduce churn
• Protect your brand and reputation
• High-performance app delivery
• Proven reliability and scalability
• Security for both legacy and modern apps
41. CUSTOMER EXPERIENCE
“We’re a nearly 100-year-old insurance company with customers that expect an experience like Google or Facebook. If we don’t load the first page in 3 seconds or less, we lose that customer.”
– DevOps Leader, TIAA-CREF
42. CUSTOMER EXPERIENCE
Goal: User response in 1s, completed transaction in 3s, 99.9% availability, 0 failed customer experiences.
NGINX Plus is an app-level load balancer used to improve elasticity and span AWS & Azure.
Deployed by DevOps in a dedicated digital org as part of a top-down digital transformation initiative.
53. Problem Statement
We saw that people:
• Want to deliver their apps better
• Want easy configuration, with a minimal amount of NGINX-specific learning required
• Want to save time
54. Easy Configuration at Scale
Wizard-style interface to configure load balancing with a few clicks.
Quickly create basic HTTP/S configurations:
• L7 traffic routing based on URI
• SSL key and certificate management
• Add and remove upstream servers
• Add advanced configurations, if desired
Save time, cost, and effort using push-button deployment of configuration across multiple instances: create one configuration; deploy it across multiple instances.
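The kind of basic HTTP/S configuration the wizard produces might look like the following hand-written equivalent (upstream addresses and certificate paths below are placeholders, not Controller output):

```nginx
# Two upstream pools (addresses are placeholders)
upstream api_backend {
    server 192.0.2.10:8080;
    server 192.0.2.11:8080;
}
upstream web_backend {
    server 192.0.2.20:8080;
}

server {
    # SSL key and certificate management
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/site.crt;
    ssl_certificate_key /etc/nginx/ssl/site.key;

    # L7 traffic routing based on URI
    location /api/ { proxy_pass http://api_backend; }
    location /     { proxy_pass http://web_backend; }
}
```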
55. Monitor & Analyze Performance
Deep visibility and insights into KPIs (on a per-instance basis) using an agent:
• Visualize real-time traffic and system stats
• Analyze usage & performance trends across 200 metrics
Advanced performance metrics: rate, bandwidth, errors, latency, and health checks, per server zone or per upstream.
Transaction metrics: response codes and cache status, filtered by URI, host, header, or upstream.
System performance metrics: CPU, disk, memory, load.
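On NGINX Plus instances, per-instance metrics like these are exposed through the live activity monitoring API; a minimal sketch of enabling that read-only API (the port and allowed network range are assumptions):

```nginx
server {
    listen 8080;

    location /api {
        api write=off;      # read-only metrics API (NGINX Plus only)
        allow 10.0.0.0/8;   # restrict access to the monitoring network
        deny all;
    }

    # Built-in live activity monitoring dashboard shipped with NGINX Plus
    location = /dashboard.html {
        root /usr/share/nginx/html;
    }
}
```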
56. Preemptive Recommendations
Use the built-in configuration analyzer to get:
• Enhanced performance and security based on learnings from thousands of customers
• Better SLAs by following built-in best practices
• Preemptive, actionable recommendations for configuration, security, and SSL status
57. Support for Multi-Cloud Environments
• NGINX Controller is a Docker package
• Can be deployed on any public or private cloud
• Can manage NGINX Plus instances across multiple public and private clouds
59. Modern Apps Require a Modern Architecture
From monolithic: three-tier, J2EE-style architectures; complex protocols (HTML, SOAP); persistent deployments; fixed, static infrastructure; big-bang releases; siloed teams (Dev, Test, Ops).
... to dynamic: microservices; lightweight protocols (REST, JSON); containers, VMs, functions; infrastructure as code; continuous delivery; DevOps culture.
60. In practice
• Use the “Strangler” approach to extend your monolith with microservices:
1. Add small pieces of functionality as microservices.
2. Repeat as needed.
• Organize team structure around service ownership.
• Adopt a DevOps mentality – follow:
◦ The 12-factor app for design and constraints
◦ Cloud-native approaches to deploy and manage
Holiday Photos
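The strangler approach can be sketched as an NGINX routing layer: newly extracted functionality is routed to its microservice, while everything else still reaches the monolith (the service names and the `/orders/` path are hypothetical):

```nginx
upstream monolith   { server legacy.internal:8080; }
upstream orders_svc { server orders.internal:8081; }

server {
    listen 80;

    # Newly extracted functionality goes to its microservice...
    location /orders/ { proxy_pass http://orders_svc; }

    # ...while everything else still goes to the monolith
    location / { proxy_pass http://monolith; }
}
```

As more functionality is extracted, more `location` blocks peel traffic away until the monolith can be retired.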
62. Evolution in Action
You have new use cases: new applications are needed; new data sources and business processes are added.
How do we add the new use cases without large-scale rewrites?
63. Evolution in Action: Implement the Hybrid/Strangler Pattern
1. Implement connector microservices to provide API abstractions for external dependencies.
64. Evolution in Action: Implement the Hybrid/Strangler Pattern
2. Implement business-logic microservices for each business process.
65. Evolution in Action: Implement the Hybrid/Strangler Pattern
3. Implement presentation-layer microservices that are accessed externally.
66. Evolution in Action: Implement the Hybrid/Strangler Pattern
4. Use the NGINX Ingress Controller for external-to-internal connectivity.
67. Evolution in Action: Implement the Hybrid/Strangler Pattern
5. Use the NGINX Router Mesh (service mesh) for internal connectivity.
70. Operating a distributed application is hard
Static, predictable monolith: fast, reliable function calls; local debugging; local profiling; calendared, big-bang upgrades; ‘integration hell’ contained in dev.
Dynamic, distributed application: slow, unreliable API calls; distributed fault finding; distributed tracing; in-place dynamic updates; ‘continuous integration’ live in prod.
More things can go wrong, it’s harder to find the faults, and everything happens live.
71. What is a service mesh?
A service mesh is an invisible, autonomous L7 routing layer for distributed, multi-service applications. It provides scalability, security and observability for these applications, and enables operational use cases.
Most commonly implemented as a ‘sidecar proxy’.
Implementations: Istio/Envoy, Consul Connect, Linkerd2, NGINX/nginMesh, and many others to follow.
72. Why do I need a Service Mesh?
• In most cases, you do not need a service mesh (at least, not yet).
• Your applications will go through a maturity journey:
1. Pre- or early-production applications, mature ‘mode 1’ applications
2. Single, simple, business-critical production applications
3. Multiple complex, distributed applications (this is where you may need a service mesh)
73. Maturity Journey – Step 1: Simple Ingress Router, Kubernetes Networking
• Pre- and early-production applications, established apps
Rely on Kubernetes for:
• DNS-based service discovery
• Scaling and reconfiguration
• kube-proxy-based load balancing
• Health checks
• Network Policies for access control
Use a third-party Ingress router.
Many production applications start and finish here.
74. Maturity Journey – Step 2: Ingress Router, Per-Service Load Balancer, Router-Mesh Load Balancer
• More complex, business-critical applications
Enhance applications with:
• Prometheus metrics
• OpenTracing tracers
• mTLS or SPIFFE SSL
Use per-service proxies for specific services.
Use a central router-mesh proxy load balancer.
Most production apps running in containers over the last ~3 years have taken this approach.
75. But… this approach gets expensive to manage
“The operational complexity and cost of developing bespoke libraries across languages, frameworks, and runtimes is prohibitive for most organizations, especially those with heterogeneous applications and polyglot programming languages.”
– IDC Market Perspective: Vendors Stake Out Positions in Emerging Istio Service Mesh Landscape
76. Service Mesh Goal: Deal with it without changing the app
The infrastructure (the “service mesh”) must alleviate these problems without any changes made to the app.
Environmental requirements:
• Transparent to the app
• Non-invasive – easy to add or remove
• Supports hybrid environments
• Headless or GUI
Functional requirements:
• mTLS for encryption and auth
• Observability
• Tracing
• Traffic control
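Of the functional requirements, mTLS is the one a proxy layer illustrates most directly. A hedged sketch of an NGINX-based per-service proxy that both requires client certificates and presents its own on upstream calls (the service names, CA file, and paths are hypothetical):

```nginx
server {
    listen 443 ssl;
    ssl_certificate        /etc/ssl/service-a.crt;
    ssl_certificate_key    /etc/ssl/service-a.key;

    # Require and verify a peer certificate: the "m" in mTLS
    ssl_client_certificate /etc/ssl/mesh-ca.crt;
    ssl_verify_client on;

    location / {
        # Present our own client certificate on outbound calls
        proxy_ssl_certificate         /etc/ssl/service-a.crt;
        proxy_ssl_certificate_key     /etc/ssl/service-a.key;
        proxy_ssl_trusted_certificate /etc/ssl/mesh-ca.crt;
        proxy_ssl_verify on;
        proxy_pass https://service-b.internal;
    }
}
```

Because the proxy handles both directions of the TLS exchange, the application itself needs no certificate-handling code, which is the transparency requirement above.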
77. Maturity Journey – Step 3: Every container has an embedded proxy
• Multiple interdependent, heterogeneous applications
• Embed a proxy into every container
• The proxy intercepts all traffic and applies advanced functionality
• The proxy implements L7 policies
• Requires a comprehensive control plane
A service mesh provides standard functionality and services in an invisible, universal fashion.
78. Find the balance
Cost to operate rises with complexity, interdependencies, and speed of change: a single simple app is best served by native Kubernetes and other services; many complex, interdependent apps justify a service mesh.
As service meshes mature, their cost will go down.
91.
• 83% of all hits are classified as API traffic (JSON/XML) (Source: Akamai State of the Internet, Feb 2019)
• 40% of NGINX deployments are as an API gateway (Source: NGINX user surveys, 2017 and 2018)
96. API Gateway Essential Functions
• TLS termination
• Client authentication
• Fine-grained access control
• Request routing
• Rate limiting
• Load balancing
• Service discovery of backends
• Request/response manipulation
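Several of these functions can be sketched in a single NGINX server block; the auth service, upstream addresses, and rate limit below are hypothetical:

```nginx
# Rate limiting: 10 requests/second per client IP
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

upstream api_a { server 10.0.0.21:8080; }

server {
    listen 443 ssl;                       # TLS termination
    ssl_certificate     /etc/nginx/ssl/api.crt;
    ssl_certificate_key /etc/nginx/ssl/api.key;

    location /api/a/ {
        limit_req zone=perip burst=20;    # rate limiting
        auth_request /_auth;              # client authentication subrequest
        proxy_set_header X-Request-Id $request_id;  # request manipulation
        proxy_pass http://api_a;          # request routing + load balancing
    }

    # Delegate authentication to a hypothetical auth service
    location = /_auth {
        internal;
        proxy_pass http://auth.internal/validate;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
    }
}
```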
98. Edge Gateway
A single edge gateway in front of APIs A, B, and C provides:
• TLS termination
• Client authentication
• Authorization
• Request routing
• Rate limiting
• Load balancing
• Request/response manipulation
99. Edge Gateway
As internal services (D, E, F, G, H) appear behind APIs A, B, and C, the edge gateway adds:
• TLS termination
• Client authentication
• Authorization
• Request routing
• Rate limiting
• Load balancing
• Request/response manipulation
• Façade routing
100. Two-Tier Gateway
Security Gateway:
• TLS termination
• Client authentication
• Centralized logging
• Tracing injection
Routing Gateway:
• Authorization
• Service discovery
• Load balancing
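A hedged sketch of the two tiers as separate NGINX server blocks (hostnames and ports are hypothetical):

```nginx
# Tier 1: security gateway (TLS termination, client auth, logging)
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/edge.crt;
    ssl_certificate_key /etc/nginx/ssl/edge.key;
    access_log /var/log/nginx/edge.log combined;    # centralized logging

    location / {
        proxy_set_header X-Request-Id $request_id;  # tracing injection
        proxy_pass http://routing-gw.internal:8000;
    }
}

# Tier 2: routing gateway (per-API routing and load balancing)
server {
    listen 8000;
    location /api/a/ { proxy_pass http://api-a.internal:8080; }
    location /api/b/ { proxy_pass http://api-b.internal:8080; }
}
```

Splitting the tiers lets a security team own the first block and an API team own the second, which anticipates the Conway's Law point on a later slide.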
102. Adapt to your environment
• TLS termination
• Client authentication
• Fine-grained access control
• Request routing
• Rate limiting
• Load balancing
• Service discovery of backends
• Request/response manipulation
Conway’s Law: “organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.”
105. Sidecar Gateway
Sidecar gateway (deployed alongside each service instance, D, E, F):
• Outbound load balancing
• Service discovery integration
• Authentication
• Authorization?
Edge / Security Gateway:
• TLS termination
• Client authentication
• Centralized logging
• Tracing injection