Distributed Firewall (DFW)
Mike Svoboda
Sr. Staff Engineer, Production Infrastructure Engineering
LinkedIn: https://www.linkedin.com/in/mikesvoboda/
Agenda for today's discussion
• Slides 5-8: Problem 1: Moving machines around in the datacenter to create a DMZ
• Slides 11-29: Problem 2: Horizontal vs. Vertical Network design
• Slides 30-40: What is Distributed Firewall?
• Slide 42: References
• Q/A Session
What Motivated LinkedIn to create
DFW?
Problem 1:
Moving machines around in the
datacenter to create a DMZ
Script Kiddie Hacking: Easy network attack vectors
• Port Scanning – What is the remote device responding to?
• Enumeration – Gather information about services running on the target machine
• Data Extraction – Pull as much valuable information from the remote service as possible
Wake up call!
PHYSICALLY MOVING MACHINES IN THE DATACENTER DOESN’T SCALE!
• Providing additional layers of network security to an application requires either physically moving machines around in the datacenter or rewiring network cables to create DMZs.
• DFW complements existing network firewall ACL systems; it does not replace them.
• It is an additional layer of security in our infrastructure that complements existing systems.
How can we respond?
Move the machines into a DMZ behind a network
firewall, limiting network connectivity?
Production Network Security
TREAT THE PRODUCTION NETWORK AS IF IT’S THE PUBLIC INTERNET.
• Milton in the finance department clicked on a
bad email attachment and now has malware
on his workstation. Thanks Milton, appreciate
that.
• Milton’s workstation resides inside the internal office network, which can connect to application resources on Staging, Q/A, or Production servers.
• Milton is one employee out of thousands.
Production Network Security
TREAT THE PRODUCTION NETWORK AS IF IT’S THE PUBLIC INTERNET.
• The hacker who has control of Milton’s machine was
able to exploit one application out of thousands, and
now has full production network access.
• The hacker can take their time analyzing various
production services, probing what responds to API
calls.
• What are the details behind the Equifax leak(s)?
Problem 2:
Horizontal vs. Vertical Network Design
The Vertical Network Architecture
• Big iron switches deployed at the entry point of the datacenter, with uplink access to LinkedIn’s internal networks.
• More big iron switches at the second and third tiers of the network.
• This image is a logical representation; each cluster holds at minimum 1k servers, upwards of 5k.
DATACENTER CLUSTERS PER ENVIRONMENT
The Vertical Network Architecture
• Each packet between environments has to flow through thousands of rules before hitting a match.
• The firewall admin has to fit the entire security model into their brain. This is error prone and difficult to update.
• TCAM tables are stored in hardware silicon. We’re limited in the complexity that can be enforced.
• Hardware ASICs are fast, but expensive! Deploying big iron costs millions of dollars!
DATACENTER CLUSTERS PER ENVIRONMENT
The Vertical Network Architecture
• Traffic shifts become problematic, as not all ACLs exist in every CRT.
• TCAM tables can only support the complexity of the environment they host, not all “PROD” ACLs. A CRT could support the “PROD1” logical implementation of linkedin.com, but not the “PROD2” and “PROD3” application fabrics.
• The human cost of hand-maintaining per-application CRT ACLs rises exponentially.
MULTIPLE CLUSTERS PER DATACENTER
The Horizontal Network Architecture
• Instead of scaling vertically, scale horizontally using interconnected pods. Offer multiple paths for machines to communicate with each other.
• Allow datacenter engineering to maximize resources.
• The “cluster” is too large a unit of deployment. Sometimes we need to add capacity to an environment down to the cabinet level.
BUILD PODS INSTEAD OF CLUSTERS
Present: Altair Design
[Diagram: 64 pods (Pod 1 … Pod 64), each with 32 ToRs uplinked through 4 leaf switches into planes of 32 spines; ToR, leaf, and spine tiers shown]
True 5 Stage Clos Architecture (Maximum Path Length: 5 Chipsets to Minimize Latency)
Moved complexity away from big boxes to where we can manage and control it!
Single SKU - Same Chipset - Uniform IO design (Bandwidth, Latency and Buffering)
Dedicated control plane, OAM and CPU for each ASIC
Non-Blocking Parallel Fabrics
[Diagram: each ToR uplinks into four parallel fabrics (Fabric 1-4); 32 servers attach to each ToR]
5 Stage Clos
[Diagram: 2048 ToRs, 256 leaves, and 128 spines spread across 4 fabrics; 2048 + 256 + 128 = 2432 switches]
~2400 switches to support ~100,000 bare metal servers
Project Falco, Tier 1: ToR (Top of the Rack)
• Broadcom Tomahawk 32x 100G
• 10/25/50/100G attachment; regular server attachment 10G
• Each cabinet: 96 dense compute units
• Half cabinet (leaf zone): 48x 10G ports for servers + 4 uplinks of 50G
• Full cabinet: 2x single-ToR zones: 48 + 48 = 96 servers
[Diagram: server, ToR, leaf, and spine tiers]
Project Falco, Tier 2: Leaf
• Broadcom Tomahawk 32x 100G
• Non-blocking topology: 32x downlinks of 50G to serve 32 ToRs; 32x uplinks of 50G to provide 1:1 over-subscription
Project Falco, Tier 3: Spine
• Broadcom Tomahawk 32x 100G
• Non-blocking topology: 64 downlinks to provide 1:1 over-subscription, serving 64 pods (each pod: 32 ToRs)
• 100,000 servers: each pod holds approximately 1,550 compute nodes (64 pods × ~1,550 ≈ 100,000)
Simplifying the picture
[Diagram: an example path from ToR 1 through Leaf 1..4, Fabric 1..4 / Spine 1, and Leaf 129..132 to ToR 1025]
Where do we put the Firewall in this architecture?
• Since we’ve scaled the network horizontally, there’s no “choke point” like we had with the vertical network architecture.
• We want to be able to mix / match security zones in the same rack to maximize space / power.
• We want a customized security profile, down to the per-server or per-container (network namespace) level, that is unique to the deployed applications.
• By default, reject any requests from less trusted zones to anything in PROD unless ACLs are defined.
What is Distributed Firewall (DFW)?
What is DFW?
• Software Defined Networking (SDN)
• The applications deployed to the machine / container create a unique security profile.
• Deny incoming by default. Allow all loopback. Allow all outbound.
• Whitelist incoming application ports to accept connections from the same security zone.
• Cross-security-zone communication requires human-created ACLs based on our Topology application deployment system.
• As deployment actions happen across the datacenter, host based firewalls detect these conditions and
update their rulesets accordingly.
• The underlying firewall implementation is irrelevant.
• Currently using netfilter (iptables) and nftables on Linux, but could expand to ipf, pf, Windows, etc. (see the sketch below).
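A minimal sketch of that default posture in iptables-restore syntax; the policy, loopback, and reject behavior follow the deck, while the port and the prod_netblocks set name are invented placeholders:

    # Illustrative only -- not LinkedIn's actual ruleset
    *filter
    :INPUT DROP [0:0]
    :FORWARD DROP [0:0]
    :OUTPUT ACCEPT [0:0]
    # Allow all loopback traffic
    -A INPUT -i lo -j ACCEPT
    # Keep established flows working
    -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    # Whitelist a registered application port for sources in the same
    # security zone (prod_netblocks is a hypothetical ipset)
    -A INPUT -p tcp --dport 11016 -m set --match-set prod_netblocks src -j ACCEPT
    # Reject (not drop) everything else so the source sees the failure
    -A INPUT -j REJECT --reject-with icmp-port-unreachable
    COMMIT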
Advantages of DFW
• Fully distributed. More network I/O throughput, CPU horsepower, scales linearly.
• Datacenter space is fully utilized and physical network flattened. Logical network is quite
different.
• The VLANs the top-of-rack switch exposes determine the security zones the attached machines belong to, not the massive vertical network cluster. Multiple security zones can be co-located in the same rack. New security zones are trivial to create.
• Only expose the network ports defined in our CMDB application deployment system.
• Further limit which upstream consumers can reach those network ports by consuming the application call graph.
• Able to canary / ramp ACL changes down to the per-host or per-container level; no big-bang modifications required.
Advantages of DFW
• Each node contains a small subset of rules vs. the CRT
network firewall containing tens of thousands.
• Authorized users can modify the firewall on-demand
without disabling it.
• Machines communicate keep-alive executions, and we are notified if a machine stops executing DFW (hardware failure, etc.).
• ACL complexity is localized to the service that
requires it.
New Business Capabilities
• Pre-security zone: functionality that only host-based firewalls could provide (see the QoS sketch after this list):
• Blackhole: keep an application listening on port 11016 from taking any traffic, or block specific upstream consumers.
• QoS: sshd and Zookeeper network traffic should get priority over Apache Kafka network I/O.
• Pinhole: based on the call graph, only allow upstream consumers to access my application on port 11016.
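For the QoS case, one way a host-based firewall can express that priority is DSCP marking in the mangle table. A hedged sketch; the Kafka port 9092 and the DSCP classes are assumptions, not LinkedIn's configuration:

    # Tag interactive sshd traffic with a high-priority DSCP class
    iptables -t mangle -A OUTPUT -p tcp --sport 22 -j DSCP --set-dscp-class cs6
    # Tag Kafka broker traffic (port 9092 assumed) as low-priority bulk
    iptables -t mangle -A OUTPUT -p tcp --sport 9092 -j DSCP --set-dscp-class cs1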
New Business Capabilities
• Decommission datacenters in a controlled manner
• Allow authorized users to keep applications online, with DFW rejecting all inbound /
outbound application traffic. Allow SSH / sudo / infrastructure services to stay online.
• Conntrackd data exposed
• IPv6 support comes for free!
• Using ipset list:sets, every rule in DFW is written referencing IPv4 and IPv6 addresses / netblocks in parallel. As the company shifts from IPv4 to IPv6 and new AAAA records come online, DFW automatically inserts these addresses and the firewalls permit the IPv6 traffic.
ACLDB
• Centralized database, fed from sources of truth: it scrapes CMDB and delivers JSON data containers to each machine (a hypothetical example follows this list).
• JSON containers land on machines via automated file transfers.
• Intra-security zone communication (What can communicate inside PROD?)
• Inter-security zone communication (What is allowed to reach into PROD?)
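The deck does not show ACLDB's schema, but a JSON data container of this general shape would carry both kinds of answers; every field name below is invented for illustration:

    {
      "security_zone": "PROD",
      "intra_zone": { "accept_tcp_ports": [9000, 11016] },
      "inter_zone": [
        { "source_zone": "ZONE1", "dest_port": 9000, "action": "accept" }
      ]
    }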
High Level Architecture
• Only inbound traffic is filtered. All loopback / outbound traffic is always immediately passed.
• Network security is enforced by filtering inbound traffic at the known destination.
• DFW rejects traffic; we do not drop traffic. The source host knows it has been rejected via an ICMP port-unreachable message (see the illustration below).
• Build safeguards. Don’t firewall off 30k machines and become unrecoverable without pulling power to the whole datacenter.
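The reject-versus-drop choice is visible from the source host: a rejected TCP connect fails immediately with "Connection refused" (ICMP port unreachable maps to ECONNREFUSED on Linux), while a dropped one hangs until timeout. A quick illustration, with an invented hostname; exact nc output varies by flavor:

    $ nc -v app01.prod.example 11016
    nc: connect to app01.prod.example port 11016 (tcp) failed: Connection refused
    # With DROP instead of REJECT, the same command would hang until the
    # TCP connect timeout, hiding where the packet died.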
High Level Architecture
• Pre-security zone: functionality referenced on the “new business capabilities” slides.
• Security zone: mimic the existing network firewalls, allowing PROD → PROD communication. Rules are written as “accept from any” because we jump into a new iptables chain once the source machine resides in PROD netblocks.
• Post-security zone: inter-security zone rules maintained in ACLDB. “Allow 5x machines in ZONE1 to hit 10x machines in PROD…
• The rules placed in /etc/sysconfig/iptables and /etc/sysconfig/ip6tables are identical, since they reference list:set ipsets, which in turn reference the necessary IPv4 and IPv6 sub-ipsets (a sketch follows).
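A minimal sketch of that dual-stack mechanism, with invented set and chain names: one IPv4 set and one IPv6 set are wrapped in a single list:set, so the same rule text can appear in both rule files:

    # Per-family sets (names are hypothetical)
    ipset create prod_v4 hash:net family inet
    ipset create prod_v6 hash:net family inet6
    ipset add prod_v4 10.0.0.0/8
    ipset add prod_v6 2001:db8::/32
    # One list:set wrapping both families
    ipset create prod_netblocks list:set
    ipset add prod_netblocks prod_v4
    ipset add prod_netblocks prod_v6
    # Identical rule text for both stacks; each kernel table matches only
    # the member set of its own family (PROD_ZONE is an assumed chain)
    iptables  -A INPUT -m set --match-set prod_netblocks src -j PROD_ZONE
    ip6tables -A INPUT -m set --match-set prod_netblocks src -j PROD_ZONE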
DFW is stateless: precompute the ruleset on every execution
• Every execution of DFW builds the iptables / ipset configuration from scratch and compares it to the live state in the kernel.
• The current state of iptables / ipsets does not matter.
• Users could flush the ruleset, reboot, add or delete entries, destroy or create ipsets. We use auditd to monitor setsockopt() system calls for unexpected rule insertions.
• On the next execution, DFW converges from whatever the current state is to the intended state, either on schedule or on discovery of setsockopt() calls (a sketch follows this list).
• Debugging is simple. A firewall issue after a DFW execution is never a “previous state” issue; the current state needs a behavior change for things to work.
• Whitelisting network ports becomes one question: is the source machine connecting to me from within my security zone, or do I need to add a rule in ACLDB to permit the traffic?
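Combining this slide with the atomic-swap detail from the speaker notes at the end of this deck, the convergence step might look like this sketch (set names follow the earlier examples; the temp-set naming is an assumption):

    # Build the intended membership in a scratch set, then promote atomically
    ipset create tmp_prod_v4 hash:net family inet
    ipset add tmp_prod_v4 10.0.0.0/8        # ...every intended entry...
    ipset swap tmp_prod_v4 prod_v4          # all-or-nothing promotion
    ipset destroy tmp_prod_v4
    # Enforce that in-kernel rules match the freshly expanded template
    iptables-restore < /etc/sysconfig/iptables
    # Audit out-of-band rule changes between runs (64-bit syscall table)
    auditctl -a always,exit -F arch=b64 -S setsockopt -k dfw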
Work with the humans, not against them
• Since automation is constantly enforcing its known good state, we need to plan for emergency situations where authorized users have to modify the firewall on demand.
• Example 1: An authorized user needs to whitelist a network port ASAP to stop an outage.
• The authorized user adds a destination network port to a specific ipset, which immediately starts whitelisting that traffic within the same security zone (PROD → PROD port 9000). This buys time to register the network port with the application in our CMDB application deployment system. DFW cleans this ipset automatically (see the sketch after this list).
• Example 2: An authorized user wants to blackhole an application without stopping / shutting it down.
• Shutting down an application destroys memory state that could be useful for developers to debug. Adding destination port 9000 into this ipset allows the application to remain online but reject all incoming requests.
• Example 3: Deployment actions.
• Chicken and egg: DFW depends on the application deployment system to determine application-to-server mapping. At deployment time, an ipset gets modified to immediately whitelist the traffic. DFW cleans this ipset.
IPTABLES RULES REFERENCE TYPICALLY EMPTY IPSETS, EXPECTING HUMAN INPUT.
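A hedged sketch of Example 1, assuming a normally-empty bitmap:port set that the iptables rules already reference (the set and chain names are invented):

    # Pre-created by DFW and normally empty; a rule like this references it:
    #   -A PROD_ZONE -p tcp -m set --match-set emergency_ports dst -j ACCEPT
    ipset create emergency_ports bitmap:port range 1-65535
    # Operator immediately whitelists PROD -> PROD traffic to port 9000
    ipset add emergency_ports 9000
    # Later, DFW's scheduled run empties the set once CMDB is registered
    ipset flush emergency_ports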
References:
• Altair Network Design: https://www.slideshare.net/shawnzandi/linkedin-openfabric-project-interop-2017
• Eng blog post on Altair: https://engineering.linkedin.com/blog/2016/03/project-altair--the-evolution-of-linkedins-data-center-network
• Programmable Data Center: https://engineering.linkedin.com/blog/2017/03/linkedin_s-approach-to-a-self-defined-programmable-data-center
• Facebook’s Spine and Leaf: https://code.facebook.com/posts/360346274145943/introducing-data-center-fabric-the-next-generation-facebook-data-center-network/
• Facebook’s Spine and Leaf (video): https://www.youtube.com/watch?v=mLEawo6OzFM
• Milton from Office Space: http://www.imdb.com/title/tt0151804/
Q/A session
Production-ready implementation / demo of the technology:
Zener: https://www.zener.io/lisa17
BOF: Distributed, Software Defined Security in the Modern Data Center
Thursday, November 2, 9:00 pm–10:00 pm, Marina Room
LinkedIn: https://www.linkedin.com/in/mikesvoboda

Speaker notes

  1. This doesn’t scale, and it’s just one application out of thousands.
  2. A front-end application gets hacked, granting direct network access to backend databases or other application-level APIs that pull from those databases. Could isolation of those middle-tier API applications or backend databases have prevented identity theft? How many thousands / millions of times have there been data leaks at organizations? Code is often written just well enough to become operational or to address scaling issues; security considerations can be a second or third (or lower) priority. If script kiddies can access your internal application resources, what can state-sponsored hackers do? What can motivated organizations with $$$ do? The highly capable?
  3. The network firewall, on the CRT, is a bottleneck for traffic coming in / out of the datacenter core. All firewall ACLs have to be processed at the CRT.
  4. Central single point of failure. Failover to the secondary CRT is highly impacting to production traffic. An error in ACL promotion can affect thousands of machines behind the CRT!
  5. Each new datacenter facility or “cluster” could trigger thousands of ACL updates on other CRTs.
  6. Power, rack space, cooling, and network are a lot more expensive than the actual machine using them!!! We are wasting millions of dollars in underutilized resources! We are abandoning space in the datacenter because we only have room for one or two additional cabinets, or have maxed out power / cooling.
  7. Four parallel fabrics: each cabinet/rack connects to all four planes through its 4 leaf switches. They are color coded in the diagram to make the connections easier to follow.
  8. Designed by Charles Clos in 1953 to model multi-stage telephone switching systems.
  9-11. And we chose to build our network on top of merchant silicon, a very common strategy for mega-scaled datacenters.
  12. Machines from ZONE1, ZONE2, and PROD could all co-exist in the same physical rack, connected to the same ToR. Only allow frontend webservers to hit midtier application servers on port 9000; reject all other requests from PROD. Only expose network paths to the APIs your applications provide to their upstream consumers!
  13. Users / hackers can’t spin up netcat or sshd and start listening on some random port. Applications are not exposed to the network until their defined network ports are registered in CMDB. This keeps DEV honest: new traffic flows can’t be introduced unbeknownst to Operations without registration.
  14. We won’t block legitimate operational tasks. Easily auditable. Packets destined for other applications do not have to traverse irrelevant ACLs in network TCAM tables.
  15. Restricts access to an application inside the security zone (no open PROD communication). Each application can create a unique security profile. We aren’t restricted to large concepts like “PROD” security zones or “DMZs”.
  16. Permits immediate rollback in case of unexpected service shutdowns. When machines remain on the network, we retain automation control and auditability. Enhance the call graph, and monitor incoming connections via conntrackd. No longer limited to expensive / unreliable netstat -an snapshots.
  17. Some data is shared across the entire security zone (allow bastion hosts to SSH into PROD); other data is unique per machine. Allow machines in ZONE1 to hit Voldemort in PROD; allow desktops SSH access to hosts in ZONE2.
  18. Debugging DFW rejections will always happen on the destination node. If we filtered outbound traffic, it would become too complex to debug rejection events. Application X → Application Y isn’t working; Application Y doesn’t see inbound traffic. I connect to machine Y, not knowing which machines are supposed to send data to it… where is my traffic being dropped? Somewhere out in PROD? Application X could be hundreds or thousands of machines. With inbound-only filtering, debugging Application X → Application Y becomes simple. There are only two rejection reasons: the source host in Application X doesn’t reside in my security zone, or the network port Application Y uses hasn’t been registered with Topology to whitelist the incoming traffic.
  19. Ipsets contain the “why” we accepted or rejected traffic. The rules in the iptables file are the “high level objective” for what we are trying to achieve. 99.999% of changes are made in the ipsets, not the iptables rules. As machines move in / out of applications, netblocks update, or IPv6 comes online, ipset membership changes automatically. The IPv6 support simplifies the IPv6 migration: we don’t have to burn network firewall TCAM memory in silicon duplicating the existing IPv4 ruleset.
  20. DFW creates “temporary ipsets” and uses “ipset swap <foo> <tmp_foo>” to promote a membership change if the sets are not identical (adding or removing specific entries is not calculated). Ipset swap is atomic: adding 100x new IP addresses, ports, or netblocks to the firewall is an all-or-nothing operation at one instant. Most change in DFW happens through ipset membership changes, not iptables template file expansion changes. A CFEngine template expands iptables / ip6tables and executes iptables-restore < /etc/sysconfig/iptables to enforce that in-memory state remains what we expanded.