The document discusses VXLAN gateways and how they connect virtual and physical networks. It provides details on Juniper QFX5100 VXLAN gateways and their integration with NSX, including how they dynamically learn virtual networks via OVSDB, handle multidestination traffic, and store MAC address tables. The document also shows configurations and statuses when viewing the integration through NSX and Network Director management tools.
2. Software-Defined Data Center
[Diagram: the Software-Defined Data Center stack. Policy-based management and automation (cloud automation, cloud operations, cloud business) sits on top of a virtualized infrastructure that abstracts and pools the physical hardware: compute abstraction = server virtualization, network abstraction = virtual networking, storage abstraction = software-defined storage. The SDDC is delivered as private, public, and hybrid clouds (VMware and vCloud Data Center partners) and serves applications and end-user computing: desktop and mobile virtual workspaces, traditional and modern SaaS.]
3. SDN – software-defined networking
The Blind Men and the Elephant
John Godfrey Saxe (1816-1887)
4. SDN
SDN is an approach to computer networking that allows network
administrators to manage network services through abstraction
of lower-level functionality.
6. Reactive "hop-by-hop" approach (OpenFlow)
• The first packet of every flow is punted to the controller.
• The controller reactively programs every flow on every switch on the path.
• How does the controller reach the switch?
• Per-tenant state in the physical network: switches contain many flows.
• Switches must support OpenFlow.
• Scalability? Forklift upgrade?
"They are not directly related to network virtualization. Above all, they focus on controlling application-related switch functions rather than on creating a virtual network with an independent topology that controls the L2 and L3 layers."
7. Proactive approach – overlay networks
High scalability. Evolutionary.
• The controller proactively programs the virtual overlay switches only.
• Existing protocols establish the IP fabric underlay.
• Packets are not punted to the controller.
• No per-tenant state in the physical network: switches only know physical servers.
• The underlay network uses existing protocols.
• A topology change does not affect the service layer.
11. Juniper network architectures
Scale: <250 servers · <1,500 servers · <6,000 servers · beyond
• Virtual Chassis – single-touch provisioning for small data centers.
• Virtual Chassis Fabric – single-touch provisioning for mid-size data centers.
• Junos Fusion – single-touch provisioning, scalable multi-tier fabric.
• QFabric – flat, single-tier, highly scalable network solution.
• Open Clos & L2/L3 fabrics – standards-based, open architectures; Network Director provisioning.
(Spectrum from Juniper value-add to standards-based.)
12. VMware NSX
Consumption Model
• Self-service portal
• Cloud management
• vRealize Automation
Management Plane: NSX Manager (with vCenter Server)
• Single point of configuration
• REST API and UI interface
Control Plane: NSX Controller Cluster, NSX Logical Router Control VM
• Manages the logical network's run-time state
• Does not sit in the data path
• Control plane protocol
Data Plane: NSX Edge; ESXi hypervisor kernel modules (distributed firewall, distributed logical router, VXLAN); distributed switch; User World Agent (netcpa); Message Bus Agent (vsfwd)
• VMware NSX logical switch
• NSX Edge gateway
• Routing and advanced services
Physical Network
• Physical network of your choice
13. NSX and the connection to the physical network
Software gateway (VXLAN–VLAN, x86-based forwarding, to physical workloads):
• Software-based VXLAN gateways
• Redundant and scalable
• Leverages x86
Hardware VTEP (VXLAN–VLAN, to physical workloads):
• Highest density and throughput with partner HW
• Wire-speed 10/40/100G VXLAN gateway
• Hardware-based L2 VXLAN gateway
• High availability with gateway redundancy
• Single point of integration with VMware NSX for the hardware gateway
• Available with VMware NSX 6.2
14. Tunnels are like network cables.
[Diagram: the controller treats each VXLAN tunnel as a "cable" (like a copper cable) connecting the virtual world to the physical world; third-party hardware is attached via OVSDB.]
15. Hardware gateway
[Diagram: an OVSDB server and client connect the NSX Controller with HW partner switches over the OVSDB management protocol; partner-specific management applies on the switch side; the data plane is VXLAN between the VTEPs on the hypervisors (HV) and on the hardware switches.]
• Hardware gateways enable you to connect any physical servers or appliances to a VXLAN.
• The hardware gateway integration requires switching hardware that is VTEP capable and the deployment of an OVSDB server.
• OVSDB is extensible and schema-based, and it is not reliant on multicast.
16. OVSDB - RFC 7047
Open vSwitch Database Management Protocol
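As a rough illustration of what RFC 7047 looks like on the wire, here is a sketch of two OVSDB JSON-RPC messages. The `hardware_vtep` database name and the `Ucast_Macs_Remote` table come from the hardware VTEP schema used for this kind of integration; treat the exact columns as an assumption rather than a verified exchange.

```python
import json

# OVSDB (RFC 7047) is JSON-RPC carried over TCP/SSL; in this deployment the
# switch connects to the NSX controller on port 6640 (see "show ovsdb controller").
# A client can first ask which databases the peer exposes:
list_dbs = {"method": "list_dbs", "params": [], "id": 0}

# The controller then monitors tables of the hardware_vtep schema, e.g. to
# push and learn MAC addresses (table/column names per the hardware_vtep schema):
monitor = {
    "method": "monitor",
    "params": [
        "hardware_vtep",   # database name
        None,              # opaque value echoed back in update notifications
        {"Ucast_Macs_Remote": {"columns": ["MAC", "logical_switch", "ipaddr"]}},
    ],
    "id": 1,
}

wire_msg = json.dumps(monitor)  # what actually goes on the wire
```

This is why the slides can say OVSDB is "extensible and schema-based": the protocol itself is generic JSON-RPC, and the hardware-VTEP behavior lives entirely in the schema.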
21. Multidestination Traffic Handling
• Hardware VTEPs are not responsible for replicating multidestination traffic.
• Several hypervisors are configured to take on the replication service node (RSN) role.
• Replication of multidestination traffic originating from a hardware VTEP is load-balanced across the multiple RSNs.
• RSNs are protected by bidirectional forwarding detection (BFD) sessions from the hardware VTEP.
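The load-balancing point above can be sketched as picking one RSN per flow key. This is illustrative only: the real hardware-VTEP hashing scheme is implementation specific, and the RSN addresses below are placeholders.

```python
import hashlib

def pick_rsn(vni: int, src_mac: str, rsns: list[str]) -> str:
    """Pick a replication service node for a BUM frame.

    Illustrative sketch: hashing a (VNI, source MAC) flow key spreads
    replication work deterministically over the available RSNs."""
    key = f"{vni}:{src_mac}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return rsns[digest % len(rsns)]

# Hypothetical RSN addresses (placeholders, not from the slides' topology):
rsns = ["10.11.10.51", "10.11.10.52"]
node = pick_rsn(5000, "00:15:58:2c:ae:5e", rsns)
```

Determinism matters here: the same flow always maps to the same RSN, while different flows spread across the set; BFD then lets the hardware VTEP drop a failed RSN from the candidate list.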
26. JUNIPER VXLAN GATEWAYS
Feature                      QFX5100                        EX9200/MX
L2/L3 GW function            L2 only                        L2 and L3 inter-VXLAN routing
Integration with NSX OVSDB   Yes                            Yes
Dynamic VLAN creation        Yes                            No
Supported architectures      Standalone/L3 fabric, VC, VCF  Standalone/L3 fabric
Performance                  Line rate                      Line rate
Mapping to MPLS/VPLS         No                             Yes
27. Network Gateway with VXLAN Tunnel Endpoint
QFX's VTEP integration with NSX:
• OVSDB to NSX to learn about virtual networks
• Understands the VXLAN data plane in hardware
QFX's VTEP capabilities:
[Diagram: an NSX controller manages OVS vSwitches on KVM, Xen, and ESXi hypervisors hosting VMs in virtual networks (VN); the fabric (IP, VCF, QFabric, or Junos Fusion) connects them to a QFX VTEP, which bridges VLANs toward network appliances (e.g., SRX), bare metal (e.g., HPC), and non-NSX virtual compute.]
QFX can reinsert the VXLAN overlay traffic back into normal VLANs. Use cases:
• Connect to network appliances
• Connect to bare metal like HPC or databases
• Connect to non-NSX virtual compute servers
Demo VTEP: https://lnkd.in/eF9tT7q
VTEP support GA on: QFX with Junos OS 14.1X30-D26
Tested with NSX-V 6.2.1
28. OVSDB Integration: Third-Party HW Gateway Consumption
• To integrate third-party hardware gateways:
1. Create a logical switch and attach virtual machines.
2. Register the third-party L2 gateway (OVSDB server).
3. Create a replication node cluster.
4. Through a hardware switch port, attach the third-party L2 gateway to the logical switch.
[Diagram: VM1 and VM2 attach to logical switch LS (VNI 5001) (step 1); the third-party gateway registers with the controller (step 2); replication nodes are created (step 3); VLAN 100 on a hardware switch port is attached to the logical switch (step 4).]
29. Configuration on the Juniper QFX5100 VXLAN VTEP side
set interfaces xe-3/0/2 unit 0 family ethernet-switching
set switch-options vtep-source-interface lo0.0
set switch-options vxlan-ovsdb-managed
set protocols ovsdb interfaces xe-3/0/2
set protocols ovsdb controller 192.168.100.216
*** No need to define a logical switch or BD; it will be dynamically created by NSX ***
Junos VXLAN CLI – Juniper QFX and NSX
30. Configuration on the NSX side
root@VCF-1> show ovsdb virtual-tunnel-end-point
Encapsulation    Ip Address   Num of MAC's
VXLAN over IPv4  10.0.0.2     3
VXLAN over IPv4  10.11.10.51  2
VXLAN over IPv4  10.11.10.52  5

root@VCF-1> show ovsdb logical-switch
Logical switch information:
Logical Switch Name: a35fe7f7-fe82-37b4-b69a-0af4244d1fca
Flags: Created by both
VNI: 5000
Num of Remote MAC: 5
Num of Local MAC: 3
31. NSX controller connection status
root@VCF-1> show ovsdb controller
VTEP controller information:
Controller IP address: 192.168.100.216
Controller protocol: ssl
Controller port: 6640
Controller connection: up
Controller seconds-since-connect: 215121
Controller seconds-since-disconnect: 0
Controller connection status: active
Controller IP address: 192.168.100.217
Controller protocol: ssl
Controller port: 6640
Controller connection: up
Controller seconds-since-connect: 215115
Controller seconds-since-disconnect: 0
Controller connection status: idle
Controller IP address: 192.168.100.218
Controller protocol: ssl
Controller port: 6640
Controller connection: up
Controller seconds-since-connect: 215109
Controller seconds-since-disconnect: 0
Controller connection status: idle
32. MAC table
root@VCF-1> show ethernet-switching table
MAC flags (S - static MAC, D - dynamic MAC, L - locally learned, P - Persistent static,
SE - statistics enabled, NM - non configured MAC, R - remote PE MAC, O - ovsdb MAC)
Ethernet switching table : 6 entries, 2 learned
Routing instance : default-switch
Vlan MAC MAC Age Logical
name address flags interface
a35fe7f7-fe82-37b4-b69a-0af4244d1fca 00:15:58:2c:ae:5e D - ge-3/0/22
a35fe7f7-fe82-37b4-b69a-0af4244d1fca 00:50:56:82:24:0d SO - vtep.32770
a35fe7f7-fe82-37b4-b69a-0af4244d1fca 00:50:56:82:83:d3 SO - vtep.32770
a35fe7f7-fe82-37b4-b69a-0af4244d1fca 00:50:56:82:95:b1 SO - vtep.32769
a35fe7f7-fe82-37b4-b69a-0af4244d1fca 00:50:56:82:c3:e5 SO - vtep.32769
a35fe7f7-fe82-37b4-b69a-0af4244d1fca 4c:96:14:e9:f9:a1 D - xe-2/0/46
33. MAC addresses in OVSDB
root@VCF-1> show ovsdb mac
Logical Switch Name: a35fe7f7-fe82-37b4-b69a-0af4244d1fca
Mac IP Encapsulation Vtep
Address Address Address
ff:ff:ff:ff:ff:ff 0.0.0.0 Vxlan over Ipv4 10.0.0.2
00:15:58:2c:ae:5e 0.0.0.0 Vxlan over Ipv4 10.0.0.2
4c:96:14:e9:f9:a1 0.0.0.0 Vxlan over Ipv4 10.0.0.2
00:50:56:82:24:0d 0.0.0.0 Vxlan over Ipv4 10.11.10.52
00:50:56:82:83:d3 0.0.0.0 Vxlan over Ipv4 10.11.10.52
00:50:56:82:95:b1 0.0.0.0 Vxlan over Ipv4 10.11.10.52
00:50:56:82:c3:e5 0.0.0.0 Vxlan over Ipv4 10.11.10.51
ff:ff:ff:ff:ff:ff 0.0.0.0 Vxlan over Ipv4 10.11.10.52
34. Interfaces managed by OVSDB
root@VCF-1> show ovsdb interface
Interface VLAN ID Bridge-domain
ge-3/0/22 100 a35fe7f7-fe82-37b4-b69a-0af4244d1fca
xe-2/0/0 100 a35fe7f7-fe82-37b4-b69a-0af4244d1fca
xe-2/0/46 100 a35fe7f7-fe82-37b4-b69a-0af4244d1fca
SDN is an approach to computer networking that allows network administrators to manage network services through abstraction of lower-level functionality.
"OpenFlow technologies are not directly related to network virtualization. Above all, they focus on controlling application-related switch functions rather than on creating a virtual network with an independent topology that controls the L2 and L3 layers," says Martin Casado, VMware (Nicira).
Rather than making networking more complex, an overlay network puts applications in charge of the infrastructure
"With overlay networks you're managing from the application down rather than from the network up. And this is really the focus: ensuring the application gets the services and support from the network to be able to deliver services quickly and efficiently and, over time, more cost effectively from an operational standpoint," Casemore said.
"When you add this new layer, you end up with two simple layers instead of one very complex layer. It actually simplifies the network management piece," he said.
"The overlay movement has nothing to do with details -- it has everything to do with architecture," said Martin Casado, chief architect for networking at VMware. "In my opinion, protocols are details and relatively unimportant. Any good virtual networking solution should support as many encaps -- what you throw on a packet -- as possible. There's nothing architecturally significant about the different protocols, so we don't have any religion about which one is better."
VXLAN Packet Format
Before we go any further, we really need to look at the packet header and how a VXLAN packet is forwarded. We probably should have covered this at the beginning, but we first wanted to establish, at a very high level, what VXLAN is and what VXLAN with NSX looks like. Now let's talk about the VXLAN packet format.
VXLAN adds an additional 50 bytes of headers to the original frame. Looking at the packet header, the VXLAN header itself is 8 bytes and includes the 24-bit VNI number. On top of this 8-byte VXLAN header you add the UDP header, then the outer IP header, and after that the outer MAC header. Altogether that is an additional 50 bytes of headers.
Now, what is a VNI? The VNI is the VXLAN network identifier; it's a 24-bit ID.
The UDP destination port is the IANA-assigned VXLAN port, 4789.
The UDP source port is a hash computed from the inner packet header. Why do we need that? The hash-computed UDP source port helps us better utilize the ECMP links in the network.
The outer IP header carries the address of the tunnel endpoint; we'll talk about the VTEP in the next slides. So, with those 50 bytes of additional header, what do you need on the network side? You probably already know what to do.
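The 50-byte overhead and the hashed UDP source port can be made concrete with a short sketch. The header layout follows RFC 7348; the source-port hash function here is illustrative (the standard only asks that it be derived from the inner headers), not a specific vendor implementation.

```python
import struct
import zlib

def vxlan_header(vni: int) -> bytes:
    """8-byte VXLAN header (RFC 7348): flags byte with the I bit set,
    3 reserved bytes, 24-bit VNI, 1 reserved byte."""
    assert 0 <= vni < 2 ** 24
    return struct.pack("!II", 0x08 << 24, vni << 8)

def udp_source_port(inner_frame: bytes) -> int:
    """Derive the outer UDP source port (destination is always 4789) from
    the inner headers so that different inner flows spread over different
    ECMP paths. CRC32 over the first 34 bytes is an illustrative choice."""
    return 49152 + (zlib.crc32(inner_frame[:34]) % 16384)

hdr = vxlan_header(5000)
# Overhead added to the original frame:
# outer MAC (14) + outer IP (20) + UDP (8) + VXLAN (8) = 50 bytes
overhead = 14 + 20 + 8 + 8
```

Because underlay switches hash on the outer 5-tuple, varying only the UDP source port per inner flow is enough to spread tunnel traffic across ECMP links.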
Juniper VXLAN Gateways
So before we talk about any network design or how to build a network, we need to look at which switches can support VXLAN in the data center space. The QFX5100 will support the non-OVSDB version of VXLAN in the 14.1X53-D10 release, which is due around the October timeframe.
When that software is released, it will support both the OVSDB and non-OVSDB versions of VXLAN, which means you can use the IETF-based VXLAN implementation or use it together with VMware's NSX controller.
For the EX9200/MX, similarly in the October timeframe, the EX9200 will get the 14.2R1 software release, which will support both the OVSDB and non-OVSDB versions of VXLAN.
On the MX platform, 14.1R1 is already shipping and supports the base VXLAN functions, and 14.1R2, shipping in the August timeframe, will support the OVSDB version of VXLAN.
Looking at the Layer 2 and Layer 3 VXLAN gateway functions, the QFX5100 supports Layer 2 only, while the EX9200 supports both Layer 2 and Layer 3 inter-VXLAN routing. So in August, when MX 14.1R2 ships, the MX becomes the only platform in the market that supports Layer 3 inter-VXLAN routing. And all of these platforms can integrate with NSX via OVSDB.
The QFX5100 also has a unique feature called dynamic VLAN creation. What it really means is that when NSX creates a logical switch (the VXLAN segment), that ID is automatically created on the QFX5100, so you don't need to manually type the VLAN name into the database; on the EX9200, you currently still have to do that. On the QFX5100, VXLAN can be enabled when the switch is used as a standalone device, with a Layer 3 fabric, or in the VC or VCF architectures.
The MX and the EX9200 currently support only standalone or Layer 3 fabric configurations.
The QFX5100 supports 4K VXLAN instances; the EX9200/MX supports 32K. And all of these platforms forward VXLAN packets at line rate without any performance impact.
Another feature, introduced on the EX and MX first, is mapping to MPLS/VPLS. With that technology you can extend VXLAN into WAN networks.
A note about "already your underlay switching": Nuage VTEP switches are not useful for anything other than acting as a VTEP.
Note that QFX3500, QFX3600, and OCX do not support this VTEP functionality with NSX
Junos VXLAN CLI
So, if you have a pair of QFX5100s, you can use this configuration to enable VXLAN without NSX. You configure the interface, enable multicast, configure the VTEP address, and configure which VLAN is mapped to which VNI and which IP multicast group is used for that VNI.
With NSX, the configuration is reduced: you just need to configure where the switch can find the controller and which interfaces are managed over OVSDB, and you also need to specify the VTEP address.
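For reference, the multicast-based (non-NSX) variant described above might look roughly like the following, in the same `set` style as the slide-29 configuration. The interface name, VLAN name, VNI, and multicast group are placeholders, so treat this as a sketch rather than a verified configuration:

```
set interfaces xe-0/0/10 unit 0 family ethernet-switching vlan members v100
set switch-options vtep-source-interface lo0.0
set vlans v100 vlan-id 100
set vlans v100 vxlan vni 5001
set vlans v100 vxlan multicast-group 233.252.0.1
```

The contrast with slide 29 is the point: here the VLAN-to-VNI mapping and the BUM-handling multicast group are static and local, whereas with NSX the controller pushes the logical switches over OVSDB and no multicast is required.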
Overlay to VLAN bridges enable integration of physical workloads with virtual networks. Examples of physical workloads could be:
Servers with legacy or hard to virtualize applications
Physical servers relying on specific hardware that can’t be virtualized
Physical network & security appliances such as load balancers, firewalls, IPS, WAN acceleration, etc.
Typically such a bridge has an overlay interface and a VLAN interface. Via configuration, a logical switch (VNI) to VLAN mapping can be achieved, thereby enabling virtual machines and physical workloads to share the same subnet.
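The mapping just described can be pictured as a two-way lookup table. A minimal sketch, reusing the VNI 5001 / VLAN 100 numbers from slide 28; the structure is illustrative, not an NSX or Junos data model:

```python
# Illustrative overlay-to-VLAN bridge table: one logical switch (VNI)
# mapped to one 802.1Q VLAN, so VMs and physical workloads share a subnet.
VNI_TO_VLAN = {5001: 100}
VLAN_TO_VNI = {vlan: vni for vni, vlan in VNI_TO_VLAN.items()}

def to_physical(vni: int) -> int:
    """Overlay -> physical: VLAN tag to apply after VXLAN decapsulation."""
    return VNI_TO_VLAN[vni]

def to_overlay(vlan: int) -> int:
    """Physical -> overlay: VNI to encapsulate with for frames from the VLAN."""
    return VLAN_TO_VNI[vlan]
```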
NSX overlay to VLAN bridges exist in different form factors:
x86-based:
NSX-V: native capability, leverages ESXi for this
NSX-MH: native capability, leverages L2 gateway(s) for this.
VXLAN capable switch (aka HW VTEP):
NSX-V: currently only possible with multicast based VXLAN (physical switches would have to have IP multicast enabled). Internal validation with Arista 7150 and Cisco Nexus 9300 is complete.
NSX-MH: leverages OVSDB as communication protocol between controller and HW VTEP in order to program and learn from the HW VTEP
Current partners on the OVSDB front:
Arista
Brocade
Cumulus
Dell
HP
IBM
Juniper
MX and QFX: Inter-VXLAN routing and Intra-VXLAN Bridging with (hardware) VTEP and OVSDB protocol to NSX
Network Director: VM and server visibility and mapping with vCenter and VXLAN visibility with NSX integrations
vSRX (and Security Director): L4-L7 FW on vSphere/ESXi (and Security Director to vCenter to lifecycle manage vSRX VMs)