3. Hardware Virtualization
• Guest virtual machines run on top of a host machine
• A virtual machine acts like a real computer with an operating system and devices
• Virtual hardware – CPUs, memory, I/O
• The software or firmware that creates a virtual machine on the host hardware is called a hypervisor
4. Virtualization types
Fully virtualized
• Guest OS is not modified; the same OS image is spun up as a VM
• Guest OS is not aware of virtualization; devices are emulated entirely
• Hypervisor needs to trap and translate privileged instructions
Para-virtualized
• Guest OS is aware that it is running in a virtualized environment
• Guest OS and hypervisor communicate through "hypercalls" for improved performance and efficiency
• Guest OS uses a front-end driver for I/O operations
• Example: Juniper vRR, vMX
Hardware assisted
• Virtualization-aware hardware (processors, NICs, etc.)
• Intel VT-x/VT-d/VMDq, AMD-V (a quick host check is sketched below)
• Example: Juniper vMX
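On a Linux host you can verify that the CPU advertises hardware-assisted virtualization before deploying. A minimal sketch (not vendor tooling), assuming a standard /proc/cpuinfo layout:

```python
# Check /proc/cpuinfo for the hardware-virtualization CPU flags:
# "vmx" indicates Intel VT-x, "svm" indicates AMD-V.
def hw_virt_support():
    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    if "vmx" in flags:
        return "Intel VT-x"
    if "svm" in flags:
        return "AMD-V"
    return None

print("Hardware-assisted virtualization:", hw_virt_support() or "not detected")
```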
7. Virtual and Physical MX
[Diagram: physical MX vs vMX – both share the same JUNOS control plane; the data plane is PFE microcode running on Trio ASICs in the physical platform versus vPFE microcode running on x86 in the virtual platform]
8. vMX Product
• Virtual JUNOS hosted on a VM
• Follows standard JUNOS release cycles
• Hosted on a VM, bare metal, or Linux containers
• Multi-core
• SR-IOV, virtIO, vmxnet3, …
• Two components: VCP (Virtualized Control Plane) and VFP (Virtualized Forwarding Plane)
9. vMX Product Overview
[Diagram: vMX on a server – the VCP runs in a FreeBSD guest VM and the VFP in a Linux guest VM on a KVM or ESXi hypervisor; the VFP reaches the physical NICs via PCI pass-through/SR-IOV or via VirtIO through a bridge/vSwitch, drawing cores and memory from the physical layer, with management traffic handled separately]
Virtual Control Plane (VCP)
• JUNOS hosted in a VM. Offers all the capabilities
available in JUNOS
• Management remains the same as physical MX
• SMP capable
Virtual Forwarding Plane (VFP)
• Virtualized Trio software forwarding plane. Feature
parity with physical MX. Utilizes Intel DPDK libraries
• Multi-threaded SMP implementation allows for
elasticity
• SR-IOV capable for high throughput (a host-side check is sketched below)
• Can be hosted in VM or bare-metal
Orchestration
• vMX instance can be orchestrated through OpenStack
Kilo HEAT templates
• Package comes with scripts to launch vMX instance
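Before launching, it is worth confirming that the host NIC can expose SR-IOV virtual functions for the VFP and that hugepages are available for the DPDK-based forwarding plane. A minimal host-side sketch (illustrative only, not part of the vMX package; the interface name is an assumption):

```python
import pathlib

def sriov_total_vfs(ifname="eth1"):
    """Number of virtual functions the NIC supports, or 0 if SR-IOV is unavailable."""
    path = pathlib.Path(f"/sys/class/net/{ifname}/device/sriov_totalvfs")
    return int(path.read_text()) if path.exists() else 0

def hugepages_total():
    """Number of hugepages currently configured on the host."""
    for line in open("/proc/meminfo"):
        if line.startswith("HugePages_Total"):
            return int(line.split()[1])
    return 0

print("SR-IOV VFs supported:", sriov_total_vfs())
print("Hugepages configured:", hugepages_total())
```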
11. vMX Forwarding Model
• Forwarding with Trio ASICs on MX: center chip (MQ, XM, …), lookup chip (LU, XL, …), queuing chip (QX, XQ, …)
• Forwarding with x86 on vMX: the virtualized Trio forwarding code (RIOT) running over DPDK
16. VMX QoS
[Diagram: three-level scheduler hierarchy – Level 1: port; Level 2: VLANs (VLAN 1 … VLAN n); Level 3: six queues (Q0–Q5) per VLAN at high, medium, and low priority]
§ Port:
  § Shaping-rate
§ VLAN:
  § Shaping-rate
  § Up to 4k VLANs per IFD
§ Queues:
  § 6 queues per VLAN
  § 3 priorities: 1 high, 1 medium, 4 low
§ Priority-group scheduling follows strict priority for a given VLAN
§ Queues of the same priority for a given VLAN use WRR
§ High and medium queues are capped at their transmit-rate (this behaviour is sketched below)
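A minimal Python sketch (illustrative only, not Juniper code) of the scheduling behaviour described above: strict priority between the high/medium/low groups, weighted round-robin among queues of the same priority, and a per-round cap standing in for the transmit-rate limit on the high and medium queues. Queue names, weights, and caps are assumptions for the example.

```python
from collections import deque

class Queue:
    def __init__(self, name, priority, weight=1, cap=None):
        self.name = name
        self.priority = priority   # "high", "medium" or "low"
        self.weight = weight       # WRR weight within its priority group
        self.cap = cap             # max packets per round (transmit-rate stand-in)
        self.sent = 0
        self.pkts = deque()

    def eligible(self):
        return bool(self.pkts) and (self.cap is None or self.sent < self.cap)

def serve_vlan(queues, budget):
    """Serve one VLAN for up to `budget` packets; return the order packets were sent."""
    order = []
    for prio in ("high", "medium", "low"):          # strict priority between groups
        group = [q for q in queues if q.priority == prio]
        while budget > 0 and any(q.eligible() for q in group):
            for q in group:                          # WRR among same-priority queues
                quantum = q.weight
                while quantum > 0 and budget > 0 and q.eligible():
                    order.append(q.pkts.popleft())
                    q.sent += 1
                    quantum -= 1
                    budget -= 1
    return order

# Example: Q0 high and Q1 medium (both capped), Q2-Q5 low with WRR weights.
qs = [Queue("Q0", "high", cap=4), Queue("Q1", "medium", cap=4),
      Queue("Q2", "low", weight=3), Queue("Q3", "low", weight=2),
      Queue("Q4", "low", weight=1), Queue("Q5", "low", weight=1)]
for q in qs:
    q.pkts.extend(f"{q.name}-{i}" for i in range(5))
print(serve_vlan(qs, budget=20))
```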
18. Revisit: X86 Server Architecture
[Diagram: dual-socket x86 server – two CPU sockets, each with multiple cores, local memory behind its own memory controller, and a PCI controller with attached NICs]
19. vMX Environment
Sample system configuration
• System: Intel Xeon E5-2667 v2 @ 3.30 GHz, 25 MB cache. NIC: Intel 82599 (for SR-IOV only)
• Memory: minimum 8 GB (2 GB for vRE, 4 GB for vPFE, 2 GB for host OS)
• Storage: local or NAS
Sample configuration for number of CPUs
• vMX for up to 100 Mbps performance: min # of vCPUs: 4 (1 vCPU for VCP and 3 vCPUs for VFP); min # of cores: 2 (1 core for VFP and 1 core for VCP); min memory 8 GB; VirtIO NIC only
• vMX for up to 3 Gbps of performance: min # of vCPUs: 4 (1 vCPU for VCP and 3 vCPUs for VFP); min # of cores: 4 (3 cores for VFP, 1 core for VCP); min memory 8 GB; VirtIO or SR-IOV NIC
• vMX for 3 Gbps and beyond (assuming min 2 ports of 10G): min # of vCPUs: 5 (1 vCPU for VCP and 4 vCPUs for VFP); min # of cores: 5 (4 cores for VFP, 1 core for VCP); min memory 8 GB; SR-IOV NIC only
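The sizing table above maps directly to a small lookup. A helper of this kind (illustrative only, not part of the vMX package) can make the minimum footprint explicit in automation:

```python
def vmx_min_footprint(target_gbps):
    """Return (vCPUs, cores, memory_GB, supported NICs) for a target throughput,
    following the minimums in the table above."""
    if target_gbps <= 0.1:         # up to 100 Mbps
        return 4, 2, 8, ("VirtIO",)
    if target_gbps <= 3:           # up to 3 Gbps
        return 4, 4, 8, ("VirtIO", "SR-IOV")
    # 3 Gbps and beyond (assumes at least two 10G ports)
    return 5, 5, 8, ("SR-IOV",)

print(vmx_min_footprint(1.5))      # -> (4, 4, 8, ('VirtIO', 'SR-IOV'))
```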
20. vMX Environment
Use-case 1: vMX instance up to 100Mbps
Min # of vCPUs: 4 [1 vCPU for VCP & 3 vCPUs for VFP]
Min # of Cores: 2 [1 core for VCP. 1 core for VFP]
Min memory 8G.
NIC: VirtIO is sufficient
[Diagram: use-case 1 core mapping – the VCP vCPU (JUNOS) on one core and the VFP vCPUs (I/O TX & RX, Worker) sharing a second core; remaining cores are left to the host OS]
Use-case 2: vMX instance up to 3Gbps
Min # of vCPUs: 4 [1 vCPU for VCP & 3 vCPUs for VFP]
Min # of Cores: 4 [1 core for VCP. For VFP, assume 2 ports of 1G/10G sharing a dedicated I/O core, 1 core for each Worker, 1 core for the Host Interface]
Min memory 8G.
NIC: VirtIO is sufficient. SR-IOV can also be used.
[Diagram: use-case 2 core mapping – one core for the VCP vCPU (JUNOS); the VFP vCPUs handle I/O for ports 1 and 2 (TX & RX), a Worker, and the Host Interface across the dedicated VFP cores; remaining cores are left to the host OS]
21. vMX Environment
Use-case 3: >3Gbps of throughput per instance
Assume 2 port 10G for I/O
Min # of vCPUs: 5 [1 vCPU for VCP & 4 vCPUs for VFP]
Min # of Cores: 5 [1 core for VCP. For VFP, assume 2 ports of 10G each with a dedicated I/O core, 1 core for each Worker, 1 core for the Host Interface]
Min memory 8G.
NIC: SR-IOV must be used
[Diagram: use-case 3 core mapping – one core for the VCP vCPU (JUNOS); dedicated VFP cores for I/O port 1 TX & RX, I/O port 2 TX & RX, the Host Interface, and the Workers (Worker 1 … Worker n); remaining cores are left to the host OS]
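On KVM, the core mappings above are typically enforced by pinning each guest vCPU to a host core. A sketch (illustrative only, not from the vMX install scripts) that prints virsh pinning commands for the use-case 3 VFP layout; the libvirt domain name, host core numbers, and vCPU-to-role assignments are assumptions:

```python
VFP_DOMAIN = "vfp-usecase3"        # hypothetical libvirt domain name
PINNING = {
    0: (2, "I/O port 1 TX & RX"),
    1: (3, "I/O port 2 TX & RX"),
    2: (4, "Host Interface"),
    3: (5, "Worker 1"),
}
for vcpu, (core, role) in PINNING.items():
    print(f"virsh vcpupin {VFP_DOMAIN} {vcpu} {core}   # {role}")
```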
25. vLNS for business or wholesale-retail
• A separate vLNS instance is available for each:
  • Business VPN
  • Retail ISP
• vLNS sized precisely to serve the required PPP and L2TP sessions
[Diagram: CPE attaches over PPP/PPPoE through the access node and aggregation network to the wholesale ISP's LAC/vLAC; L2TP tunnels carry the sessions to vLNS instances in the data centre, which terminate them toward the customer VPN or the retail ISP (each with its own AAA server) and the Internet]
26. Service provider vMX use case – virtual PE (vPE)
[Diagram: a vPE hosted in the DC fabric behind a DC/CO gateway; branch-office and SMB CPE connect across the provider MPLS cloud through L2/L3 PEs using pseudowire, L3VPN, or IPsec/overlay technology, with peering and Internet reachability]
27. vBNG for BNG near the CO
• The business case is strongest when a vBNG aggregates 12K or fewer subscribers
[Diagram: vBNG deployment model – CPE over DSL or fiber to an OLT/DSLAM, aggregated over Ethernet by L2 switches into a central office with cloud infrastructure hosting the vBNG, which connects to the SP core and the Internet]
28. Parts of a cloud
§ CGWR – cloud gateway router; could be a router, server, or switch
§ Switches – switch features and overlay technology as needed
§ Servers – includes cabling between servers and ToRs, mapping of virtual instances to ports, core capacity, and virtual machines
[Diagram: spine/leaf fabric with cloud gateways at the top; each server (Server-1, Server-2) runs KVM hosting VNFs such as a vLNS and other VNFs, with guest interfaces ge1–ge4 mapped to physical NICs (NIC1, NIC2) that attach to a leaf/ToR switch]
29. vMX with service chaining – potential vCPE use case
• CPE-like functionality in the cloud
[Diagram: branch-office switches reach a DC/CO gateway across the provider MPLS cloud via L2 PEs; inside the DC/CO fabric with Contrail overlay, traffic is service-chained through vMX as vCPE, a vFirewall, and a vNAT, then exits via the vPE/PE to the Internet]