Contrail 3.0.2 cloud solution with Nested KVM Guests
Sethuraman Ramanathan
Agenda
• Data Center Orchestration
• Overlay Networks
• MPLSOVERGRE/MPLSOVERUDP Use case
• Nested Virtualization
• How to create Nested Virtualization
• Tracing route between Nested guest and Physical host
• Gateway Configuration
Data Center Orchestration
• Orchestration
– Compute: deliver the VM
– Network: connect the VM to the network
– Storage: connect the VM to storage
(Figure: the orchestrator (OpenStack) and SDN controller manage compute, network,
and storage; VMs run on servers over the underlay network, behind a gateway to the
Internet VPN / DCI WAN and service nodes.)
• OpenStack is a set of software tools for building and managing
cloud computing platforms for public and private clouds.
• OpenStack is considered to provide Infrastructure as a Service
(IaaS): it makes it easy for users to quickly add new instances,
upon which other cloud components can run.
OpenStack
• Contrail Controller
– Control plane
– Logically centralized, physically distributed
– Management, control, and analytics
– Manages the vRouters
• Contrail vRouter
– Forwarding plane
– Extends physical network to virtual overlay network
• Provides Layer 2 and Layer 3 services
OpenContrail
• Overlay networking
– Physical—underlay network
• Routers and switches
• Provides IP connectivity
• Uniform low-latency, non-blocking, high-bandwidth connectivity
• No per-tenant state
– Virtual—overlay network
• vRouters create overlay network on top of the underlay network
• MPLS over GRE tunnels
• MPLS over UDP tunnels
• VXLAN tunnels
Overlay Networking
MPLSOVERGRE/MPLSOVERUDP Overlays use case with MX Gateway
Use case Topology
(Figure: the MX gateway BGP-peers with the Contrail controllers across the fabric.)
MPLSOVERGRE AND MPLSOVERUDP FRAME FORMAT
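(The frame-format figure is not reproduced in this export. As a rough sketch, the
two encapsulations differ only in the transport header carried between the outer
IP header and the MPLS label:

MPLSoGRE: | Outer Ethernet | Outer IP | GRE header | MPLS label | Inner IP packet |
MPLSoUDP: | Outer Ethernet | Outer IP | UDP header | MPLS label | Inner IP packet |

The MPLS label identifies the destination VRF/VM on the receiving vRouter or gateway.)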
USE CASE DESCRIPTION
• Two clusters: CLUSTER-1 and CLUSTER-2.
• Each cluster has 2000 physical servers (compute
nodes), for a total of 4000 compute nodes.
• Each compute node has one vRouter. vRouters in
one cluster set up the same type of tunnels to
the gateways.
• The requirement is to have MPLSOVERGRE from
one cluster and MPLSOVERUDP from the other
cluster.
[edit]
# run show dynamic-tunnels database terse
Table: inet.3
Destination-network: 30.30.0.0/16
Destination Source Next-hop Type Status
30.30.0.16/32 10.255.181.172 0x48765a4 nhid 657 udp Up
30.30.0.8/32 10.255.181.172 0x487afdc nhid 658 udp Up
<….>
Destination-network: 40.40.0.0/16
Destination Source Next-hop Type Status
40.40.0.3/32 10.255.181.172 0x487b404 nhid 679 gre Up
40.40.0.3/32 10.255.181.172 0x487a65c nhid 678 gre Up
<..>
[edit]
MPLSOVERUDP/MPLSOVERGRE Overlay status
Gateway to Contrail BGP peer status
> show bgp summary
Groups: 1 Peers: 3 Down peers: 0
Table Tot Paths Act Paths Suppressed History Damp State Pending
bgp.l3vpn.0
52 26 0 0 0 0
bgp.l3vpn.2
0 0 0 0 0 0
Peer AS InPkt OutPkt OutQ Flaps Last Up/Dwn
State|#Active/Received/Accepted/Damped...
2.2.2.2 64512 1859 1278 0 1 4:05:34 Establ
bgp.l3vpn.0: 17/17/17/0
vrf1.inet.0: 10/10/10/0
vrf2.inet.0: 7/7/7/0
2.2.2.3 64512 1891 1276 0 1 4:05:22 Establ
bgp.l3vpn.0: 9/17/17/0
vrf1.inet.0: 4/9/9/0
vrf2.inet.0: 5/8/8/0
2.2.2.4 64512 2009 1278 0 1 4:05:37 Establ
bgp.l3vpn.0: 0/18/18/0
vrf1.inet.0: 0/9/9/0
vrf2.inet.0: 0/9/9/0
MPLSOVERGRE Virtual machine Routes
> show route 100.100.2.3/32 table vrf2.inet.0 detail
vrf2.inet.0: 38 destinations, 51 routes (38 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
100.100.2.3/32 (2 entries, 1 announced)
*BGP Preference: 170/-201
<….>
Source: 2.2.2.3
Next hop type: Tunnel Composite, Next hop index: 680
Next hop: , selected
Label operation: Push 17
Label TTL action: prop-ttl
Load balance label: Label 17: None;
<..>
Protocol next hop: 40.40.0.3
Label operation: Push 17
<….>
MPLSOVERGRE Virtual machine Routes
> show route 40.40.0.3 table inet.3 extensive
inet.3: 28 destinations, 28 routes (28 active, 0 holddown, 0 hidden)
40.40.0.3/32 (1 entry, 1 announced)
*Tunnel Preference: 300
Next hop type: Tunnel Composite, Next hop index: 0
Address: 0x48749bc
Next-hop reference count: 2
Tunnel type: GRE, Reference count: 4, nhid: 0
Destination address: 40.40.0.3, Source address: 10.255.181.172
Tunnel id: 268435470
State: <Active>
Local AS: 64512
Age: 10:22
Validation State: unverified
Task: RT
Announcement bits (2): 0-Resolve tree 1 2-Resolve_IGP_FRR task
AS path: I
>
MPLSOVERUDP Virtual machine Routes
> show route 100.100.1.9/32 table vrf1.inet.0 detail
vrf1.inet.0: 40 destinations, 55 routes (40 active, 0 holddown, 0 hidden)
100.100.1.9/32 (2 entries, 1 announced)
*BGP Preference: 170/-201
Route Distinguisher: 30.30.0.9:1
Next hop type: Indirect, Next hop index: 0
Address: 0x487ac4c
Next-hop reference count: 5
Source: 2.2.2.3
Next hop type: Tunnel Composite, Next hop index: 678
Next hop: , selected
Label operation: Push 16
<…>
Protocol next hop: 30.30.0.9
Label operation: Push 16
<….>
Task: BGP_64512.2.2.2.3
Announcement bits (1): 0-KRT
AS path: ?
MPLSOVERUDP Virtual machine Routes
> show route 30.30.0.9 table inet.3 extensive
inet.3: 28 destinations, 28 routes (28 active, 0 holddown, 0 hidden)
30.30.0.9/32 (1 entry, 1 announced)
*Tunnel Preference: 300
Next hop type: Tunnel Composite, Next hop index: 0
Address: 0x487a52c
Next-hop reference count: 2
Tunnel type: UDP, Reference count: 3, nhid: 0
Destination address: 30.30.0.9, Source address: 10.255.181.172
Tunnel id: 1610612758
State: <Active>
Local AS: 64512
Age: 8:45
Validation State: unverified
Task: RT
Announcement bits (2): 0-Resolve tree 1 2-Resolve_IGP_FRR task
AS path: I
>
Nested Virtualization
Use cases
• Testing/development environment:
Your company is building a new solution and needs a virtual IT lab to rapidly
create and provision environments for build verification, test automation, and/or
manual testing. You can deploy nested ESXi/KVM as a test/development platform
without spending money on hardware. In our case we need to scale-test
MPLSOVERUDP/MPLSOVERGRE tunnels with Contrail.
• Nested virtualization in public cloud:
Enterprises can run hypervisors like KVM on AWS with low performance impact.
https://www.ravellosystems.com/blog/run-nested-kvm-on-aws-google/#more-6289
• Virtual education and training:
A virtual IT lab can be created for training purposes.
Non Nested KVM Hypervisor Solution
(Figure: a gateway router connects the WAN/Core to the compute nodes over overlay tunnels.)
Overlay Tunnel Scale vs compute nodes
(Figure: overlay tunnels run between the virtual machines and the controller/gateway
across an L3 fabric toward the WAN/Core.)
• To test tunnel scale,
– we need to increase the compute node scale.
– In one use case the customer deploys 4000 servers. It is
not practical to have 4000 physical servers in the
lab to test 4000 tunnels.
Testing Challenges
Nested KVM Hypervisor Solution
(Figure: a physical server runs CentOS with the KVM host hypervisor. Bridge br0
connects uplink eth1 to three nested CentOS guests (VM1, VM2, VM3), each running a
KVM guest hypervisor with a vRouter and hosting a Cirros VM in its guest user space.
eth0 carries the management link; the Layer 3 underlay network connects the nested
guests to the Contrail controller/orchestrator and to a gateway router toward the
WAN/Core over overlay tunnels.)
• We can use nested KVM hypervisor guests.
– On one physical server we can create multiple
nested hypervisor guests.
– Each nested hypervisor guest takes 4 GB RAM, a 25 GB
hard disk, and 2 vCPUs (a hedged creation sketch follows).
Nested KVM Hypervisor Solution
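As an illustration (not from the original deck), one nested hypervisor guest with
this footprint could be created on the host hypervisor roughly as follows; the VM
name, disk path, and install media path are assumptions:

# Hypothetical sketch: create one nested hypervisor guest (4 GB RAM, 2 vCPUs, 25 GB disk).
# --cpu host-passthrough exposes the host CPU features (including VMX) to the guest,
# so the guest can itself run KVM; --network bridge=br0 attaches it to the bridge.
virt-install \
  --name vmg8 \
  --ram 4096 \
  --vcpus 2 \
  --cpu host-passthrough \
  --disk path=/var/lib/libvirt/images/vmg8.qcow2,size=25 \
  --network bridge=br0 \
  --cdrom /iso/CentOS-7-x86_64.iso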
Nested Hypervisors as Compute hosts in
OpenStack
How to create Nested Virtualization
• First, install CentOS on a bare-metal server.
This server should have a minimum of two
interfaces (eth0 and eth1).
• Install the KVM hypervisor (host hypervisor) on top of
CentOS.
• Create bridge br0 on CentOS and connect the eth1 port to
this bridge. This bridge will connect all the nested
VMs.
• Create a guest VM on the KVM host
hypervisor and connect this guest VM to the bridge br0
via a vnet interface. (A hedged shell sketch of these
host-side steps follows this slide.)
Steps to create Nested KVM hypervisors
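A minimal sketch of the host-side preparation, assuming an Intel CPU and CentOS 7
defaults (module, file, and interface names may differ on other systems):

# Enable nested virtualization in the KVM kernel module (Intel CPUs)
cat /sys/module/kvm_intel/parameters/nested      # should print Y (or 1) once enabled
echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf
modprobe -r kvm_intel && modprobe kvm_intel      # reload with all guests shut down

# Create bridge br0 and attach the eth1 port as its uplink
brctl addbr br0
brctl addif br0 eth1
ip link set dev br0 up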
• After this, install KVM (the guest hypervisor) on the guest
VM.
• Now the compute host is ready for use.
• Add this host as a compute host in the Contrail
controller/OpenStack controller (a hedged verification
sketch follows).
Steps to create Nested KVM hypervisors
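A hedged way to confirm the nested guest registered as a compute host (exact
commands and script arguments depend on the Contrail/OpenStack release; the
values below are placeholders from this lab):

# On the OpenStack controller, the new host should show up as a hypervisor:
nova hypervisor-list

# Contrail 3.x deployments typically register the vRouter with a provisioning
# helper on the config node, roughly like this:
python /opt/contrail/utils/provision_vrouter.py \
    --host_name vmg8 --host_ip 50.50.0.8 \
    --api_server_ip <config-node-ip> --oper add \
    --admin_user admin --admin_password <password> --admin_tenant_name admin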
Bridge br0 in the Linux host
~]# brctl show
bridge name bridge id STP enabled interfaces
br0 8000.54ab3a243c78 yes ens20f1
vnet0
vnet1
vnet2
vnet3
vnet4
vnet5
vnet6
vnet7
vnet8
vnet9
<..>
~]#
• The vnet interfaces connect bridge br0 to the nested VMs.
• Interface ens20f1 is the UPLINK for the bridge br0.
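On CentOS the bridge is usually made persistent with ifcfg files; a sketch, where
the interface name matches this lab and the addressing is an assumption:

# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
IPADDR=50.50.0.2          # assumed host address on the nested-VM subnet
NETMASK=255.255.0.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-ens20f1   (the uplink shown above)
DEVICE=ens20f1
TYPE=Ethernet
BRIDGE=br0
ONBOOT=yes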
Status of VMs
[ SERVER2~]# virsh list
Id Name State
----------------------------------------------------
14 vmg10 running
15 vmg11 running
16 vmg12 running
17 vmg13 running
18 vmg4 running
19 vmg5 running
20 vmg6 running
21 vmg7 running
22 vmg8 running
23 vmg9 running
[~]#
• VM status
How to connect to nested VM
[ SERVER2~]# ssh root@50.50.0.8
root@50.50.0.8's password:
Last login: Wed Aug 3 20:01:28 2016 from static-50-50-0-2.snpr.wi.frontiernet.net
[root@vmg8 ~]#
• SSH to the nested VM from the physical host.
vRouter and nova-compute status on the
Nested Guest
[root@vmg8 ~]# contrail-status
== Contrail vRouter ==
supervisor-vrouter: active
contrail-vrouter-agent active
contrail-vrouter-nodemgr active
[root@vmg8 ~]#
[root@vmg8 ~]# openstack-status
== Nova services ==
openstack-nova-api: inactive (disabled on boot)
openstack-nova-compute: active
openstack-nova-network: inactive (disabled on boot)
openstack-nova-scheduler: inactive (disabled on boot)
== Support services ==
dbus: active
Warning novarc not sourced
[root@vmg8 ~]#
Route tracing/packet forwarding between the
nested guest and the physical host
Lab Setup
(Figure: the controllers BGP-peer with the gateway and XMPP-peer with the vRouters;
an overlay tunnel runs between the virtual machines across the L3 fabric and TOR,
with the WAN/Core behind the gateway.)
• VMA sends a packet to VMB.
• The packet is encapsulated with an MPLSOVERGRE
header by the vRouter in nested guest vmg8.
• The packet is sent to the underlay. The underlay routes the
packet to physical host sdn-server14.
• The sdn-server14 vRouter decapsulates the packet and
removes the MPLSOVERGRE header.
• The packet is sent to VMB. (A hedged tcpdump sketch for
watching this encapsulation follows.)
Route tracing/packet forwarding between the
nested guest and the physical host
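A hedged way to watch this encapsulation on the wire (the interface name is from
this lab; whether GRE or UDP is seen depends on the tunnel type in use):

# GRE-encapsulated overlay traffic on the underlay-facing interface:
tcpdump -ni ens20f1 ip proto gre

# MPLS-over-UDP tunnels show up as UDP instead; 6635 is the IANA-assigned
# MPLS-over-UDP port, though deployments may use a different one:
tcpdump -ni ens20f1 udp port 6635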
Route Distribution
(Figure, with outer MAC headers left out to reduce clutter:)
• VM-A (100.100.2.19) runs on vmg8, the nested VM at 50.50.0.8; VM-B (100.100.2.3)
runs on SERVER2 at 40.40.0.3. Each host has a VRF with dynamic tunnel encapsulation
and a vRouter agent speaking XMPP to the control node across the IP network.
• The control node distributes 100.100.2.19: NH = 50.50.0.8; LBL = 16 toward
SERVER2, and 100.100.2.3: NH = 40.40.0.3; LBL = 16 toward vmg8.
• Inside the VMs the packet is:
PriDstIP 100.100.2.3 | PriSrcIP 100.100.2.19 | PAYLOAD
• On the wire it becomes:
PubDstIP 40.40.0.3 | PubSrcIP 50.50.0.8 | GRE | LBL=16 | PriDstIP 100.100.2.3 | PriSrcIP 100.100.2.19 | PAYLOAD
Route tracing in Nested VM
[root@vmg8 ~]# vif --list
Vrouter Interface Table
vif0/3 OS: tap522faa85-ac
Type:Virtual HWaddr:00:00:5e:00:01:00 IPaddr:0
Vrf:1 Flags:PL3L2D MTU:9160 Ref:6
RX packets:5370562 bytes:4148715885 errors:0
TX packets:5476498 bytes:4306144288 errors:0
• This tap interface on machine vmg8 connects to the VM instance.
• The VRF associated with this VM is 1.
Route tracing in Nested VM
[root@vmg8 ~]# rt --dump 1 | grep 100.100.2.3
Destination PPL Flags Label Nexthop Stitched MAC(Index)
100.100.2.3/32 32 LP 16 34 2:5f:4f:a9:8c:f3(251320)<..>
[root@vmg8 ~]# nh --get 34
Id:34 Type:Tunnel Fmly: AF_INET Rid:0 Ref_cnt:3 Vrf:0
Flags:Valid, MPLSoUDP,
Oif:0 Len:14 Flags Valid, MPLSoUDP, Data:02 00 08 00 01 93 52 54 00 8a a8 ac 08 00
Vrf:0 Sip:50.50.0.8 Dip:40.40.0.3
[root@vmg8 ~]#
• Label 16 is mapped to the route 100.100.2.3/32.
• The next-hop id for this route is 34.
• The next-hop id is mapped to 40.40.0.3, which is the physical host sdn-server14.
• The Oif field shows which interface the packets will be sent out of.
Route tracing in Nested VM
[root@vmg8 ~]# vif --get 0
Vrouter Interface Table
Flags: P=Policy, X=Cross Connect, S=Service Chain, Mr=Receive Mirror
Mt=Transmit Mirror, Tc=Transmit Checksum Offload, L3=Layer 3, L2=Layer 2
D=DHCP, Vp=Vhost Physical, Pr=Promiscuous, Vnt=Native Vlan Tagged
Mnp=No MAC Proxy, Dpdk=DPDK PMD Interface, Rfl=Receive Filtering Offload,
Mon=Interface is Monitored
Uuf=Unknown Unicast Flood, Vof=VLAN insert/strip offload
vif0/0 OS: eth0
Type:Physical HWaddr:52:54:00:8a:a8:ac IPaddr:0
Vrf:0 Flags:TcL3L2Vp MTU:1514 Ref:25
RX packets:6645022 bytes:4577745689 errors:0
TX packets:6050943 bytes:6037550179 errors:0
[root@vmg8 ~]#
• The packet will be sent on the interface eth0 of nested VM vmg8.
Route tracing Physical compute host
[SERVER2 ~]# vif --list
Vrouter Interface Table
vif0/3 OS: tap5f4fa98c-f3
Type:Virtual HWaddr:00:00:5e:00:01:00 IPaddr:0
Vrf:1 Flags:PL3L2D MTU:9160 Ref:6
RX packets:352295 bytes:271632089 errors:0
TX packets:805538 bytes:944795106 errors:0
• This tap interface on machine sdn-server14 connects to the VM instance.
• The VRF associated with this VM is 1.
Route tracing Physical compute host
server2# rt --dump 1|grep 100.100.2.19
Destination PPL Flags Label Nexthop Stitched MAC(Index)
100.100.2.19/32 32 LP 16 18 2:52:2f:aa:85:ac(101680)
<..>
[ server2~]# nh --get 18
Id:18 Type:Tunnel Fmly: AF_INET Rid:0 Ref_cnt:3 Vrf:0
Flags:Valid, MPLSoUDP,
Oif:0 Len:14 Flags Valid, MPLSoUDP, Data:02 00 08 00 00 40 f4 e9 d4 92 2f a0 08 00
Vrf:0 Sip:40.40.0.3 Dip:50.50.0.8
[server2~]#
• Label 16 is mapped to the route 100.100.2.19/32.
• The next-hop id for this route is 18.
• The next-hop id is mapped to 50.50.0.8, which is the nested VM vmg8.
• The Oif field shows which interface the packets will be sent out of.
Route tracing Physical compute host
[server2 ~]# vif --get 0
Vrouter Interface Table
Flags: P=Policy, X=Cross Connect, S=Service Chain, Mr=Receive Mirror
Mt=Transmit Mirror, Tc=Transmit Checksum Offload, L3=Layer 3, L2=Layer 2
D=DHCP, Vp=Vhost Physical, Pr=Promiscuous, Vnt=Native Vlan Tagged
Mnp=No MAC Proxy, Dpdk=DPDK PMD Interface, Rfl=Receive Filtering Offload,
Mon=Interface is Monitored
Uuf=Unknown Unicast Flood, Vof=VLAN insert/strip offload
vif0/0 OS: p2p1 (Speed 10000, Duplex 1)
Type:Physical HWaddr:f4:e9:d4:92:2f:a0 IPaddr:0
Vrf:0 Flags:L3L2Vp MTU:1514 Ref:15
RX packets:4032675 bytes:2038271751 errors:0
TX packets:3376992 bytes:1348939594 errors:0
[server2 ~]#
• The packet will be sent on the interface p2p1 of server sdn-server14.
[edit]
# show routing-options
autonomous-system 64512;
dynamic-tunnels {
gre next-hop-based-tunnel;
contrail {
source-address 10.255.181.172;
udp;
destination-networks {
30.30.0.0/16;
}
}
contrail-gre {
source-address 10.255.181.172;
gre;
destination-networks {
40.40.0.0/16;
50.50.0.0/16;
}
}
}
MX Gateway – Dynamic tunnels configuration
MX Gateway – BGP peering with controllers
[edit]
# show protocols bgp
group contrail {
type internal;
traceoptions {
file bgp-log size 100m;
flag all;
}
local-address 10.255.181.172;
family inet-vpn {
any;
}
neighbor 2.2.2.2;
neighbor 2.2.2.3;
neighbor 2.2.2.4;
}
[edit]
#
MX Gateway – vrf configuration
[edit]
# show routing-instances
vrf1 {
instance-type vrf;
interface lo0.1;
route-distinguisher 64512:1;
vrf-import test1;
vrf-export test1-export;
inactive: vrf-target target:64512:1;
vrf-table-label;
}
vrf2 {
instance-type vrf;
interface lo0.2;
route-distinguisher 64512:2;
vrf-import test2;
vrf-export test2-export;
vrf-table-label;
}
[edit]
MX Gateway – communities used for
MPLSOVERUDP overlay
[edit]
# show policy-options policy-statement test1
term 1 {
from community test;
then accept;
}
[edit]
# show policy-options policy-statement test1-export
term 1 {
from protocol [ direct ospf ];
then {
community add test;
community add encap-udp;
accept;
}
}
[edit]
# show policy-options community encap-udp
members 0x030c:64512:13;
[edit]
# show policy-options community test
members target:64512:1;
[edit]
#
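(For reference: 0x030c is the BGP encapsulation extended-community type, and the
trailing value maps to the IANA tunnel-encapsulation type: 13 for MPLS-in-UDP here,
and 11 for MPLS-in-GRE on the next slide. This is how the gateway signals the
desired overlay encapsulation per VRF.)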
[edit]
# show policy-options policy-statement test2-export
term 1 {
from protocol [ direct ospf ];
then {
community add encap-gre;
community add test2;
accept;
}
}
[edit]
# show policy-options policy-statement test2
term 1 {
from community test2;
then accept;
}
MX Gateway – communities used for
MPLSOVERGRE overlay
[edit]
# show policy-options community test2-export
[edit]
# show policy-options community encap-gre
members 0x030c:64512:11;
[edit]
# show policy-options community test2
members target:64512:2;
[edit]
MX Gateway – communities used for
MPLSOVERGRE overlay