The ubiquitous heavy-tailed distributions in the Internet imply an interesting feature of Internet traffic: most (e.g., 80%) of the traffic is actually carried by only a small number of connections (elephants), while the remaining large number of connections are very small in size or lifetime (mice). In a fair network environment, short connections expect relatively faster service than long connections. For these reasons, short TCP flows are generally more conservative than long flows and thus tend to get less than their fair share when they compete for the bottleneck bandwidth. In this paper, we propose to give preferential treatment to short flows with help from an Active Queue Management (AQM) policy inside the network. We also rely on the proposed Differentiated Services (Diffserv) architecture [3] to classify flows into short and long at the edge of the network. More specifically, we maintain the length of each active flow (in packets) at the edge routers and use it to classify incoming packets.
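A minimal sketch of the edge classification this paragraph describes: track the packet count of each active flow at the edge router and mark packets short or long against a threshold. The threshold value, flow-key fields, and names are illustrative assumptions, not the paper's parameters.

```python
# Sketch: classify packets as short (mice) or long (elephant) flows at the
# network edge by counting packets per flow, as the excerpt describes.
# The threshold LT and the 5-tuple key fields are illustrative assumptions.
from collections import defaultdict

LT = 32  # hypothetical packet-count threshold separating mice from elephants

flow_len = defaultdict(int)  # packets observed so far, per active flow

def classify(pkt):
    """Return a Diffserv-style class for this packet's flow."""
    key = (pkt["src_ip"], pkt["dst_ip"],
           pkt["src_port"], pkt["dst_port"], pkt["proto"])
    flow_len[key] += 1
    # A flow is treated as short until its observed length crosses LT.
    return "short" if flow_len[key] <= LT else "long"
```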
Elephant & Mice flows
1. XYZ Account 2018 Design
ExtremeCore 10/25/40/50/100G
ExtremeEdge PoE
CAPEX Components of Converged Environment

Cores:    6 / 12 / 16 / 20
Memory:   64GB / 128GB / 192GB / 256GB / 512GB
Spindles: 3.6TB / 4.8TB / 6TB / 8TB / 10TB (SSD)
Network:  10G RJ45 / SFP+ / QSFP+ / QSFP28
2018 Design: 10G Compute, Memory and Storage
Jeff Green, 2018, Rev. 1 (South)
Legend
SFP+ DAC Cables
• 10G Passive (PN 10304 ~1m, 10305 ~3m, 10306 ~5m, 10307 ~10m)
• 10G SFP+ Active copper cable (up to 100m)

QSFP+ DAC Cables
• 40G Passive (PN 10321 ~3m, 10323 ~5m)
• 40G Active (PN 10315 ~10m, 10316 ~20m, 10318 ~100m)
• 40G Fan-out (PN 10321 ~3m, 10322 ~5m, PN 10GB-4-F10-QSFP ~10m, PN 10GB-4-F20-QSFP ~20m)

10G Fiber
• 10G SR over OM3 (300m) or OM4 (400m) (PN 10301)
• 10G LR over single mode (10km) 1310nm (PN 10302)
• 10G LRM 220m (720ft, plus mode conditioning) (PN 10303)
• 10G ER over single mode (40km) 1550nm (PN 10309)
• 10G ZR over single mode (80km) 1550nm (PN 10310)

10G Copper
• 10GBASE-T over Class E Cat 6 (55m) (10G)
• 10GBASE-T over Class E Cat 6a or 7 (100m) (10G)
• 802.3bz 10GBASE-T (100m) for Cat 6 (5G)
• 802.3bz 10GBASE-T (100m) for Cat 5e (2.5G)
10G / 40G

QSFP28 (100G CLR4)
• ER4 - up to 40 km SMF, 4 λ/dir (802.3ba)
• LR4 - up to 10 km SMF, 4 λ/dir (802.3ba)
• 2 km lower-cost module (G.652 SMF - Lite)
• Wavelengths: 1295.56, 1300.05, 1304.58, 1309.14 nm
[Diagram: 100G CLR4 module internals. TX path: 4 channels of CDR + laser driver (LDD) feeding 4 CWDM DFB lasers into an optical MUX onto single-mode fiber (lanes TX0-TX3). RX path: optical DeMUX into 4 PINs + 4 TIAs, then 4 channels of CDR + limiting amplifier (LA) (lanes RX0-RX3).]
LR4 is by far the most common interface in telecom; the unmet need is for reaches between 100 m and 10 km.
OPEX Components of Converged Environment: Compute, Storage, Networking.

Make better use of existing infrastructure. Before vs. after: more applications per machine = fewer machines (cost avoidance), with pooled capacity.

Underlay
Routing/NAT
Firewall
Load Balancing
L2/L3 VPN
DHCP/DNS relay
Overall Architecture: 25G / 50G / 100G
100G => 4 x 25G lanes

QSFP28 DACs (Passive Cables)
• 1x1 DAC:
  10411 - 100Gb, QSFP28-QSFP28 DAC, 1m
  10413 - 100Gb, QSFP28-QSFP28 DAC, 3m
  10414 - 100Gb, QSFP28-QSFP28 DAC, 5m
• 4x25 DACs:
  10421 - 100Gb, QSFP28 - 4x SFP28 (4x25Gb) DAC breakout, 1m
  10423 - 100Gb, QSFP28 - 4x SFP28 (4x25Gb) DAC breakout, 3m
  10424 - 100Gb, QSFP28 - 4x SFP28 (4x25Gb) DAC breakout, 5m
• 2x50 DACs:
  10426 - 100Gb, QSFP28 - 2x SFP28 (2x50Gb) DAC breakout, 1m
  10428 - 100Gb, QSFP28 - 2x SFP28 (2x50Gb) DAC breakout, 3m

QSFP28 DACs (Active Cables)
• 1x1 DAC:
  10434 - 100Gb, QSFP28-QSFP28 DAC, 5m
  10435 - 100Gb, QSFP28-QSFP28 DAC, 7m
  10436 - 100Gb, QSFP28-QSFP28 DAC, 10m
  10437 - 100Gb, QSFP28-QSFP28 DAC, 20m
• 4x25 DACs:
  10441 - 100Gb, QSFP28 - 4x SFP28 (4x25Gb) DAC breakout, 5m
  10442 - 100Gb, QSFP28 - 4x SFP28 (4x25Gb) DAC breakout, 7m
  10443 - 100Gb, QSFP28 - 4x SFP28 (4x25Gb) DAC breakout, 10m
  10444 - 100Gb, QSFP28 - 4x SFP28 (4x25Gb) DAC breakout, 20m
[Diagram: XYZ Account Spine connecting Leaf switches in Compute PODs 1-N; ECMP across the spine; EBGP; OVSdb.]
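The diagram's ECMP works by hashing each flow onto one of the equal-cost spine uplinks, so one flow stays on one path while many flows spread out. A hedged sketch of the idea; the hash function and field choice are illustrative stand-ins for vendor-specific ASIC hashes:

```python
# Sketch: 5-tuple ECMP path selection as a leaf might perform it toward the
# spine. Real switch ASICs use vendor-specific hash polynomials and seeds.
import hashlib

def ecmp_next_hop(src_ip, dst_ip, src_port, dst_port, proto, uplinks):
    """Pick one spine uplink per flow; the same flow always hashes the same way."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return uplinks[digest % len(uplinks)]

spines = ["spine-1", "spine-2", "spine-3", "spine-4"]
print(ecmp_next_hop("10.1.1.5", "10.2.2.7", 49152, 443, 6, spines))
```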
NSX Segments: HPC, Student, Admin
[Diagram: VMs (App/OS) attached to the NSX HPC, Student, and Admin segments.]
ACTION: Start Traffic Analysis. RESULT: Not a network problem.
Accelerating mean-time-to-innocence through automation for XYZ Account network support.
Workflow Composer
Services layer for XYZ Account network software: DCI, VxLAN/MPLS integrated routing and bridging architecture, Internet peering architecture.
• Open Source: StackStorm, PySwitch, Ansible modules, OpenStack plugins, etc.
• Workflow Composer: platform, services, designer, etc.
• Automation Suites: Network Essentials, DC Fabrics, Xchange, Flow Optimizer**, Simulation & Modelling
• Visibility Manager, Session Director: user interface for the network packet broker solution
Traffic Manager | Virtual Firewall | Session Manager
XYZ Account Orchestration
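As a hedged sketch of what an automated "mean-time-to-innocence" workflow might look like: run the traffic-analysis checks and report a verdict. The check names, thresholds, and stubbed results are hypothetical illustrations, not Workflow Composer's actual API.

```python
# Sketch: automated triage that runs traffic-analysis checks and reports
# whether the network is at fault. All names and thresholds are illustrative.
def check_interface_errors(switch):        # assumed telemetry source (stubbed)
    return {"crc_errors": 0, "drops": 0}

def check_path_latency_ms(src, dst):       # assumed probe (stubbed)
    return 1.2

def traffic_analysis(src, dst, switches):
    findings = []
    for sw in switches:
        stats = check_interface_errors(sw)
        if stats["crc_errors"] or stats["drops"]:
            findings.append(f"{sw}: interface errors {stats}")
    if check_path_latency_ms(src, dst) > 10:
        findings.append("path latency above threshold")
    return findings or ["Not a network problem"]

print(traffic_analysis("host-a", "host-b", ["leaf-1", "spine-1", "leaf-2"]))
```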
Public
Clemson HPC
Underlay
Wireless user needs a connection → XYZ Account (Enterprise VRF), L2/L3
Virtual Fabric – Extension (VF-E)
• VF-Extension extends XYZ Account L2 and routing protocols (no STP).
• Uses VxLAN encapsulation in hardware.
• VLAN-to-VF mapping: 802.1Q to XYZ Account Services and Transport VF.
[Diagram: Spine/Leaf PODs with VMs; IntraDC across POD1, POD2, and the DC Core; Tenants 1-3 in each POD.]
XYZ Account HPC Solver needs a connection → XYZ Account (HPC VRF), L2/L3
NSX Overlay A | NSX Overlay B
Traffic engineering "like ATM or MPLS" over UDP, while using the existing IP network.
[Diagram: VMs on two NSX overlays; VTEPs carry the NSX HPC, Student, and Admin segments between data centers over BGP-EVPN (VxLAN) and BGP-EVPN (MPLS) across the XYZ Account Data Center Interconnect (DCI).]
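The "use the existing IP network" property comes from wrapping overlay frames in ordinary UDP/IP; the outer UDP source port is commonly derived from a hash of the inner frame so underlay ECMP spreads overlay flows (RFC 7348 recommends a hash of inner-packet fields within the dynamic port range). A sketch with an illustrative hash choice:

```python
# Sketch: derive the outer UDP source port for a VXLAN packet from the inner
# flow, so underlay ECMP spreads overlay flows. CRC32 is an illustrative
# stand-in for whatever hash a real VTEP uses.
import zlib

VXLAN_DST_PORT = 4789  # IANA-assigned VXLAN destination port

def outer_udp_src_port(inner_src_mac, inner_dst_mac, inner_ethertype):
    h = zlib.crc32(inner_src_mac + inner_dst_mac +
                   inner_ethertype.to_bytes(2, "big"))
    return 49152 + (h % (65535 - 49152))  # stay in the dynamic/ephemeral range

print(outer_udp_src_port(b"\x00\x0a\x0b\x0c\x0d\x01",
                         b"\x00\x0a\x0b\x0c\x0d\x02", 0x0800))
```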
Summit - XYZ Account Building 1 | Summit - XYZ Account Building 2 | Summit - XYZ Account Building 3
[Diagram: distributed control of who, what, when, and where.]
NAC Client → Enforcement Point (Summit) → NAC Server
Isolation Network: Check → Allow / Quarantine / Remediate
Who? Where? When? What? How?
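A minimal sketch of the NAC decision flow above, keyed on the who/where/when/what/how attributes the slide lists. The posture attributes and policy values are illustrative assumptions, not the NAC server's real schema.

```python
# Sketch: NAC check -> allow / quarantine / remediate. Policy is illustrative.
def nac_decision(endpoint):
    if not endpoint.get("authenticated"):          # who?
        return "quarantine"                        # unknown user -> isolation network
    if endpoint.get("av_signatures_stale"):        # what? (posture check)
        return "remediate"                         # push updates, then re-check
    if endpoint.get("location") not in {"Building 1", "Building 2", "Building 3"}:
        return "quarantine"                        # where?
    return "allow"

print(nac_decision({"authenticated": True, "av_signatures_stale": False,
                    "location": "Building 1"}))    # -> allow
```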
X440-G2 (L3 - Value, 1G to 10G): PoE, Fiber, DC, Policy
• SummitStack-V (without any additional license required)
• Upgradeable 10GbE (PN 16542 or 16543)
• Policy built-in (simplicity with multi-auth)
XYZ Account Wireless Edge
Router
NSX Security Architecture Overview
Internet | Intranet/Extranet
Perimeter Firewall (physical) → NSX Edge Service Gateway → Distributed FW (DFW)
[Diagram: DFW instances on the virtual compute clusters; stateful perimeter protection at the edge, inter/intra-VM protection from the DFW.]
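An illustrative sketch of the distributed-firewall idea: rules are evaluated at each VM's vNIC, so east-west (inter/intra-VM) traffic is filtered without hairpinning to a physical appliance. The rule format and group names are hypothetical; NSX's actual DFW model is richer (security groups, services, etc.).

```python
# Sketch: distributed firewall evaluation at a vNIC. First match wins,
# default deny. Rules and fields are hypothetical.
RULES = [
    {"src": "web-tier", "dst": "app-tier", "port": 8443, "action": "allow"},
    {"src": "any",      "dst": "app-tier", "port": None, "action": "deny"},
]

def dfw_allow(src_group, dst_group, dport):
    for r in RULES:
        if r["src"] in (src_group, "any") and r["dst"] in (dst_group, "any") \
           and r["port"] in (dport, None):
            return r["action"] == "allow"
    return False  # default deny

print(dfw_allow("web-tier", "app-tier", 8443))  # True
print(dfw_allow("student",  "app-tier", 22))    # False
```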
XYZ Account performance is fine to nearby sites, but terrible across the country. Why? There are three major factors that affect TCP performance (there are others, but these are the Big Three): packet loss, latency (or RTT, Round-Trip Time), and buffer/window size. All three are interrelated. UDP will not get a full 10Gbps (or more) without some tuning either. The important factors, illustrated in the sketch after this list, are:
• jumbo frames: performance will be 4-5 times better using 9K MTUs.
• packet size: best performance is MTU size minus packet header size.
• socket buffer size: setting the socket buffer to 4M seems to help a lot in most cases.
• core selection: UDP at 10G is typically CPU limited, so it's important to pick the right core. This is particularly true on Sandy/Ivy Bridge motherboards.
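A minimal sketch tying the Big Three together: the window (and buffer) needed to keep a path full is the bandwidth-delay product, which is why a long-RTT cross-country path starves with default buffers while a nearby site looks fine. The link speed and RTT numbers are examples; the 4 MB buffer follows the advice above.

```python
# Sketch: bandwidth-delay product (BDP) and applying the 4 MB socket-buffer
# advice with standard socket options. Example figures: 10 Gb/s links.
import socket

def bdp_bytes(link_bps, rtt_s):
    """Bytes in flight needed to keep the path full."""
    return int(link_bps * rtt_s / 8)

print(bdp_bytes(10e9, 0.070))  # ~87.5 MB: why cross-country needs big windows
print(bdp_bytes(10e9, 0.001))  # ~1.25 MB: why nearby sites seem fine

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4 * 1024 * 1024)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
# Note: on Linux the kernel caps these at net.core.wmem_max / rmem_max.
```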
Clemson Research Network
[Diagram: Internet/I2/NLR perimeter with PerfSonar nodes at the collaborator, CLight, peer-link DMZ, and campus boundaries; a Science DMZ behind an ACL-based F/W and route filter on the I2 Innovation Platform path; perimeter F/W for the campus; host firewalls; Clemson Innovation Platform and PalmettoNet; Brocade MLX-32 core router (CC-NIE); Fibre Channel and SamQFS behind a Dell Z9000; Dell S4810 top-of-rack switches.]
Organizing Compute
Simple Control Plane
This is where NSX will provide XYZ Account one control plane to distribute network information to ESXi hosts. NSX Controllers are clustered for scale-out and high availability.
• Network information is distributed across nodes in a Controller Cluster (slicing).
• Removes the VXLAN dependency on multicast routing/PIM in the physical network.
• Provides suppression of ARP broadcast traffic in VXLAN networks (see the sketch below).
[Diagram: switch-port profile example: IP 1.1.1.2, MAC 00:0A, QoS QP7, ACL "Deny HTTP".]
VXLAN MAC Learning
[Diagram: an ARP request enters at VTEP 1 (1.1.1.1); the control-plane table (MAC of VM 2 → VTEP 2, 2.2.2.2) answers it, and the ARP response returns via VTEP 1 without flooding to VTEP 3 (3.3.3.3).]
NSX Control Plane: VXLAN VTEPs
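A hedged sketch of the ARP-suppression mechanism the diagram illustrates: the controller keeps a MAC/IP-to-VTEP table learned from hosts, so a known ARP request is answered from the table instead of being flooded. The table layout mirrors the diagram's example; NSX's actual controller protocol is not shown.

```python
# Sketch: controller-based ARP suppression in a VXLAN overlay.
# Table contents follow the diagram: "VM 2" lives behind VTEP 2 (2.2.2.2).
arp_table = {
    "1.1.1.20": {"mac": "00:00:00:00:00:0B", "vtep": "2.2.2.2"},
}

def handle_arp_request(target_ip, ingress_vtep):
    entry = arp_table.get(target_ip)
    if entry:
        # Known binding: answer directly toward the requester's VTEP.
        return f"ARP reply {entry['mac']} sent toward {ingress_vtep}; no flood"
    # Unknown binding: fall back to flooding the VNI (or multicast without NSX).
    return "flood ARP to all VTEPs in the VNI"

print(handle_arp_request("1.1.1.20", "1.1.1.1"))  # suppressed, unicast reply
```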
Elephant & Mice flows
XYZ Account needs to avoid collisions in two ways: upwards and downwards. Some devices may have a shared-memory infrastructure and not be capable of per-interface tuning (Virtual Output Queue: SLX 9850 - 4 or 6 GB per 6-port group VoQ / SLX 9850 - 6 GB VoQ).
All flows are short until they become long. Larger buffers (elephants) need increased headroom, or more buffers, but that can increase queuing latency (mice). We want the best of both worlds, where XYZ Account can have the headroom for large flows (elephants) with no increase in latency for burst performance (mice); see the sketch below.
Elephants Waste Buffer
Elephants build up large queues, leaving little or no buffer for mice, because TCP eats up all available buffer (until packet drop); as a result, XYZ Account application performance suffers.
[Diagram: buffer headroom for burst; an express lane for mice.]
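A toy sketch of the "express lane" idea: detect elephants by bytes sent, queue them separately, and serve the mice queue first, so elephant queue buildup keeps its deep-buffer headroom without adding latency to bursts of mice. The threshold and scheduling policy are illustrative assumptions.

```python
# Sketch: two-queue scheduling with an express lane for mice.
from collections import defaultdict, deque

ELEPHANT_BYTES = 1_000_000          # hypothetical detection threshold
flow_bytes = defaultdict(int)
mice_q, elephant_q = deque(), deque()

def enqueue(pkt):
    flow_bytes[pkt["flow"]] += pkt["size"]
    q = elephant_q if flow_bytes[pkt["flow"]] > ELEPHANT_BYTES else mice_q
    q.append(pkt)

def dequeue():
    # Strict priority to mice keeps burst latency low; elephants still drain
    # whenever the mice queue is empty.
    if mice_q:
        return mice_q.popleft()
    return elephant_q.popleft() if elephant_q else None
```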
Integration into the VMware ecosystem

vRealize Operations / Log Insight
• Comprehensive operational data for SDDC analytics
• Physical network visibility for app and VM performance
• Predefined alerts and recommended actions
• Simplifies root-cause analysis

NSX Gateway
• Physical network access for virtual workloads
• High-performance bridging across data center fabrics
• Simple operations with a single point of integration

vCenter
• VM-to-LUN storage visibility and monitoring
• Performance visibility and forwarding (including events)
• Empowers VI admins to help identify bottlenecks
L2 Gateway for VMware NSX
This is how XYZ Account unifies virtual and physical networks, allowing virtualized workloads to access resources on physical networks. A VTEP (VXLAN Tunnel Endpoint) is a logical interface (VMkernel) that connects to the transport zone to encapsulate/decapsulate VXLAN traffic.
Logical VTEP:
• Availability: up to 4 active-active VTEPs delivering fast convergence.
• Control: simplifies operations through a single point of integration and provisioning with VMware NSX.
• Logical chassis: single VTEP configuration and uniform VNI-to-VLAN mapping for the fabric (see the sketch below).
[Diagram: spine/leaf VCS Fabric acting as the Brocade VCS Gateway for VMware NSX; VXLAN from virtualized workloads in the compute rack; VLAN (10G) to physical workloads on servers/blades.]
Example topology: VTEP at spine with VCS.
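A minimal sketch of the gateway's data-path job the bullets describe: map a VNI to a VLAN and strip or add the VXLAN header. The 8-byte header layout follows RFC 7348; the VNI-to-VLAN mapping values are examples, not a real configuration.

```python
# Sketch: VNI<->VLAN mapping at an L2 gateway, plus RFC 7348 VXLAN header
# pack/unpack. Mapping values are illustrative.
import struct

VNI_TO_VLAN = {5001: 10, 5002: 20}  # uniform fabric-wide mapping, per the text

def vxlan_header(vni):
    # Flags byte 0x08 = "VNI present"; the VNI occupies the upper 24 bits
    # of the second 32-bit word, with the low byte reserved.
    return struct.pack("!II", 0x08 << 24, vni << 8)

def decap_to_vlan(vxlan_pkt):
    _, word2 = struct.unpack("!II", vxlan_pkt[:8])
    vni = word2 >> 8
    inner_frame = vxlan_pkt[8:]
    return VNI_TO_VLAN[vni], inner_frame  # forward the inner frame on this VLAN

vlan, frame = decap_to_vlan(vxlan_header(5001) + b"\x00" * 60)
print(vlan)  # 10
```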
Software
Hardware
Data Center Virtualization Layer
NSX is agnostic to the underlay: L2 or L3 or any combination. Only two requirements: IP connectivity, and an MTU of 1600 (VXLAN encapsulation adds roughly 50 bytes of outer headers, so a standard 1500-byte frame needs the extra room).
Routing/NAT
Firewall
Load Balancing
L2/L3 VPN
DHCP/DNS relay (DDI)