Scalable midsize data center designs
- 3. “Scalable Midsize Data Center Designs”
So what exactly do we mean?
SCALABLE:
– Right-sizing the Data Center, not just large scale.
– Using components that will also transition easily into larger designs.
MIDSIZE:
– Requiring a dedicated pair of DC switches.
– The transition point upwards from collapsed-core.
– Separate Layer 2/3 boundary, with DC-oriented feature set.
– Layer-2 edge switching for virtualization.
[Diagram: Client Access/Enterprise and WAN/Internet Edge connecting to the Data Center, with the Layer 3 / Layer 2 boundary at the Data Center switches]
DATA CENTER DESIGNS:
– Topology options from single to dual-tier data centers.
– Tradeoffs of components to fill topology roles.
- 4. Session Agenda
Midsize Data Center Requirements
– Goals and Challenges
– Fabric Requirements
Starting Point: The Access Pod
– Compute and Storage Edge Requirements
– Key Features
Single Pod Design Examples
– Nexus 5500, 6000, 7000-based
– vPC Best Practices
Moving to a Multi-Tier Fabric
– Spine/Leaf Designs
– Best Practices with FabricPath
- 5. Midsize Data Center Goals and Challenges
Provide example designs which:
– Allow growing organizations to take advantage of the same Nexus Data Center features and innovations as larger organizations.
– Balance cost with port density, software features, and hardware capabilities.
– Allow rapid growth of the network as needs change, with reuse of components in new roles.
– Provide ease of implementation and operational simplicity for organizations managing a growing network with a small staff.
Choose features to prioritize when making design choices:
– Leaf/Access Features: Robust FEX options, Enhanced vPC, 10GBASE-T support, Unified Ports (Native Fibre Channel), FCoE, Adapter-FEX, VM-FEX
– Spine/Aggregation Features: 40-GigE, Routing Scale, OTV, MPLS, HA, VDCs
- 6. Growth with Investment protection
Re-Use of key components as the design scales
A single layer expands to dual-tier with an additional switch pair to form an aggregation/access design.
[Diagram: Single Layer DC models growing into a Dual Tier DC and a scalable Spine/Leaf DC fabric]
Easily scale the fabric by adding switches:
– Add Spine switches to scale fabric bandwidth
– Add Leaf switches to scale edge port density
- 7. Server Edge Requirements Drive Design Choices
Form Factor
– Unified Computing Fabric
– 3rd Party Blade Servers
– Rack Servers (Non-UCSM)
Virtualization Requirements
– vSwitch/DVS
– Nexus 1000v
– VM-FEX HW Switching
NIC Connectivity Model
– 10 or 1-GigE Server ports
– Physical Interfaces per server
– NIC Teaming models
Storage Protocols
– Fibre Channel
– FCoE
– iSCSI, NAS
[Diagram: virtualized servers (VMs) connecting to FC, FCoE, iSCSI, and NFS/CIFS storage]
- 8. Communications Fabric Requirements
Varied "North-South" communication needs with users and external entities.
Flexibility to support multiple protocols and connectivity types.
High throughput and low latency requirements.
Increasing availability requirements.
Increasing "East-West" communication needs: clustered applications and workload mobility.
[Diagram: the Data Center Fabric carries north-south traffic to the Enterprise Network, Internet, Cloud, mobile users, and an offsite DC (Site B), and east-west traffic between servers/compute and FC, FCoE, and iSCSI/NAS data storage services]
- 9. Session Agenda
Midsize Data Center Requirements
– Goals and Challenges
– Fabric Requirements
Starting Point: The Access Pod
– Compute and Storage Edge Requirements
– Key Features
Single Pod Design Examples
– Nexus 5500, 6000, 7000-based
– vPC Best Practices
Moving to a Multi-Tier Fabric
– Spine/Leaf Designs
– Best Practices with FabricPath
- 10. Access Pod basics: Compute, Storage, and Network
“Different Drawing, Same Components”
[Diagram: an access/leaf switch pair uplinked to the Data Center aggregation or network core, with a UCS Fabric Interconnect system and a storage array below]
- 11. Access Pod Features: Virtual Port Channel (vPC)
Port-Channels allow aggregation of multiple physical links into a logical bundle.
vPC allows Port-Channel link aggregation to span two separate physical switches.
With vPC, Spanning Tree Protocol is no longer the primary means of loop prevention.
vPC provides more efficient bandwidth utilization, since all links are actively forwarding.
vPC maintains independent control and management planes.
Two peer vPC switches are joined together to form a "domain".
[Diagram: physical vs. logical topology of a non-vPC Layer-2 design compared to a Virtual Port Channel design]
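To make the vPC concepts above concrete, here is a minimal NX-OS configuration sketch for one of the two peer switches; the domain number, keepalive addresses, and interface numbers are hypothetical examples, and the mirror-image configuration is applied on the second peer.

! Minimal vPC sketch (hypothetical numbering; repeat on the peer with addresses reversed)
feature lacp
feature vpc
vpc domain 10
  peer-keepalive destination 192.168.0.2 source 192.168.0.1 vrf management
! Peer-link joining the two vPC peer switches
interface port-channel 1
  switchport mode trunk
  vpc peer-link
! Downstream port-channel that spans both peers (same vpc number on each switch)
interface port-channel 20
  switchport mode trunk
  vpc 20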
- 12. Access Pod Features: Nexus 2000 Fabric Extension
Using FEX provides Top-of-Rack presence in more racks with reduced points of management, less cabling, and lower cost.
In a "straight-through" FEX configuration, each Nexus 2000 FEX is only connected to one parent switch.
Supported straight-through FEX parent switches are the Nexus 7000, 6000, or 5000 Series.
The same Fabric Extension technology is used between the UCS Fabric Interconnect and the I/O Modules in blade chassis.
See current platform FEX scale numbers on cisco.com under the configuration guides.
[Diagram: End/Middle of Row switching with FEX; a Nexus parent switch with Nexus 2000 FEX serving dual-NIC active/standby and dual-NIC 802.3ad servers]
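As a minimal sketch of the straight-through FEX attachment described above (FEX number, interface ranges, and VLAN are hypothetical), the parent switch configuration could look like this:

! Straight-through FEX sketch: the Nexus 2000 is attached to a single parent switch
feature fex
fex 101
  description Rack-1-FEX
! Fabric uplinks from the parent switch to the FEX
interface ethernet 1/1-4
  channel-group 101
interface port-channel 101
  switchport mode fex-fabric
  fex associate 101
! Host ports on the FEX then appear as local interfaces on the parent switch
interface ethernet 101/1/1
  switchport access vlan 10
  spanning-tree port type edge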
- 13. Access Pod Features: Enhanced vPC (EvPC)
In an Enhanced vPC configuration, any server NIC teaming configuration is supported on any port; there are no "orphan ports" in the design.
All components in the network path are fully redundant: every port has a physical path to both parent switches.
Supported dual-homed FEX parent switches are the Nexus 6000 or 5500.
Provides flexibility to mix all three server NIC configurations (single NIC, Active/Standby, and NIC Port Channel).
Note: the Port Channel to an active/active server is a standard port channel, not configured as "vPC".
[Diagram: a Nexus parent switch pair with dual-homed Nexus 2000 FEX serving single-NIC, dual-NIC active/standby, and dual-NIC 802.3ad servers]
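For the dual-homed (Enhanced vPC) case, the same FEX is associated from both Nexus 5500/6000 peers and its fabric port-channel is configured as a vPC; a rough sketch, assuming FEX 101 is defined as in the earlier example and using hypothetical numbering, is shown below. The configuration must match on both peers.

! Dual-homed FEX sketch: apply on BOTH vPC peer switches
interface ethernet 1/1-2
  channel-group 101
interface port-channel 101
  switchport mode fex-fabric
  fex associate 101
  vpc 101
! Host interfaces are defined identically on both peers.
! A port-channel to an 802.3ad server uses a standard channel-group on the
! FEX host ports, with no "vpc" statement (per the note above).
interface ethernet 101/1/10
  channel-group 10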
- 14. Access Pod Features: Unified Ports and FCoE
Unified Ports allow a physical port to be configured to support either native Fibre Channel or Ethernet.
The SFP+ optic needs to be chosen to support the setting of the port.
Fibre Channel over Ethernet (FCoE) allows encapsulation and transport of Fibre Channel traffic over a shared Ethernet network.
Traffic may be extended over multi-hop FCoE, or directed to an FC SAN.
SAN "A" / "B" isolation is maintained across the network.
[Diagram: servers with CNAs connect over FCoE links through FEX to Nexus 5500 Ethernet/FC switches, which carry Ethernet traffic and send Fibre Channel traffic to SAN-A, SAN-B, and a Fibre Channel disk array]
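On a Nexus 5500UP, the Unified Port and FCoE behavior above maps roughly to the following sketch; slot/port numbers, VLAN/VSAN IDs, and the server-facing interface are hypothetical, and changing a port type between Ethernet and FC requires a reload.

! Unified Ports: carve the last ports of the slot as native Fibre Channel
slot 1
  port 31-32 type fc
! FCoE: map a VLAN to a VSAN and bind a virtual FC interface to the CNA-facing port
feature fcoe
vsan database
  vsan 10
vlan 100
  fcoe vsan 10
interface ethernet 1/5
  switchport mode trunk
  switchport trunk allowed vlan 1,100
interface vfc 5
  bind interface ethernet 1/5
  no shutdown
vsan database
  vsan 10 interface vfc 5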
- 15. Planning Physical Data Center Pod Requirements
Plan for growth in a modular, pod-based, repeatable fashion.
Your own "pod" definition may be based on compute, network, or storage requirements.
How many current servers/racks are there, and what is the expected growth?
Map physical Data Center needs to a flexible communication topology.
Nexus switching at Middle or End of Row will aggregate multiple racks of servers with FEX.
[Diagram: today's server racks growing into tomorrow's Data Center floor; a network/storage rack with (2) N5548UP, patching, storage arrays, a terminal server and management switch, plus compute racks of (32) 1RU rack servers with (2) N2232 FEX each]
- 16. Data Center Service Integration Approaches
Data Center Service Insertion Needs
Firewall, Intrusion Prevention
Application Delivery, Server Load Balancing
Network Analysis, WAN Optimization
Physical Service Appliances
– Typically introduced at the Layer 2/3 boundary or Data Center edge.
– Traffic direction with VLAN provisioning, Policy-Based Routing, or WCCP (a PBR sketch follows below).
– Use PortChannel connections to vPC.
– Static routed through vPC, or transparent.
[Diagram: physical DC service appliances (firewall, ADC/SLB, etc.) attached at the network core above the Layer 2/3 boundary]
Virtualized Services
– Deployed in a distributed manner along with virtual machines.
– Traffic direction with vPath and Nexus 1000v.
[Diagram: virtualized servers with Nexus 1000v and vPath providing virtual DC services in software below the L3/L2 boundary]
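As one example of the traffic-direction options above, a Policy-Based Routing redirect toward a physical appliance might be sketched as follows on a Nexus platform that supports PBR; the ACL, route-map name, subnet, and next-hop address are all hypothetical.

! PBR sketch: steer traffic from a server VLAN to a firewall/ADC next hop
feature pbr
ip access-list SERVERS-TO-SERVICE
  permit ip 10.10.10.0/24 any
route-map REDIRECT-TO-APPLIANCE permit 10
  match ip address SERVERS-TO-SERVICE
  set ip next-hop 10.10.99.1
! Apply the policy on the SVI where the traffic enters the Layer-3 boundary
interface vlan 10
  ip policy route-map REDIRECT-TO-APPLIANCE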
- 17. Session Agenda
Midsize Data Center Requirements
– Goals and Challenges
– Fabric Requirements
Starting Point: The Access Pod
– Compute and Storage Edge Requirements
– Key Features
Single Pod Design Examples
– Nexus 5500, 6000, 7000-based
– vPC Best Practices
Moving to a Multi-Tier Fabric
– Spine/Leaf Designs
– Best Practices with FabricPath
- 18. Single Layer Data Center, Nexus 5500
Dedicated Nexus 5548-based DC switch pair (or 5596 for higher port count)
• Unified Ports support native Fibre Channel, FCoE, iSCSI, or NAS storage.
• Non-blocking, line-rate 10Gbps Layer-2 switching with low latency.
• Up to 160Gbps Layer-3 switching capacity with the L3 daughter card.
• Nexus 5500 supports physical FEX, Adapter-FEX, and VM-FEX capabilities.
Notes:
OTV/LISP DCI services may be provisioned on separate Nexus 7000 or ASR 1000 WAN routers.
ISSU is not supported with the Layer-3 module in 5500 switches.
[Diagram: a Nexus 5548 pair at the L3/L2 boundary connects client access, Campus, and WAN/DCI to FC, FCoE, and iSCSI/NAS storage, 10-GigE and 10 or 1-Gig attached UCS C-Series servers, and 1Gig/100M servers via Nexus 2000 FEX]
- 19. Single Layer Data Center, Nexus 6001
Dedicated Nexus 6001-based Data Center switch pair
Nexus 6001 benefits:
Integrated line-rate Layer-3
Native 40-Gig capability
Low ~1us switch latency at Layer-2/3
Up to 16 full-rate SPAN sessions
Greater 10GigE port density (at 1-RU)
Example Design Components:
2 x Nexus 6001, Layer-3 and Storage Licenses
4 x Nexus 2232PP/TM-E (10-GigE FEX)
2 x Nexus 2248TP-E (1-GigE FEX)
Note: Native Fibre Channel storage support would require a separate Nexus 5500 or MDS SAN.
[Diagram: a Nexus 6001 pair at the L3/L2 boundary connects client access, Campus, and WAN/DCI to FCoE and iSCSI/NAS storage, 10-GigE and 10 or 1-Gig attached UCS C-Series servers, and 1Gig/100M servers via Nexus 2000 FEX]
- 20. Single Layer Data Center plus UCS Fabric
Alternate Server Edge 1: UCS Fabric Interconnects with Blade and Rack Servers
• Typically 4 – 8 UCS Chassis per Fabric Interconnect pair; the maximum is 20.
• UCSM can also manage C-Series servers through 2232PP FEX to the UCS Fabric.
• Dedicated FCoE uplinks from the UCS FI to the Nexus 6001 for FCoE SAN access.
• The Nexus switching layer provides inter-VLAN routing, upstream connectivity, and storage fabric services.
• Example DC Switching Components:
2 x Nexus 6001
Layer-3 and Storage Licensing
2 x Nexus 2232PP/TM-E
[Diagram: a Nexus 6001 pair at the L3/L2 boundary with FCoE and iSCSI/NAS storage; UCS Fabric Interconnects connect UCS B-Series chassis and UCSM-managed C-Series servers, plus Nexus 2000 FEX]
- 21. Single Layer Data Center plus B22 FEX
Alternate Server Edge 2: HP, Dell, or Fujitsu Blades Example with B22 FEX
• B22 FEX allows Fabric Extension directly into compatible 3rd-party chassis.
• Provides a consistent network topology for multiple 3rd-party blade systems and non-UCSM rack servers.
• FCoE-based storage, or a Nexus 5500/MDS SAN connected to server HBAs.
• Example Components:
2 x Nexus 6001
L3 and Storage Licensing
4 x Nexus B22
• Server totals vary based on optional use of additional FEX.
[Diagram: a Nexus 6001 pair at the L3/L2 boundary with FCoE and iSCSI/NAS storage, Cisco B22 FEX for blade chassis access, and UCS C-Series rack servers]
- 22. Flexible Design with Access Pod Variants
Mix and match Layer-2 compute connectivity for migration or scale requirements
Nexus switching and FEX provide operational consistency.
[Diagram: access pod variants side by side, ranked by features, value, and physical consolidation: UCS-managed blade and rack, rack server access with FEX, B22 FEX with 3rd-party blade servers, and 3rd-party blades with pass-through and FEX]
- 23. Single Layer Data Center, Nexus 6004
Positioned for rapid scalability and a 40-GigE Fabric
Nexus 6004 Benefits:
Includes 48 40-GigE QSFP or 192 10-GigE ports, expandable up to 96 40-GigE or 384 10-GigE
Integrated line-rate Layer-3
Native 40-Gig switch fabric capability
Low ~1us switch latency at Layer-2/3
Line-rate SPAN at 10/40 GigE
Example Components:
2 x Nexus 6004, 24 40G or 96 10G ports active
L3 and Storage Licensing
8 x Nexus 2232PP/TM-E
Note: FCoE, iSCSI, and NAS storage are supported at release; a 6004 native FC module is on the roadmap.
[Diagram: a Nexus 6004 pair at the L3/L2 boundary connects client access, Campus, and WAN/DCI to FCoE and iSCSI/NAS storage and 10 or 1-Gig attached UCS C-Series servers]
- 24. Single Layer Data Center, Nexus 7004
Highly Available Virtualized Chassis Access/Aggregation Model
Benefits of Nexus 7004 with F2 I/O Modules:
Supervisor High Availability
Layer-2/3 ISSU
Virtual Device Contexts (VDC)
PBR, WCCP for service integration
FabricPath support for future expansion
Example Components:
2 x Nexus 7004, dual SUP-2/2e
Dual F248XP or XT (96x10G per chassis)
Layer-3 Licensing
4 x Nexus 2232PP
Note: For native Fibre Channel or FCoE, add a Nexus 5500 access layer or MDS SAN.
[Diagram: a Nexus 7004 pair at the L3/L2 boundary connects client access, Campus, and WAN/DCI to iSCSI/NAS storage and 10 or 1-Gig attached UCS C-Series servers]
- 26. Virtual Port Channel and Layer-2 Optimizations
What features to enable?
• Autorecovery: Enables a single vPC peer to bring up port channels after power outage scenarios.
• Orphan Port Suspend: Allows non-vPC ports to fate-share with vPC, enabling consistent behavior for Active/Standby NIC teaming.
• vPC Peer Switch: Allows vPC peers to behave as a single STP Bridge ID (not required with vPC+).
• Unidirectional Link Detection (UDLD): Best practice for fiber port connectivity to prevent one-way communication (use "normal" mode).
[Diagram: vPC domain configured with autorecovery and vpc peer switch; orphan ports identified for active/standby teaming alongside dual-NIC 802.3ad servers]
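Tying the Layer-2 recommendations above to configuration, a minimal sketch could look like the following; the domain number and interface are placeholders, and orphan-port suspend is applied only to single-attached (non-vPC) server ports.

! vPC Layer-2 optimization sketch
vpc domain 10
  peer-switch
  auto-recovery
! UDLD for fiber inter-switch links (normal mode, per the recommendation above)
feature udld
! Fate-share an orphan (non-vPC) port with vPC, e.g. an active/standby NIC
interface ethernet 1/10
  vpc orphan-port suspend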
- 27. Virtual Port Channel and Layer-3 Optimizations
What features to enable?
• vPC and HSRP: Keep HSRP timers at defaults; vPC enables active/active HSRP forwarding.
• vPC Peer Gateway: Allows the peers to respond to the HSRP MAC, as well as the physical MACs of both peers.
• IP ARP Synchronize: Proactively synchronizes the ARP table between vPC peers over Cisco Fabric Services (CFS).
• Layer-3 Peering VLAN: Keep a single VLAN for IGP peering between N5k/6k vPC peers on the peer link. (On N7k a separate physical link can also be used.)
• Bind-VRF: Required on Nexus 5500, 6000 for multicast forwarding in a vPC environment. (Not required if using vPC+ with FabricPath.)
[Diagram: a vPC domain at the L3/L2 boundary configured with peer gateway and ip arp sync, with Layer-3 peering to the upstream network]
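The Layer-3 recommendations above map roughly to this sketch; addresses, VLAN IDs, and the choice of OSPF as the IGP are illustrative assumptions.

! vPC Layer-3 optimization sketch
vpc domain 10
  peer-gateway
  ip arp synchronize
feature interface-vlan
feature hsrp
feature ospf
router ospf 1
! Server gateway SVI: HSRP with default timers, active/active forwarding with vPC
interface vlan 10
  ip address 10.10.10.2/24
  hsrp 10
    ip 10.10.10.1
! Single dedicated VLAN for IGP peering between the vPC peers across the peer link
vlan 99
interface vlan 99
  ip address 10.10.99.1/30
  ip router ospf 1 area 0.0.0.0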
- 28. Session Agenda
Midsize Data Center Requirements
– Goals and Challenges
– Fabric Requirements
Starting Point: The Access Pod
– Compute and Storage Edge Requirements
– Key Features
Single Pod Design Examples
– Nexus 5500, 6000, 7000-based
– vPC Best Practices
Moving to a Multi-Tier Fabric
– Spine/Leaf Designs
– Best Practices with FabricPath
- 29. Migration from single to dual-layer switching
Moving from single to dual-layer models:
• Larger switches (Nexus 7000, 6004) are more suited to becoming spine/aggregation.
• Smaller switches (Nexus 5500, 6001) are more suited to becoming leaf/access.
• Starting with larger switches eliminates the need to move SVIs (the Layer-3 gateway).
• Aggregation switches can support access switch connections, FEX, and direct-attached servers during migration.
[Diagram: a Nexus 7004 or 6004 single layer, or a Nexus 5500 or 6001 single layer, migrating to a Nexus 7004 or 6004 spine plus 6001 or 5500 leaf dual-layer DC]
- 30. Oversubscription: Balancing Cost and Performance
Oversubscription:
Most servers will not be consistently filling a 10 GigE interface.
A switch may be line-rate non-blocking, but still introduce oversubscription into an overall topology by design.
Consider Ethernet-based storage traffic when planning ratios; don't be overly aggressive.
Example device-oriented numbers with all ports active:
Nexus 6001: 48x10Gig + 4x40Gig uplink, 48:16 or 3:1 topology-level oversubscription.
Nexus 2232PP FEX: 32x10Gig + 8x10Gig uplink, 32:8 or 4:1 topology-level oversubscription.
Actual oversubscription can be controlled by how many ports and uplinks are physically connected (a worked example follows below).
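As a rough worked example of combining these ratios (illustrative numbers only): a fully cabled 2232PP carries 32 x 10G of server traffic over 8 x 10G uplinks (4:1), and a 6001 with 48 x 10G edge-facing ports and 4 x 40G uplinks runs at 480G:160G (3:1), so a server on that FEX could see roughly 4:1 x 3:1 = 12:1 oversubscription toward the next tier in the worst case. Connecting fewer host ports, or more uplinks, brings the ratio down.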
- 31. Working with 40 Gigabit Ethernet
Nexus 6000 and 7000 support QSFP-based 40 Gigabit Ethernet interfaces.*
Nexus 6004 at FCS provides QSFP ports only, but splitter cables can be used to provision 4x10GigE ports out of 1 QSFP.
40 Gigabit Ethernet cable types:
• Direct-attach copper [QSFP <-> QSFP] and [QSFP <-> 4 x SFP+]. Passive cables at 1/3/5m, active cables at 7 and 10m.
• SR4 uses bit-spray over 4 fiber pairs within a 12-fiber MPO/MTP connector to reach up to 100/150m on multimode OM3/OM4.
• CSR4 is a higher-powered SR4 optic with reach up to 300/400m on multimode OM3/OM4.
• LR4 uses CWDM to reach up to 10km on a single-mode fiber pair.
* Verify platform-specific support of specific optics/distances.
[Photos: QSFP-40G-CR4 direct-attach cables, QSFP+ to 4-SFP+ direct-attach splitter cables, and QSFP-40G-SR4 with direct MPO and 4x10 MPO-to-LC duplex splitter fiber cables]
- 32. Value of FabricPath/vPC+ in Spine/Leaf Designs
Using FabricPath with a traditional DC Topology
Aggregation becomes Spine
Access becomes Leaf
FabricPath Benefits:
Ease of configuration
Completely eliminates STP from running between Leaf and Spine
No Orphan Port isolation on Access (Leaf) switch vPC Peer-link loss
Improved Multicast support, no "bind-vrf" needed (N5500/6000), also adds PIM-SSM capability
Greater flexibility for future growth and change in the topology
[Diagram: FabricPath between Spine and Leaf switches, with vPC+ and FEX at the Leaf toward UCS rack servers and VMs]
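Enabling FabricPath between spine and leaf is relatively compact; a minimal sketch, assuming the Enhanced Layer-2 license and using illustrative switch-id, VLAN, and interface values, is shown below.

! FabricPath sketch (applied on spine and leaf switches)
install feature-set fabricpath
feature-set fabricpath
fabricpath switch-id 101
! VLANs that will be carried over the FabricPath core
vlan 10-20
  mode fabricpath
! Core-facing links between leaf and spine
interface ethernet 1/49-52
  switchport mode fabricpath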
- 33. Dual Layer Nexus 6000 Data Center
40-GigE Optimized low-latency switch fabric
Data Center switching control plane distributed over dual layers.
Aggregation (Spine): Layer-3 and services boundary, scale point to expand the fabric.
Access (Leaf): Physical TOR or FEX aggregation point, Layer-2 virtualization services.
Multi-hop FCoE with dedicated links.
Example Components:
2 x Nexus 6004, 2 x Nexus 6001
Layer-3 and Storage Licensing
12 x Nexus 2232PP/TM-E
Enable FabricPath between tiers for configuration simplicity and future scale.
[Diagram: Nexus 6004 spine at the L3/L2 boundary toward WAN/DCI, Nexus 6001 leaf switches, FCoE and iSCSI/NAS storage, and 10 or 1-Gig attached UCS C-Series servers]
- 34. Dual Layer Nexus 6000 Data Center, Expansion
40-GigE Optimized low-latency switch fabric
Data Center switching control plane distributed over dual layers.
Aggregation (Spine): Layer-3 and services boundary, scale point to expand the fabric.
Access (Leaf): Physical TOR or FEX aggregation point, Layer-2 virtualization services.
Multi-hop FCoE with dedicated links.
Example Components:
2 x Nexus 6004, 4 x Nexus 6001
Layer-3 and Storage Licensing
24 x Nexus 2232PP/TM-E
Enable FabricPath between tiers for configuration simplicity and future scale.
[Diagram: Nexus 6004 spine at the L3/L2 boundary toward WAN/DCI, four Nexus 6001 leaf switches providing rack server access with FEX, and FCoE and iSCSI/NAS storage]
- 35. Dual Layer Nexus 7004 Data Center
High Availability switching fabric with chassis-based Spine
Data Center switching control plane distributed over dual layers.
Aggregation (Spine): Layer-3 and services boundary, scale point to expand the fabric.
Access (Leaf): Physical TOR or FEX aggregation point, Layer-2 virtualization services.
FCoE support on dedicated links and VDC.
Example Components:
2 x Nexus 7004, 4 x Nexus 5548
Layer-3, VDC and Storage Licensing
24 x Nexus 2232PP/TM-E
Enable FabricPath between tiers for configuration simplicity and future scale.
[Diagram: Nexus 7004 spine at the L3/L2 boundary toward WAN/DCI, Nexus 5500 or 6001 leaf switches providing rack server access with FEX, and FCoE and iSCSI/NAS storage]
- 36. Dual Layer Nexus 7000/6000 Fabric with VDCs
Virtual Device Contexts partitioning the physical switch
Nexus 7009 FabricPath Spine:
Highly available design with dual supervisors
Add leaf pairs for greater end node connectivity
Add spine nodes for greater fabric scale and HA
FCoE support over dedicated links and VDC
Nexus 7000 Series Benefits:
Integrated DCI support with OTV, LISP, and MPLS
Feature-rich switching fabric with VDCs, FEX, vPC, FabricPath, FCoE
Nexus 7000 Service Module capability starting with the Network Analysis Module (6.2/Freetown)
Investment protection of a chassis-based solution
[Diagram: a Nexus 7009 spine partitioned into Core, OTV, Spine, and Storage VDCs at the L3/L2 boundary toward the WAN, with Nexus 6001 leaf switches providing rack server access with FEX and FCoE and iSCSI/NAS storage]
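Carving a Nexus 7000 chassis into contexts like those pictured is done from the default/admin VDC; a minimal sketch with hypothetical VDC names and interface allocations (VDC licensing applies) is shown below.

! VDC partitioning sketch (run from the default/admin VDC)
vdc OTV
  allocate interface ethernet 3/1-2
vdc Storage
  allocate interface ethernet 3/3-4
! Attach to a context to configure it
switchto vdc OTV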
- 37. FabricPath with vPC+ Best Practices Summary
• Manually assign physical and emulated switch IDs to easily identify switches for operational support.
• Configure all leaf switches with STP root priority, or use pseudo-priority to control STP.
• Use vPC+ at the Leaf for port channels, and also at the Layer-2/3 spine to provide active/active HSRP.
• Ensure all access VLANs are "mode fabricpath" to allow forwarding over the vPC+ peer-link, which is a FabricPath link.
• Set FabricPath root-priority on the Spine switches for multi-destination trees.
• Convergence optimizations: set the linkup-delay timer to 60 seconds; set isis spf-interval 50 50 50 and lsp-gen-interval 50 50 50.
[Diagram: spine pair as vPC Domain 10 with FP emulated switch-id 10 at the L3/L2 boundary; leaf pair with FabricPath switch-ids 101 and 102 as vPC Domain 100 with FP emulated switch-id 100]
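A sketch pulling the above together on a leaf pair and a spine might look as follows; switch-ids and domain numbers are hypothetical, and the exact FabricPath IS-IS timer syntax should be verified against the platform and NX-OS release in use.

! Leaf pair: vPC+ is a vPC domain plus a FabricPath emulated switch-id
vpc domain 100
  fabricpath switch-id 100
  peer-keepalive destination 192.168.0.2 source 192.168.0.1 vrf management
! Spine: prefer it as root for FabricPath multi-destination trees, and tune convergence
fabricpath timers linkup-delay 60
fabricpath domain default
  root-priority 255
  spf-interval 50 50 50
  lsp-gen-interval 50 50 50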
- 38. Summary: Scalable Midsize Data Center Designs
• Midsize Data Centers can benefit from the same technology advances as larger ones.
• Increase the stability of larger Layer-2 workload domains using vPC, FabricPath, and vPC+.
• Start with a structured approach that allows modular design as requirements grow.
• Evaluate Nexus switching options based on feature support, scale, and performance.
• Plan ahead for re-use of components in new roles as needs change.
- 39. Complete Your Online Session Evaluation
Give us your feedback and you could win fabulous prizes. Winners announced daily.
Receive 20 Cisco Daily Challenge points for each session evaluation you complete.
Complete your session evaluation online now through either the mobile app or internet kiosk stations.
Maximize your Cisco Live experience with your free Cisco Live 365 account. Download session PDFs, view sessions on-demand, and participate in live activities throughout the year. Click the Enter Cisco Live 365 button in your Cisco Live portal to log in.