Like a tsunami, the proliferation of data threatens to overwhelm the capacity of the average storage area network (SAN). According to a 2014 IDC study, worldwide data will grow from 4.4 zettabytes today to a whopping 44 zettabytes by 2020.
How do you create a SAN that can efficiently handle a 10-fold increase in data in just 5 years without breaking your IT budget?
Designing a SAN for the Next Decade
1. Designing and Deploying Converged Storage Area Networks
Bhavin Yadav, Technical Marketing Engineer
Storage Networking Technical Marketing Team
April 2015
5. What Is FCoE?
It's Fibre Channel
• From a Fibre Channel standpoint, it's FC connectivity over a new type of cable called… Ethernet
• From an Ethernet standpoint, it's yet another ULP (Upper Layer Protocol) to be transported
The FCoE Logical End Point performs encapsulation of FC frames for FCoE transport and de-encapsulation of FCoE frames back to native FC.
(Protocol stacks: native FC carries FCP over FC-4 ULP Mapping, FC-3 Generic Services, FC-2 Framing & Flow Control, FC-1 Encoding and FC-0 Physical Interface; FCoE carries FCP over FC-4 ULP Mapping, FC-3 Generic Services and FC-2 Framing & Flow Control, then the FCoE Logical End Point, Ethernet Media Access Control and the Ethernet Physical Layer.)
6. Frame Format
Standard FC Frame: SOF | Frame Header | Data Field | CRC | EOF
Standard FCoE Frame: Eth Header | FCoE Header | Frame Header | Data Field | CRC | EOF | FCS
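To make the encapsulation relationship concrete, here is a minimal Python sketch that mirrors the field order of the two frames above; the field names come from the diagram, and the contents are placeholders rather than the exact FC-BB-5 bit layout.

```python
# A minimal sketch (field names from the diagram above; contents and sizes are
# placeholders, not the exact FC-BB-5 bit layout).

FC_FRAME = ["SOF", "Frame Header", "Data Field", "CRC", "EOF"]

def encapsulate_fcoe(fc_frame):
    """Wrap a native FC frame for Ethernet transport: the FC frame header,
    data field and CRC are carried unmodified; the SOF/EOF delimiters are
    handled by the FCoE encapsulation, and an Ethernet FCS closes the frame."""
    header, data, crc = fc_frame[1], fc_frame[2], fc_frame[3]
    return ["Eth Header", "FCoE Header", header, data, crc, "EOF", "FCS"]

print(" | ".join(encapsulate_fcoe(FC_FRAME)))
# Eth Header | FCoE Header | Frame Header | Data Field | CRC | EOF | FCS
```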
7. Standards for FCoE
FCoE is fully defined in the FC-BB-5 standard
FCoE works alongside additional technologies to make I/O consolidation a reality
• T11: FC-BB-5 defines FCoE and FC on other network media (published May 2010)
• IEEE 802.1 (DCB): PFC for lossless Ethernet (802.1Qbb), ETS for priority grouping (802.1Qaz) and DCBX for configuration verification (802.1Qaz) (published Fall 2011)
FCoE is finalized in all standards bodies
9. Key Design Aspects
• Multi-Tenancy and Virtualization: VSAN
• High Availability / Redundancy: Port Channels
• Scalability: Over-Subscription, NPV vs. Switch mode, Smart Zoning
• Flexibility and Performance: ISLs and Uplinks using 10G FCoE and 8G/10G FC
11. HA / Fault Tolerance using Port Channels
Port Channels are a link aggregation technology that provides increased ISL/Uplink scalability along with increased HA
• Up to 16 ISLs/Uplinks per Port Channel
• Port Channel remains active until all links fail
• Load balancing using SID, DID and OXID (see the sketch after this list)
• Port Channel members can span ASICs, port groups and line cards
• Port Channels can carry multiple VSANs (i.e. Trunking)
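The exchange-based load balancing called out above can be pictured as a hash over the SID, DID and OXID fields choosing one member link per exchange. The sketch below is a toy illustration of that idea, not the actual hardware hash used on MDS or Nexus switches.

```python
# Illustrative only: exchange-based load balancing picks a Port Channel member
# by hashing Source ID (SID), Destination ID (DID) and Originator Exchange ID
# (OXID). This is a toy hash, not the switch hardware algorithm.

def select_member(sid: int, did: int, oxid: int, num_links: int) -> int:
    """Return the index of the ISL that carries this exchange.
    All frames of one exchange hash identically, so they stay in order."""
    return hash((sid, did, oxid)) % num_links

# A 4-link Port Channel: different exchanges spread across links,
# while frames within an exchange always take the same link.
for oxid in (0x10, 0x11, 0x12, 0x13):
    print(f"OXID {oxid:#x} -> link {select_member(0x010203, 0x040506, oxid, 4)}")
```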
12. Understanding Over-Subscription
• Fan-In Ratio / Fan-Out Ratio
• Port Over-Subscription
• Bandwidth Over-Subscription (see the worked example after this list)
  • Host to Edge
  • Edge to Core
  • Core to Storage
• Depends on
  • Application Type
  • Application IOPs
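Bandwidth over-subscription at each tier is simply host-facing bandwidth divided by uplink bandwidth. A minimal sketch with hypothetical port counts and speeds:

```python
# Hypothetical example: bandwidth over-subscription from host edge to core.
# Ratio = total host-facing bandwidth / total uplink (ISL) bandwidth.

def oversubscription(host_ports: int, host_speed_gbps: float,
                     uplinks: int, uplink_speed_gbps: float) -> float:
    return (host_ports * host_speed_gbps) / (uplinks * uplink_speed_gbps)

# e.g. 48 hosts at 10G FCoE on an edge switch, 8 x 8G FC uplinks to the core
ratio = oversubscription(48, 10, 8, 8)
print(f"Edge-to-core over-subscription: {ratio:.1f}:1")   # 7.5:1
```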
13. Zoning for Scalability
• A zone has “members”
  • Any one member can talk to any other member
  • Each pair consumes an ACL entry in TCAM
  • Result: n*(n-1) entries
• Inefficient Management
• Wasted Resources
• Recommendation: 1-1 zoning
(Figure: a many-to-many zone containing i1, i2, t1 and t2 versus one-to-one zones i1-t1 and i2-t2)
14. How does Smart-Zoning help?
• Intelligent: uses device types to create only Initiator-Target (I-T) ACLs
• Resource Efficiency: optimized TCAMs, reduced Zone DB size
• Easy Management: one zone, one change
• Cross-talk elimination
• Increased Fabric Scalability
(Figure: with smart zoning, i1 and i2 can each reach t1 and t2, but initiator-to-initiator and target-to-target entries are never programmed. A comparison of the entry counts follows below.)
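Following the slides' own counting (one TCAM entry per ordered member pair), a quick comparison shows how many entries conventional zoning consumes versus smart zoning for the same devices. The member counts below are hypothetical.

```python
# TCAM entry counts: conventional zoning programs every ordered member pair,
# smart zoning programs only initiator <-> target pairs.

def conventional_entries(members: int) -> int:
    return members * (members - 1)        # n*(n-1), as on the zoning slide

def smart_zoning_entries(initiators: int, targets: int) -> int:
    return 2 * initiators * targets       # one entry per direction of each I-T pair

# Hypothetical zone with 8 initiators and 2 targets (10 members total):
print(conventional_entries(10))           # 90 TCAM entries
print(smart_zoning_entries(8, 2))         # 32 TCAM entries
```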
15. Fabric Scalability Challenges
Explosion with Fabric, Blade and TOR Switches
• Scalability
  • Each fabric/blade switch uses a single Domain ID
  • Theoretical maximum Domain IDs per fabric is 239
  • Supported number of domains is typically smaller, ~40-80
• Manageability
  • More FC domains / switches to manage
  • Shared management of blade switches between storage and server administrators
16. Scalability using NPV mode
• NPV switch does not require a Domain ID and does not participate in the FC control plane
• NPV switch acts as a host aggregator to an upstream NPIV FC or FCoE switch
• Supported on MDS Fabric and Nexus switches (MDS 9148S, Nexus 5500, 5600)
• Transparent interoperability with other vendors
• Reduces number of switches to manage
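To see why NPV relieves the domain-ID pressure described on the previous slide, compare how many domain IDs a fabric consumes with edge switches in switch mode versus NPV mode. The switch counts below are hypothetical, chosen only to illustrate the arithmetic.

```python
# Hypothetical illustration: domain IDs consumed per fabric with and without NPV.
# In switch mode every switch takes a domain ID; in NPV mode only the core
# NPIV switches do (NPV edge switches log in like hosts).

core_switches = 2
edge_switches = 60          # e.g. blade / top-of-rack switches

switch_mode_domains = core_switches + edge_switches
npv_mode_domains = core_switches

print(f"Switch mode: {switch_mode_domains} domain IDs")  # 62, near the ~40-80 supported range
print(f"NPV mode:    {npv_mode_domains} domain IDs")     # 2
```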
17. ISLs to meet High Performance
Building High Performance ISLs using 8G / 10G FC and 10G FCoE
Protocol    Clocking (Gbps)   Encoding (data/sent)   Data Rate (Gbps)   Data Rate (MB/s)
8G FC       8.500             8b/10b                 6.8                850
10G FC      10.51875          64b/66b                10.2               1275
10G FCoE    10.3125           64b/66b                10.0               1250
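The Data Rate column follows directly from the clocking and the encoding overhead (8b/10b transmits 10 bits for every 8 data bits, 64b/66b transmits 66 for every 64). The sketch below reproduces the table's values; MB/s uses decimal megabytes, as in the table.

```python
# Reproduce the Data Rate column: usable rate = line clocking * encoding efficiency.
# MB/s uses the decimal convention (1 MB = 10^6 bytes), matching the table.

links = [
    ("8G FC",    8.500,    8 / 10),   # 8b/10b encoding
    ("10G FC",   10.51875, 64 / 66),  # 64b/66b encoding
    ("10G FCoE", 10.3125,  64 / 66),
]

for name, clocking_gbps, efficiency in links:
    data_rate_gbps = clocking_gbps * efficiency
    mb_per_s = data_rate_gbps * 1000 / 8
    print(f"{name:9s} {data_rate_gbps:5.1f} Gbps  {mb_per_s:6.0f} MB/s")
# 8G FC       6.8 Gbps     850 MB/s
# 10G FC     10.2 Gbps    1275 MB/s
# 10G FCoE   10.0 Gbps    1250 MB/s
```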
18. Scale Numbers on MDS 9700 and Nexus 7x

                                 Fibre Channel (FC)                                Fibre Channel over Ethernet (FCoE)
                                 MDS NX-OS        MDS NX-OS       MDS NX-OS        N7x NX-OS    N7x NX-OS
                                 6.2.5 & earlier  6.2.7 Phase-1   6.2.9 Phase-2    6.2(2)       Gibraltar
Flogi per line card              500              1,000           1,000            500          1,000
Flogi per switch                 2,500            4,000           4,000            2,500        4,000
Zone members                     16,000           30,000          30,000           16,000       30,000
Zones                            8,000            16,000          16,000           8,000        16,000
Logins per fabric                10,000           20,000          20,000           10,000       20,000
FCNS entries per fabric          10,000           20,000          20,000           n/a*         20,000
No. of Domains                   60               60              80               60           80
NPV switches per NPIV core       105              105             105              105          105
Device Alias                     8,000            8,000           12,000           8,000        12,000
No. of VSANs per fabric          80               80              80               80 (VLAN)    80 (VLAN)
No. of Zone sets per switch      500              500             500              500          500

* Not part of the test matrix.
20. • Fibre Channel Design
• Edge - Core
• Edge - Core - Edge
• Hybrid Design
• Top-of-Rack / End-of-Row FC with Core (FC / FCoE)
• Top-of-Rack / End-of-Row FCoE with Core (FC / FCoE)
• FCoE Design
• Multi-hop FCoE
• Dynamic FCoE
Best Practice Designs
21. Network and Fabric SAN Design
Two- or Three-Tier Topology
• "Edge-Core" or "Edge-Core-Edge" topology
• Servers connect to the edge switches
• Storage devices connect to one or more core switches
• HA achieved with two physically separate, but identical, redundant SAN fabrics
22. Top-of-Rack / End-of-Row Design
• Switch Mode: all FC control plane features – VSAN, Zones, Domain ID, etc.
• NPV Mode:
  • No FC control plane participation
  • Limitation of NPV switches (105)
  • Port pinning
23. FC Design: Hybrid (FC with FCoE)
• Multi-Protocol Support
• Future Proof
• Similar to edge-core design
• Nexus switch in switch mode or NPV mode (preferred)
• Connectivity to MDS with either FC or FCoE
24. • Edge-Core-Edge Design
• Nexus 7K acting as core switches
• LAN and SAN
• Storage, Hosts (UCS) are FCoE
• FCoE traverses from host to Storage
Multi-hop FCoE Design
25. UCS and Cisco SANs
• Using Nexus 2000 as Top-of-Rack, connected to N7K core, using FCoE
• Using Nexus FI as ToR, connected to MDS or N5K/6K/7K core, using FC / FCoE
31. • Consistent design principles
• Easily Scalable
• Built-In Redundancy
• Improved Performance
• Simplified Management
• Multi-Tenancy & Multi-Protocol Support
Network Convergence using Cisco SANs
33. Reference Material
• Whitepapers:
  • 16G Platform: http://www.cisco.com/c/en/us/products/storage-networking/mds-9700-series-multilayer-directors/white-paper-listing.html
  • 8G Platform: http://www.cisco.com/c/en/us/products/storage-networking/mds-9500-series-multilayer-directors/white-paper-listing.html
• Hyperlinks:
  • End-to-End FCoE Design Guide
  • Fibre Channel over Ethernet (FCoE)
  • Large SAN Design Best Practices Using Cisco MDS 9700 Series Multilayer Directors
  • Large SAN Design Best Practices Using Cisco MDS 9500 Series and 9710 Multilayer Directors
34. Reference Material
• Customer Case Studies
  • Listing Page
    • MDS 9700: http://www.cisco.com/c/en/us/products/storage-networking/mds-9700-series-multilayer-directors/case-study-listing.html
    • MDS 9500: http://www.cisco.com/c/en/us/products/storage-networking/mds-9500-series-multilayer-directors/case-study-listing.html
  • Case Studies
    • Claims Management Company Makes 10-Year SAN Investment
    • Empowering Education and Advanced Research
    • Boeing Boosts Network Performance While Reducing Costs
    • Making NetApp Engineering Network Compatible with Future Systems
    • Health Care System Meets Storage Demands with Converged Infrastructure