Document # TECHBRIEF2013005 v19 November, 2014 Copyright © 2014 IT Brand Pulse. All rights reserved. 
Where IT perceptions are reality 
Technology Brief 
Blade Server I/O and Workloads of the Future 
Comparing Cisco UCS and HP BladeSystem 
HP FlexFabric 20Gb 2-Port Adapters provided by Emulex
Executive Summary 
New Generation of Blade Servers and Workloads, Same HP Advantage 
HP and Cisco are the two most popular blade server brands on the planet. A big reason why is that the networks embedded in the HP BladeSystem and Cisco UCS products are the most powerful and flexible networks for virtualized workloads. 
On August 28th, HP announced new HP ProLiant Gen9 servers, including several enhancements to their HP BladeSystem I/O design. Shortly afterwards, on September 4th, Cisco announced long-awaited enhancements to UCS. 
The UCS enhancements centered around the UCS Mini blade system, which is targeted at SMBs and the edge of the enterprise. There were no significant changes to the 5108 chassis used for larger systems, which, after 5 years, is getting long in the tooth. With only 1.2Tb/s of mid-plane bandwidth, the 5108 is limited in its ability to support more than 8 servers and single links greater than 10Gb. 
The new HP BladeSystem c7000 Platinum chassis offers 7Tb/s of mid-plane bandwidth, with new support for 20GbE downlinks as well as 40GbE uplinks. The HP ProLiant Gen9 BladeSystem also takes converged networks to the next level with hardware offload of important new networking protocols supporting tunneling of L2 traffic over L3 networks, and scale-out file storage traffic. 
The new HP and Cisco blade systems are hitting the market just as hyperscale-driven applications and data center architectures are reaching the enterprise. Our conclusion? There’s a new generation of blade servers and workloads, but the same HP advantage. 
This Report Compares 3 Facets of Cisco UCS and HP BladeSystem I/O 
To set the stage for comparing the capabilities that will matter most in the future, this Technology Brief reviews the trend towards a new mix of applications and server workloads in Webscale private clouds. It then compares the three I/O capabilities which will differentiate blade servers in Webscale environments: 
1. Performance 
2. Consolidation 
3. Flexibility
Intel Xeon E5-2600 v3 
In 2014, the server industry reached a major inflection point when Intel launched a new generation of server processors, v3 of the Xeon E5-2600 family. At this inflection point, x86 server product lines are being refreshed, and new technologies are being introduced which complement the capabilities of the Xeon E5-2600. 
Complementary Technologies are what Differentiate Blade Server Offerings 
Given that HP and Cisco blade systems will feature the same Xeon E5-2600 processor, it’s the complementary technologies which will differentiate the systems. The factors expected to separate leaders from followers are 20GbE connectivity to servers, 40GbE uplinks from blade server chassis to the network, switchless connectivity to storage, and convergence of Ethernet, FCoE, native Fibre Channel, RDMA, and cloud tunneling protocols on the same port. Servers with the best implementations of these technologies will be better suited to handle traditional workloads, plus a new class of Webscale workloads. 
[Diagram: the inflection point in blade server differentiation. Virtualized servers: hierarchical networks, LAN/SAN convergence with FCoE, 10GbE. Webscale servers: virtual networks; converged cloud, RDMA, FC and Ethernet connectivity; 20GbE and 40GbE.]
Share Everything Applications + Share Nothing Applications 
Enterprise IT organizations, which for the most part have become private cloud builders, are blending traditional Enterprise and Hyperscale IT into a Webscale model. Traditional IT encompasses support for workloads such as SQL databases and ERP applications, with “share-everything” infrastructure featuring many VMs sharing physical servers and many servers sharing networked storage. 
Webscale IT must support traditional workloads as well as a new generation of workloads such as NoSQL databases and predictive analytics. Many of the new applications are designed to run in “share-nothing” distributed computing environments featuring scale-out server and storage clusters. 
Private cloud builders are also trending towards cloud platforms like OpenStack and vCloud. Cloud operating systems incorporate a software defined data center architecture which allows a single cloud operating system to manage servers, storage and networking systems in different data centers. As a result, new cloud tunneling protocols, such as VXLAN and NVGRE, are being deployed as a software defined datacenter foundation, along with a new generation of NICs which can offload the tunnel protocol processing. 
Workload Mix of the Future 
Data centers built with a Webscale architecture support traditional workloads in a share everything environment and new workloads in a distributed environment. 
Traditional IT + Hyperscale IT = Webscale IT
The Environment for Workloads of the Future 
The defining characteristic of a Webscale Private Cloud is data center infrastructure which efficiently supports two distinctly different application environments — a shared infrastructure environment and a distributed infrastructure environment. A Webscale Private Cloud also includes an overlapping environment with software defined (virtualized) servers, networking and storage. 
Converged Networks Make it Possible 
A key capability of blade servers in a Webscale Private Cloud is a higher level of network convergence. In Converged Networks 2.0, the next generation of network convergence, the RDMA network protocol for scale-out clusters and hardware offload of tunneling protocol processing for carrying L2 traffic over L3 networks are integrated as standard features in Webscale CNAs and/or switches. 
Webscale Private Cloud 
Webscale Private Cloud Environment 
Shared environments include servers heavily loaded with virtual machines, and networked storage shared by many servers. Distributed environments support database and application workloads spread across many servers, and scale-out storage. Cloud operating platforms such as vCloud and OpenStack are introducing management tools for a software defined data center, including software defined networks.
Anatomy of Blade Server I/O 
• Ethernet and Fibre Channel uplinks to LANs and SANs 
• Ethernet and Fibre Channel downlinks to mid-plane and server adapters 
• Embedded switches and/or pass-through modules 
• Mid-plane 
• Ethernet LAN-on-Motherboard (LOM) adapters 
• Converged Network Adapter (CNA) or Fibre Channel mezzanine adapters 
[Diagram: blade server chassis with 16 blade servers and 4 switches; 1 LOM adapter and 1 mezzanine adapter on each server.]
Application Performance Depends on a Healthy Network 
Every blade server has an entire network embedded to carry east-west traffic between servers, and north-south traffic to top-of-rack, end-of-row, and core switches upstream. The I/O performance of applications running on blade servers can differ significantly depending on the capabilities of their embedded networks.
The Blade Servers 
Cisco UCS and HP BladeSystem 
In the following pages we will compare the performance, network convergence, flexibility and software defined networking of the Cisco UCS in a 5108 chassis, and the HP BladeSystem in a c7000 Platinum chassis. 
The Products 
Blade Server Systems | Cisco UCS in 5108 Chassis | HP BladeSystem in c7000 Chassis 
Chassis Size | 6U | 10U 
Max. Blade Servers | 8 | 16 
Mid-plane Bandwidth | 1.2Tb/s | 7.168Tb/s 
Server Downlinks | 10Gb | 20Gb 
Chassis Uplinks | 10Gb | 10/40Gb 
Interconnect Options | Ethernet/FCoE | Ethernet/FCoE, native Fibre Channel, SAS, InfiniBand 
I/O Slots | 2 | 8
Comparing I/O Performance 
Why it Matters 
Meeting application performance service levels is directly related to the I/O performance of a blade server system. In addition, the new generation of servers with Xeon E5-2600 processors, hosting a generation of demanding new applications, needs higher bandwidth and lower latency I/O than ever before. And in Webscale private cloud environments, performance is needed more cost-effectively than ever before, bringing CPU efficiency to the forefront of important performance metrics. 
I/O Performance Metrics 
In the following pages, we will examine the capabilities of Cisco UCS and HP BladeSystem against the following I/O performance metrics: 
 Bandwidth 
 Useable Bandwidth 
 Latency 
 CPU Efficiency 
80GbE is Specmanship 
There are some discussions in the blogosphere about how UCS achieves 80Gb of bandwidth per blade. According to the Cisco UCS B200 M4 Blade Server Spec Sheet, that scenario refers to a Cisco B200 M4 blade configured with a VIC 1340 adapter plus an added mezzanine card (port expander), which allows four 10Gb links to each IO Module (2208 FEX) for a total of 80Gb of bandwidth (2 x 4 x 10Gb). 
40GbE is Expensive 
From the point of view of pure technology, 40GbE is a perfect solution for delivering the performance needed in a single server link and eliminating the need for teaming. But the cost per port for 40GbE network adapters may be up to 3x the cost per port of 10GbE adapters. In another case of specmanship, Cisco is promoting the availability of a 40Gb port on the new 6324 Fabric Interconnect (FI) for the UCS Mini. However, as of the writing of this report, the 40Gb port, called a Scalability Port, is not a native 40GbE port and can only be used to break out to four 1GbE or 10GbE SFP+ (4 x 1G or 4 x 10G) connections. In addition, this 40GbE port requires an expensive software license to activate. 
20GbE is Juuust Right 
A choice that has only recently become available to server architects is 20GbE. Each 20GbE port offers bandwidth equivalent to twenty 1GbE ports or two 10GbE ports. 
20GbE is juuuust right because a single 20GbE port is enough bandwidth for all but the most I/O intensive supercomputing applications, and is available for a fraction of the price of 40GbE technology. 
According to the Cisco UCS B200 M4 Blade Server Spec Sheet, all Cisco UCS 5108 mid-plane, FEX and FI network connectivity ports are currently 10GbE, including the 40Gb Scalability Port on the 6324 FI, which must be split into multiple 10GbE ports. 
The HP BladeSystem provides 20GbE links between blade server adapters and the chassis interconnects, as well as inter-switch links. With HP Flex-20 technology, Ethernet network adapters deliver twice the bandwidth of 10Gb adapters, while reducing the management overhead associated with multiple 10Gb adapters. 
With 20Gb downlinks, HP Virtual Connect FlexFabric-20/40 F8 Modules offer more than twice the throughput of other 10Gb extenders and fabric interconnects. In addition, ports on the HP Virtual Connect FlexFabric-20/40 F8 Modules can be dynamically configured to support Ethernet, Fibre Channel, or FCoE. 
I/O Bandwidth
Oversubscription 
Almost no Oversubscription with HP BladeSystem 
Oversubscription occurs when the aggregate I/O capacity of the adapter ports connected to chassis switch ports exceeds the capacity of the switch ports. The oversubscription ratio is the sum of the capacity of the adapter ports divided by the capacity of the chassis interconnect ports. Below you can see that if you actually configured 80Gb of bandwidth per UCS blade as mentioned above, you would be building a blade server network with 4:1 oversubscription. In contrast, a comparably configured HP BladeSystem results in 1.1:1 oversubscription, nearly a 4x lower oversubscription ratio than the Cisco configuration. 
Cisco UCS: Oversubscription = 4:1 
• Adapter side: 4 ports x 10Gb from VICs and 4 ports x 10Gb from expansion cards (80Gb) x 8 servers = 640Gb 
• Interconnect side: 8 ports x 10Gb from the mid-plane x 2 IO Modules = 160Gb 

HP BladeSystem: Oversubscription = 1.1:1 
• Adapter side: 2 ports x 20Gb from the FLOM + 2 ports x 20Gb from the mezzanine card x 16 servers = 1,280Gb 
• Mid-plane side: 16 ports x 20Gb from the mid-plane to 4 HP Virtual Connect Modules = 1,280Gb 
• Interconnect side: 4 HP Virtual Connect Modules, each with 4 x 40Gb ports + 8 x 10Gb ports + 2 x 20Gb ISL ports = 1,120Gb
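For readers who want to check the arithmetic, the following minimal sketch (illustrative only, not part of the original report) recomputes both ratios from the port counts and link speeds listed above.

```python
# Illustrative sketch: oversubscription = total adapter bandwidth / total
# chassis interconnect bandwidth, using the figures cited in this brief.

def oversubscription(adapter_gb: float, interconnect_gb: float) -> float:
    """Ratio of aggregate adapter bandwidth to chassis interconnect bandwidth."""
    return adapter_gb / interconnect_gb

# Cisco UCS 5108: 8 servers x (4 + 4) x 10Gb ports; 2 IO Modules x 8 x 10Gb ports
ucs_adapters = 8 * (4 + 4) * 10                      # 640 Gb
ucs_interconnect = 2 * 8 * 10                        # 160 Gb

# HP c7000: 16 servers x (2 + 2) x 20Gb ports; 4 VC modules x (4x40 + 8x10 + 2x20) Gb
hp_adapters = 16 * (2 + 2) * 20                      # 1,280 Gb
hp_interconnect = 4 * (4 * 40 + 8 * 10 + 2 * 20)     # 1,120 Gb

print(f"Cisco UCS: {oversubscription(ucs_adapters, ucs_interconnect):.1f}:1")     # 4.0:1
print(f"HP BladeSystem: {oversubscription(hp_adapters, hp_interconnect):.1f}:1")  # 1.1:1
```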
What Oversubscription Means 
Blade Server I/O Hits The Wall 
If you configured 80Gb of bandwidth per blade on both a Cisco UCS and HP BladeSystem, the Cisco 5108 chassis interconnects are oversubscribed with the second server. In contrast, fifteen HP blade servers can be configured before reaching the bandwidth limit of the HP BladeSystem c7000 Platinum chassis interconnects. 
Number of Blade Servers It Takes to Hit the Limit of Chassis Interconnect Bandwidth 
[Chart: chassis interconnect bandwidth of 1.12 Tb/s for the HP BladeSystem c7000 versus 160 Gb/s for the Cisco UCS 5108.]
Two fully configured UCS blade servers hit the limits of the 5108 fabric extenders (FEX). It takes fifteen fully configured HP ProLiant Gen 9 blade servers to hit the bandwidth limit of the HP FlexFabric Modules.
RDMA over Converged Ethernet (RoCE) 
InfiniBand networks were invented to overcome the need to plow through the Ethernet protocol stack to complete an I/O transaction. InfiniBand boosts performance by eliminating layers of the stack for Remote Direct Memory Access (RDMA). The Ethernet industry responded by developing an enhanced version of Ethernet called Converged Ethernet (CE), featuring Priority Flow Control which is necessary to support RDMA over Converged Ethernet (RoCE). Blade systems with switches supporting CE, and with NICs supporting RDMA, can deliver I/O with lower latency and less CPU usage than previous generations of CNAs. 
HP ProLiant Gen9 blade servers incorporate 20Gb FlexibleLOM NICs which are RDMA NICs. Cisco has introduced RDMA LOM and Mezz NICs called the VIC 1340 and VIC 1380, respectively. 
[Diagram: I/O path without RDMA versus with RDMA.]
RoCE Blade Environment 
Networked Storage Killer Apps for RoCE 
A killer app for RoCE is SMB 3.0 file servers, where users accessing shared storage experience the response time of local storage. File servers turbo-charged with RoCE are commercially available via two Windows Server 2012 features called SMB Multichannel and SMB Direct. With SMB Multichannel, SMB 3.0 automatically detects the RDMA capability and creates multiple RDMA connections for a single session. This allows SMB to use the high throughput, low latency and low CPU utilization offered by SMB Direct. 
HP FlexFabric 20Gb adapters (RDMA NICs) are certified by Microsoft for use in the killer app described above. As of 11/14/14 the VIC 1340 is not certified by Microsoft for SMB Direct. 
Three Hyper-V Clusters and One File Server Cluster Using RDMA 
In this diagram, a single HP BladeSystem, with the HP 6125XLG Ethernet Blade Switches required to support RoCE, provides a high-performance environment for 3 app clusters and 1 file server cluster. Hyper-V automatically senses the presence of RDMA NICs, then uses multi-channel communications to evacuate VMs in seconds, and uses direct memory access for higher I/O to shared storage inside the blade server.
Performance Benefits of RoCE 
IOPS, IOPS per Watt, and Response Time Better with RoCE 
In testing performed in a Windows Storage Server environment using SMB Direct and RoCE, we were able to demonstrate better performance, efficiency and response time compared to last generation technology. 
Server Power Efficiency (IOPS per Watt) 
The HP FlexFabric 20Gb 2-port 650FLB Adapter (Emulex OCe14102) with RoCE, used with Windows Storage Server and SMB Direct, delivered 80% higher server power efficiency than adapters not using RoCE. 
Sequential Read Performance (IOPS) 
The HP FlexFabric 20Gb 2-port 650FLB Adapter (Emulex OCe14102) with RoCE, used with Windows Storage Server and SMB Direct, provided 82% more IOPS than previous generation adapters without RoCE. 
Read I/O Response Time (Seconds) 
The HP FlexFabric 20Gb 2-port 650FLB Adapter (Emulex OCe14102) with RoCE, used with Windows Storage Server and SMB Direct, reduced I/O response time by 70% compared to NICs without SMB Direct capabilities.
The Cost Benefits of RoCE Offload 
Hardware Offload 
A key to achieving efficient use of processing power is adapter offload of networking protocols so that application server CPU cycles are not wasted on network protocol processing. Using a software initiator instead of hardware offload requires that every TCP/IP, FCoE, and iSCSI packet be sent over the PCI bus to the NIC. A constant PCI bus busy state can interfere with traffic to other devices on the PCI bus. 
The lack of offload can have a big impact on CPU utilization. For example, a single adapter running an iSCSI software initiator can utilize 30% of the server CPU for iSCSI protocol processing. Add more adapters and VMs, and more CPU is needed for network protocol processing. 
The lack of offload is expensive. The cost of 30% CPU utilization for a $20,000 server is $6,000 — a cost that can be easily avoided by simply deploying a network adapter with iSCSI offload. 
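As a back-of-the-envelope illustration of this arithmetic, the sketch below (illustrative only, not from the report) computes the dollar value of server capacity lost to protocol processing; the per-adapter scaling at the end is a rough assumption, since the report only states that more adapters and VMs consume more CPU.

```python
# Illustrative sketch: cost of burning host CPU on network protocol processing.

def protocol_processing_cost(server_cost: float, cpu_utilization: float) -> float:
    """Dollar value of the server capacity consumed by protocol processing."""
    return server_cost * cpu_utilization

# Figures from the text: a $20,000 server spending 30% of its CPU on an
# iSCSI software initiator wastes $6,000 of the server investment.
print(protocol_processing_cost(20_000, 0.30))        # 6000.0

# Rough assumption for illustration: each additional software initiator
# adds a similar protocol-processing load.
for adapters in (1, 2, 3):
    print(adapters, protocol_processing_cost(20_000, 0.30) * adapters)
```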
Cisco UCS 1300 Series VIC adapters support TCP, FCoE, NVGRE, VXLAN and RoCE offload. HP FlexFabric adapters add iSCSI offload to that list. It is worth noting that, as of this writing (11/14/14), HP 20Gb adapter VXLAN offload is certified by VMware, while the Cisco VIC 1340/1380 VXLAN offload does not appear on the VMware Compatibility Guide. 
The Lack of Offload Can be Expensive 
Cost of using server for protocol processing @ 30% CPU utilization 
There are a variety of different network protocols supported by adapters, and many are used simultaneously. The more protocol processing that is done in the adapter, the more of your server investment can be applied to applications - instead of network protocol processing.
Why it Matters 
IT consolidation is hugely important because it represents less hardware and simplified management. The utilization of storage media leaped when storage was configured in a SAN and could be shared by many servers. The utilization of physical servers dramatically increased when multiple virtual servers could be hosted on a single physical server. Similarly, network utilization increases when more network protocols can run on a single cable, adapter or switch. 
Consolidation Metrics 
There are two metrics for I/O consolidation: the convergence of network protocols, and the consolidation of cables into higher bandwidth links. 
 Network Convergence 
 Cable Consolidation 
Comparing I/O Consolidation 
Convergence of Network Protocols 
• Convergence 1.0 (2008), LAN/SAN convergence: IP and iSCSI alongside Converged Ethernet (FCoE, Priority Flow Control) 
• Convergence 2.0 (2014), cluster/SDN convergence: IP, iSCSI, FCoE, RDMA, NVGRE and VXLAN on Converged Ethernet 
At the Xeon E5-2600 inflection point, specialized adapters will no longer be needed to support RDMA. The new class of adapters will also support new tunneling protocols which are essential components of software defined data centers.
Wanted: One Blade Server Network for LANs, SANs, Cluster Networks and SDN 
A new best practice for data center managers is to converge traditional shared computing infrastructure with their growing infrastructure for distributed apps. This is made possible by a new generation of network adapters and switches with support for the RDMA, VXLAN and NVGRE protocols. Support for these protocols enables blade servers to converge LANs, SANs, Cluster networks and software defined networks (SDN) in a single environment. It also allows data center managers to use software defined data center tools. 
The HP 20Gb FlexibleLOM adapters support stateless hardware offload of TCP, iSCSI and FCoE protocols for LAN/SAN convergence, as well as hardware offload of RDMA, VXLAN and NVGRE for efficient support of cluster and tunnel traffic. The Cisco VIC 1340 supports all of the same protocols, with hardware offload for all of the above except iSCSI. 
Network Convergence 2.0 a Perfect Fit for a Webscale Private Cloud 
The added support for RDMA over Converged Ethernet, NVGRE and VXLAN allows one adapter port on a blade server to support four network environments. Hardware offload allows the blade server to use precious CPU resources for applications, instead of for network protocol processing. 
[Diagram: Network Convergence 2.0, with one converged fabric serving shared, distributed and SDN environments.]
A Single 40Gb Link Eliminates Cables for 40 x 1Gb Links or 4 x 10Gb Links 
Until recently, 40GbE was used mostly for inter-switch connectivity and in the core of the network. The availability of 40GbE ports on servers sitting on the edge of the network has presented the opportunity for IT pros to consolidate dozens of 1GbE links and handfuls of 10GbE links with a single cable. This is an area where the HP BladeSystem stands out. 
The Cisco UCS architecture makes extensive use of teaming of 10Gb ports to build uplinks with higher bandwidth. That means lots of cables. Even the 40Gb port on the UCS Mini must be split into four cables. In contrast, the HP Virtual Connect Modules on the HP BladeSystem include four 40GbE ports, which in the apples-to-apples comparison below reduce the number of cables needed from 24 to 2. 
Configuring Redundant 40Gb Uplinks for 16 Blade Servers 
This diagram shows an apples-to-apples comparison of 16 blade servers configured with redundant connections between servers and switches, and redundant uplinks. Many more cables are needed in the Cisco UCS configuration because the switches are external, and because of the lack of 40Gb ports. Note that the Cisco UCS Mini has a 40Gb port, but it can only be used in a 4 x 10GbE configuration. 
Cable Consolidation 
[Diagram: redundant uplinks built from 4 x 10Gb links on Cisco UCS (24 cables) versus 1 x 40Gb links on HP (2 cables).]
Why it Matters 
A new era of agility awaits IT organizations that implement cloud operating systems designed to manage multiple software defined data centers. The years required for a generational hardware change will be replaced by the months required to deploy a software update. A foundation for this capability is overlay networks that tunnel L2 traffic across data centers over L3 networks. Support for tunneling protocols is embedded in a new class of network adapters, making it easy for private cloud builders to integrate their servers into a cloud platform. 
Conversely, IT organizations want to continue using native Fibre Channel SANs and want the flexibility to choose “if” and “when” they converge LANs and SANs on Ethernet. 
I/O Flexibility Metrics 
There are two capabilities which are expected to affect I/O flexibility in Webscale private clouds: 
 More efficient delivery of tunnel traffic with hardware offload of tunnel protocol processing 
 Support for native Fibre Channel 
Comparing I/O Flexibility 
Live Migrations a Killer App for VXLAN and NVGRE 
One of the most valuable functions of server virtualization is live migration. This function frees system administrators from the time-consuming and complex process of moving workloads to optimize performance or mitigate a hardware failure. However, moving VMs on different networks requires extensive network reconfiguration. IT organizations using data center infrastructure dispersed in public, private or hybrid clouds simply can’t configure all servers and VMs on one local network, and need a tunneling mechanism to extend live migrations. 
Virtual Extensible LAN (VXLAN) and Network Virtualization using Generic Routing Encapsulation (NVGRE) are protocols for deploying overlay (virtual) networks on top of Layer 3 networks. VXLAN and NVGRE are used to isolate apps and tenants in a cloud and to migrate virtual machines across long distances. 
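For readers who want a concrete picture of what this tunneling does, the following minimal sketch (an illustration, not code from the report or from either vendor's stack) builds the VXLAN encapsulation described in RFC 7348: the original L2 frame is carried as the payload of a UDP datagram (destination port 4789) that begins with an 8-byte VXLAN header holding a 24-bit Virtual Network Identifier. In production this work is performed, and ideally offloaded, by the VTEP in the NIC or vSwitch.

```python
# Illustrative sketch: VXLAN encapsulation of an L2 frame for transport over L3.
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned VXLAN destination port

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags with the 'I' bit set, 24-bit VNI."""
    flags = 0x08 << 24              # 'I' flag: VNI field is valid; rest reserved
    return struct.pack("!II", flags, vni << 8)

def encapsulate(inner_ethernet_frame: bytes, vni: int) -> bytes:
    """UDP payload for a VXLAN tunnel: VXLAN header + original L2 frame.
    The outer Ethernet/IP/UDP headers are added by the sending VTEP."""
    return vxlan_header(vni) + inner_ethernet_frame

payload = encapsulate(b"\x00" * 60, vni=5001)  # dummy 60-byte inner frame
print(len(payload), payload[:8].hex())         # 68 bytes; header encodes VNI 5001
```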
While VXLAN and NVGRE allow live migrations across racks and data centers, RoCE accelerates those migrations. In a Microsoft TechEd demo, a live migration on Windows Server 2012 to a like system took just under 1 minute 26 seconds. Windows Server 2012 R2 performed the same migration in just over 32 seconds. Using RoCE with SMB Direct during the live migration process, it took just under 11 seconds, without consuming added CPU resources. 
Tunneling Unlocks the Cloud 
Efficient use of the cloud requires protocols that allow the creation of virtual networks, and that allow Layer 2 network services to traverse Layer 3 networks without network reconfiguration. 
[Diagram: live migrations across the cloud through overlay network tunnels.]
Storage Networks 
Support for Native Fibre Channel Needed for I/O Flexibility 
Based on IT Brand Pulse surveys, 40% of IT organizations are not converging with FCoE. For the 40% of IT professionals who have been too busy to look at FCoE, or who say they have no plans to converge their LANs and SANs, parallel Ethernet and Fibre Channel infrastructure will be deployed. 
The modular design of blade servers makes them inherently flexible. But not all blade server platforms are equal when it comes to hosting multiple heterogeneous virtualized workloads and delivering I/O flexibility. 
The Cisco UCS blade servers support Ethernet/FCoE connectivity. 
The flexible HP BladeSystem supports Ethernet/FCoE, SAS, InfiniBand and Fibre Channel connectivity. 
Wanted: Parallel Ethernet & Fibre Channel Networks 
In 2014, the prevalent data center network architecture remains a parallel network architecture, including a mix of specialized NIC, iSCSI, and Fibre Channel host adapters, as well as Ethernet and Fibre Channel switched fabrics. Cisco UCS blade servers support only Ethernet connectivity. Adoption of FCoE technology is required to access installed Fibre Channel resources.
Advantage HP, Based on the Three Facets 
The goal of this paper was to examine the features expected to differentiate the performance, consolidation and flexibility of Cisco UCS and HP BladeSystem in Webscale environments. In our review, the advantage goes to HP BladeSystem. The table below highlights key differences between the two blade systems. 
The Products 
Blade Server Systems | Cisco UCS in 5108 Chassis | HP BladeSystem in c7000 Chassis 
Chassis Size | 6U | 10U 
Max. Blade Servers | 8 | 16 
Mid-plane Bandwidth | 1.2Tb/s | 7.16Tb/s 
Max. Embedded Switches | 2 | 8 
Support for native 20Gb Ethernet | No | Yes 
Support for native 40Gb Ethernet (not including 40Gb port used only in 4 x 10Gb mode) | No | Yes 
Support for native Fibre Channel | No | Yes 
Support for native InfiniBand | No | Yes 
Oversubscription | 4:1 | 1.1:1 
Hardware offload: Fibre Channel over Ethernet (FCoE) | Yes | Yes 
Hardware offload: iSCSI | No | Yes 
Hardware offload: TCP offload engine (TOE) | Yes | Yes 
Hardware offload: RoCE offload engine (ROE) | Yes (not qualified with SMB Direct as of 11/14/14) | Yes 
Hardware offload: VXLAN offload engine (VOE) | Yes (not qualified by VMware as of 11/14/14) | Yes 
Hardware offload: NVGRE offload engine (NOE) | Yes | Yes 
Source | Cisco | HP
HP ProLiant Gen9 Blade Server 
Designed for Workloads of the Future 
The HP ProLiant Gen9 Blade Server is designed for I/O flexibility with a choice of HP FlexFabric converged networking or parallel Ethernet and Fibre Channel networks. The HP ProLiant Gen9 Blade Server is also fully compliant with Windows Server 2012 Virtual Fibre Channel—an innovation that will play an important role in the virtualization of Tier-1 workloads with Microsoft Hyper-V. 
HP ProLiant Gen9 Blade Servers in a c7000 Enclosure 
HP Virtual Connect FlexFabric 20/40 F8 module: 
• Supports LAN, SAN, NAS, iSCSI and FCoE connectivity 
• Supports “FlatSAN” direct connectivity to native Fibre Channel 3PAR storage at a lower cost than using Fibre Channel switches 

718203-B21 HP LPe1605 16Gb Fibre Channel HBA: 
• Native Fibre Channel server adapter 
• Over 12 million ports shipped on this stack 
• Complete enterprise OS support 

HP FlexFabric 20Gb 2-port 650FLB Adapter: 
• Ethernet LAN on Motherboard (LOM) or Mezz adapter 
• Dual 10/20GbE ports 
• Supports LAN, NAS, iSCSI and FCoE connectivity 
• Supports RoCE for scale-out cluster connectivity 
• Supports NVGRE and VXLAN for migrating VMs across the cloud
Resources 
Summary 
Infrastructure of the past is functionally defined and purpose-built. Servers are servers, networking is networking and storage is storage. These purpose-built devices are deployed with little ability to change the function as needs change. In the future, infrastructure needs to be more transformative, taking the shape of business demands. 
Potential power and flexibility are locked inside the aging Cisco UCS 5108 chassis, which severely limits the use of new high-bandwidth networks and of any network other than Ethernet/FCoE. 
The new HP BladeSystem answers the call with: 
• A new level of convergence which will allow for resources to be allocated at a very granular level, improving efficiencies and ensuring optimal performance as workload demands change. 
• Interfaces to the software-defined data center. HP ProLiant Gen9 blade servers possess the capability to respond to intelligent orchestration of infrastructure resources in real-time, as applications and user needs change. 
• A cloud-ready architecture that is built to scale out, agile, and always on. 
• Workload-optimized for traditional share-everything applications and new share-nothing applications. 
Related Links 
OCe14000 Test Report 
HP FlexFabric Adapters Provided by Emulex 
HP BladeSystem 
HP Virtual Connect Technology 
HP BladeSystem and Cisco UCS Comparison 
Cisco Fabric Extender 
Cisco UCS Virtual Interface Card 1340 
Cisco UCS 6324 Fabric Interconnect Data Sheet 
Cisco UCS Ethernet Switching Modes 
IT Brand Pulse 
About the Author 
Joe Kimpler is a senior analyst responsible for IT Brand Pulse Labs. Joe’s team manages the delivery of technical services including hands-on testing, product reviews, total cost of ownership studies and product launch collateral. He has over 30 years of experience in information technology and has held senior engineering and marketing positions at Fujitsu, Rockwell Semiconductors, Quantum and QLogic. Joe holds an engineering degree from the University of Illinois and an MBA in marketing.

Vip Mumbai Call Girls Andheri East Call On 9920725232 With Body to body massa...
 
Call Girls Chickpet ☎ 7737669865☎ Book Your One night Stand (Bangalore)
Call Girls Chickpet ☎ 7737669865☎ Book Your One night Stand (Bangalore)Call Girls Chickpet ☎ 7737669865☎ Book Your One night Stand (Bangalore)
Call Girls Chickpet ☎ 7737669865☎ Book Your One night Stand (Bangalore)
 
Escorts Service Sanjay Nagar ☎ 7737669865☎ Book Your One night Stand (Bangalore)
Escorts Service Sanjay Nagar ☎ 7737669865☎ Book Your One night Stand (Bangalore)Escorts Service Sanjay Nagar ☎ 7737669865☎ Book Your One night Stand (Bangalore)
Escorts Service Sanjay Nagar ☎ 7737669865☎ Book Your One night Stand (Bangalore)
 
Just Call Vip call girls Berhampur Escorts ☎️9352988975 Two shot with one gir...
Just Call Vip call girls Berhampur Escorts ☎️9352988975 Two shot with one gir...Just Call Vip call girls Berhampur Escorts ☎️9352988975 Two shot with one gir...
Just Call Vip call girls Berhampur Escorts ☎️9352988975 Two shot with one gir...
 
CHEAP Call Girls in Ashok Nagar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Ashok Nagar  (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICECHEAP Call Girls in Ashok Nagar  (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Ashok Nagar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
 
(👉Ridhima)👉VIP Model Call Girls Mulund ( Mumbai) Call ON 9967824496 Starting ...
(👉Ridhima)👉VIP Model Call Girls Mulund ( Mumbai) Call ON 9967824496 Starting ...(👉Ridhima)👉VIP Model Call Girls Mulund ( Mumbai) Call ON 9967824496 Starting ...
(👉Ridhima)👉VIP Model Call Girls Mulund ( Mumbai) Call ON 9967824496 Starting ...
 
怎样办理维多利亚大学毕业证(UVic毕业证书)成绩单留信认证
怎样办理维多利亚大学毕业证(UVic毕业证书)成绩单留信认证怎样办理维多利亚大学毕业证(UVic毕业证书)成绩单留信认证
怎样办理维多利亚大学毕业证(UVic毕业证书)成绩单留信认证
 
Just Call Vip call girls godhra Escorts ☎️9352988975 Two shot with one girl (...
Just Call Vip call girls godhra Escorts ☎️9352988975 Two shot with one girl (...Just Call Vip call girls godhra Escorts ☎️9352988975 Two shot with one girl (...
Just Call Vip call girls godhra Escorts ☎️9352988975 Two shot with one girl (...
 
Vip Mumbai Call Girls Kalyan Call On 9920725232 With Body to body massage wit...
Vip Mumbai Call Girls Kalyan Call On 9920725232 With Body to body massage wit...Vip Mumbai Call Girls Kalyan Call On 9920725232 With Body to body massage wit...
Vip Mumbai Call Girls Kalyan Call On 9920725232 With Body to body massage wit...
 
VVIP Pune Call Girls Gahunje WhatSapp Number 8005736733 With Elite Staff And ...
VVIP Pune Call Girls Gahunje WhatSapp Number 8005736733 With Elite Staff And ...VVIP Pune Call Girls Gahunje WhatSapp Number 8005736733 With Elite Staff And ...
VVIP Pune Call Girls Gahunje WhatSapp Number 8005736733 With Elite Staff And ...
 
➥🔝 7737669865 🔝▻ Deoghar Call-girls in Women Seeking Men 🔝Deoghar🔝 Escorts...
➥🔝 7737669865 🔝▻ Deoghar Call-girls in Women Seeking Men  🔝Deoghar🔝   Escorts...➥🔝 7737669865 🔝▻ Deoghar Call-girls in Women Seeking Men  🔝Deoghar🔝   Escorts...
➥🔝 7737669865 🔝▻ Deoghar Call-girls in Women Seeking Men 🔝Deoghar🔝 Escorts...
 
Abort pregnancy in research centre+966_505195917 abortion pills in Kuwait cyt...
Abort pregnancy in research centre+966_505195917 abortion pills in Kuwait cyt...Abort pregnancy in research centre+966_505195917 abortion pills in Kuwait cyt...
Abort pregnancy in research centre+966_505195917 abortion pills in Kuwait cyt...
 
怎样办理圣芭芭拉分校毕业证(UCSB毕业证书)成绩单留信认证
怎样办理圣芭芭拉分校毕业证(UCSB毕业证书)成绩单留信认证怎样办理圣芭芭拉分校毕业证(UCSB毕业证书)成绩单留信认证
怎样办理圣芭芭拉分校毕业证(UCSB毕业证书)成绩单留信认证
 
➥🔝 7737669865 🔝▻ Vijayawada Call-girls in Women Seeking Men 🔝Vijayawada🔝 E...
➥🔝 7737669865 🔝▻ Vijayawada Call-girls in Women Seeking Men  🔝Vijayawada🔝   E...➥🔝 7737669865 🔝▻ Vijayawada Call-girls in Women Seeking Men  🔝Vijayawada🔝   E...
➥🔝 7737669865 🔝▻ Vijayawada Call-girls in Women Seeking Men 🔝Vijayawada🔝 E...
 
➥🔝 7737669865 🔝▻ kakinada Call-girls in Women Seeking Men 🔝kakinada🔝 Escor...
➥🔝 7737669865 🔝▻ kakinada Call-girls in Women Seeking Men  🔝kakinada🔝   Escor...➥🔝 7737669865 🔝▻ kakinada Call-girls in Women Seeking Men  🔝kakinada🔝   Escor...
➥🔝 7737669865 🔝▻ kakinada Call-girls in Women Seeking Men 🔝kakinada🔝 Escor...
 
CHEAP Call Girls in Mayapuri (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Mayapuri  (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICECHEAP Call Girls in Mayapuri  (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Mayapuri (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
 

Blade Server I/O and Workloads of the Future (report)

  • 4. Share-Everything Applications + Share-Nothing Applications
Enterprise IT organizations, most of which have become private cloud builders, are blending traditional enterprise and hyperscale IT into a Webscale model. Traditional IT supports workloads such as SQL databases and ERP applications on "share-everything" infrastructure, with many VMs sharing physical servers and many servers sharing networked storage. Webscale IT must support traditional workloads as well as a new generation of workloads such as NoSQL databases and predictive analytics. Many of the new applications are designed to run in "share-nothing" distributed computing environments built on scale-out server and storage clusters.
Private cloud builders are also trending toward cloud platforms like OpenStack and vCloud. These cloud operating systems incorporate a software defined data center architecture that allows a single cloud operating system to manage servers, storage, and networking systems in different data centers. As a result, new cloud tunneling protocols such as VXLAN and NVGRE are being deployed as the foundation of the software defined data center, along with a new generation of NICs that can offload the tunnel protocol processing.
Workload Mix of the Future (figure): Data centers built with a Webscale architecture support traditional workloads in a share-everything environment and new workloads in a distributed environment. Traditional IT + Hyperscale IT = Webscale IT.
  • 5. The Environment for Workloads of the Future
The defining characteristic of a Webscale private cloud is data center infrastructure that efficiently supports two distinctly different application environments: a shared infrastructure environment and a distributed infrastructure environment. A Webscale private cloud also includes an overlapping environment with software defined (virtualized) servers, networking, and storage.
Converged Networks Make It Possible
A key capability of blade servers in a Webscale private cloud is a higher level of network convergence. In the next generation of converged networks (Convergence 2.0), the RDMA network protocol for scale-out clusters and hardware offload of the tunneling protocols that carry L2 traffic over L3 networks are integrated as standard features in Webscale CNAs and/or switches.
Webscale Private Cloud Environment (figure): Shared environments include servers heavily loaded with virtual machines and networked storage shared by many servers. Distributed environments support database and application workloads spread across many servers, plus scale-out storage. Cloud operating platforms such as vCloud and OpenStack are introducing management tools for a software defined data center, including software defined networks.
  • 6. Anatomy of Blade Server I/O
Blade server I/O components (figure labels): Ethernet and Fibre Channel uplinks to LANs and SANs; Ethernet and Fibre Channel downlinks to the mid-plane and server adapters; embedded switches and/or pass-through modules; the mid-plane; Ethernet LAN-on-Motherboard (LOM) adapters; and Converged Network Adapter (CNA) or Fibre Channel mezzanine adapters. The example chassis holds 16 blade servers and 4 switches, with one LOM adapter and one mezzanine adapter on each server.
Application Performance Depends on a Healthy Network
Every blade server chassis has an entire network embedded in it to carry east-west traffic between servers, and north-south traffic to top-of-rack, end-of-row, and core switches upstream. The I/O performance of applications running on blade servers can differ significantly depending on the capabilities of their embedded networks.
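To make the relationships between these components concrete, the following is a minimal Python sketch that models a generic blade chassis, its servers, and their adapter ports, and sums the downlink bandwidth the mid-plane must carry. The class names and the example configuration (16 blades, each with a 2-port 20Gb LOM and a 2-port 20Gb mezzanine adapter) are illustrative assumptions, not a specification of either vendor's product.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class AdapterPort:
        speed_gb: int                    # link speed in Gb/s, e.g. 10 or 20

    @dataclass
    class BladeServer:
        name: str
        lom_ports: List[AdapterPort]     # LAN-on-Motherboard ports
        mezz_ports: List[AdapterPort]    # mezzanine adapter ports

        def downlink_gb(self) -> int:
            # Bandwidth this blade presents to the mid-plane
            return sum(p.speed_gb for p in self.lom_ports + self.mezz_ports)

    @dataclass
    class BladeChassis:
        servers: List[BladeServer] = field(default_factory=list)

        def total_downlink_gb(self) -> int:
            # Aggregate bandwidth the mid-plane and embedded switches must carry
            return sum(s.downlink_gb() for s in self.servers)

    # Illustrative 16-blade chassis: each blade has 2 x 20Gb LOM and 2 x 20Gb mezzanine ports
    chassis = BladeChassis([
        BladeServer(f"blade-{i}", [AdapterPort(20)] * 2, [AdapterPort(20)] * 2)
        for i in range(1, 17)
    ])
    print(chassis.total_downlink_gb(), "Gb of server downlink bandwidth")   # 1280 Gb

The same structure can be reused with 10Gb ports and 8 blades to model a smaller chassis, which is the comparison the oversubscription section below walks through.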
  • 7. The Blade Servers: Cisco UCS and HP BladeSystem
In the following pages we will compare the performance, network convergence, flexibility, and software defined networking of the Cisco UCS in a 5108 chassis and the HP BladeSystem in a c7000 Platinum chassis.
The Products (Cisco UCS in 5108 chassis vs. HP BladeSystem in c7000 chassis):
- Chassis size: 6U vs. 10U
- Max. blade servers: 8 vs. 16
- Mid-plane bandwidth: 1.2Tb/s vs. 7.168Tb/s
- Server downlinks: 10Gb vs. 20Gb
- Chassis uplinks: 10Gb vs. 10/40Gb
- Interconnect options: Ethernet/FCoE vs. Ethernet/FCoE, native Fibre Channel, SAS, InfiniBand
- I/O slots: 2 vs. 8
  • 8. Comparing I/O Performance
Why It Matters
Meeting application performance service levels is directly related to the I/O performance of a blade server system. In addition, the new generation of servers with Xeon E5-2600 processors, hosting a generation of demanding new applications, needs higher-bandwidth and lower-latency I/O than ever before. And in Webscale private cloud environments, that performance must be delivered more cost-effectively than ever before, bringing CPU efficiency to the forefront of important performance metrics.
I/O Performance Metrics
In the following pages, we will examine the capabilities of Cisco UCS and HP BladeSystem against the following I/O performance metrics:
- Bandwidth
- Usable bandwidth
- Latency
- CPU efficiency
  • 9. I/O Bandwidth
80GbE is Specmanship
There are discussions in the blogosphere about how UCS achieves 80Gb of bandwidth per blade. Based on the Cisco UCS B200 M4 Blade Server Spec Sheet, that scenario refers to a Cisco B200 M4 blade configured with a VIC 1340 adapter and an added mezzanine card (port expander), which allows four 10Gb links to each I/O Module (2208 FEX) for a total of 80Gb of bandwidth (2 x 4 x 10Gb).
40GbE is Expensive
From a pure technology point of view, 40GbE is a perfect solution for delivering the performance needed in a single server link and eliminating the need for teaming. But the cost per port of 40GbE network adapters may be up to 3x the cost per port of 10GbE adapters. In another case of specmanship, Cisco is promoting the availability of a 40Gb port on the new 6324 Fabric Interconnect (FI) for the UCS Mini. However, as of the writing of this report, that 40Gb port, called a Scalability Port, is not a native 40GbE port and can only be used to break out to four 1GbE or 10GbE SFP+ connections (4 x 1Gb or 4 x 10Gb). In addition, this port requires an expensive software license to activate.
20GbE is Juuust Right
A choice that has only recently become available to server architects is 20GbE. Each 20GbE port offers bandwidth equivalent to twenty 1GbE ports or two 10GbE ports. 20GbE is juuust right because a single 20GbE port provides enough bandwidth for all but the most I/O-intensive supercomputing applications, and it is available for a fraction of the price of 40GbE technology.
According to the Cisco UCS B200 M4 Blade Server Spec Sheet, all Cisco UCS 5108 mid-plane, FEX, and FI network connectivity ports are currently 10GbE, including the 40Gb Scalability Port on the 6324 FI, which must be split into multiple 10GbE ports. The HP BladeSystem provides 20GbE links between blade server adapters and the chassis interconnects, as well as 20GbE inter-switch links. With HP Flex-20 technology, Ethernet network adapters deliver twice the bandwidth of 10Gb adapters while reducing the management overhead associated with multiple 10Gb adapters. With 20Gb downlinks, HP Virtual Connect FlexFabric-20/40 F8 modules offer more than twice the throughput of other 10Gb extenders and fabric interconnects. In addition, ports on the HP Virtual Connect FlexFabric-20/40 F8 modules can be dynamically configured to support Ethernet, Fibre Channel, or FCoE.
  • 10. Oversubscription
Almost No Oversubscription with HP BladeSystem
Oversubscription occurs when the I/O capacity of the adapter ports connected to chassis switch ports exceeds the capacity of the switch ports. The oversubscription ratio is the sum of the capacity of the adapter ports divided by the capacity of the chassis interconnect ports. Below you can see that if you actually configured 80Gb of bandwidth per UCS blade as described above, you would be building a blade server network with 4:1 oversubscription. In contrast, a comparably configured HP BladeSystem would run at 1.1:1 oversubscription, coming close to eliminating oversubscription entirely. The arithmetic is shown in the figure and reproduced in the sketch that follows.
Cisco UCS (oversubscription = 4:1):
- Adapter capacity: 4 x 10Gb ports from the VIC plus 4 x 10Gb ports from the expansion card (80Gb per blade), x 8 servers = 640Gb
- Interconnect capacity: 8 x 10Gb ports x 2 I/O modules = 160Gb
HP BladeSystem (oversubscription = 1.1:1):
- Adapter capacity: 2 x 20Gb ports from the FlexibleLOM plus 2 x 20Gb ports from the mezzanine card per server, x 16 servers = 1,280Gb from the mid-plane to the 4 HP Virtual Connect modules
- Interconnect capacity: 4 HP Virtual Connect modules, each with 4 x 40Gb ports + 8 x 10Gb ports + 2 x 20Gb ISL ports = 1,120Gb
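The ratio defined above can be reproduced with a few lines of Python. This is a sketch of the calculation only, using the per-chassis figures quoted in this section; the function name is illustrative and nothing here is a measurement.

    def oversubscription(adapter_gb: float, interconnect_gb: float) -> float:
        """Ratio of server-facing adapter capacity to chassis interconnect capacity."""
        return adapter_gb / interconnect_gb

    # Cisco UCS 5108, per the configuration above:
    # 8 blades x (4 x 10Gb VIC + 4 x 10Gb port expander) = 640 Gb of adapter capacity
    # 2 I/O modules x 8 x 10Gb ports = 160 Gb of interconnect capacity
    print(round(oversubscription(640, 160), 1))     # 4.0  -> 4:1

    # HP BladeSystem c7000, per the configuration above:
    # 16 blades x (2 x 20Gb FLOM + 2 x 20Gb mezzanine) = 1,280 Gb of adapter capacity
    # 4 Virtual Connect modules x (4 x 40Gb + 8 x 10Gb + 2 x 20Gb ISL) = 1,120 Gb
    print(round(oversubscription(1280, 1120), 1))   # 1.1  -> 1.1:1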
  • 11. What Oversubscription Means
Blade Server I/O Hits the Wall
If you configured 80Gb of bandwidth per blade on both a Cisco UCS and an HP BladeSystem, the Cisco 5108 chassis interconnects would be oversubscribed with the second server. In contrast, fifteen HP blade servers can be configured before reaching the bandwidth limit of the HP BladeSystem c7000 Platinum chassis interconnects.
Number of Blade Servers It Takes to Hit the Limit of Chassis Interconnect Bandwidth (figure): Two fully configured UCS blade servers hit the limit of the 5108 fabric extenders (FEX), which provide 160Gb/s of chassis interconnect bandwidth. It takes fifteen fully configured HP ProLiant Gen9 blade servers to hit the 1.12Tb/s bandwidth limit of the HP FlexFabric modules.
  • 12. RDMA over Converged Ethernet (RoCE)
InfiniBand networks were invented to avoid plowing through the Ethernet protocol stack to complete an I/O transaction: InfiniBand boosts performance by eliminating layers of the stack through Remote Direct Memory Access (RDMA). The Ethernet industry responded by developing an enhanced version of Ethernet called Converged Ethernet (CE), featuring the Priority Flow Control needed to support RDMA over Converged Ethernet (RoCE). Blade systems with switches supporting CE and NICs supporting RDMA can deliver I/O with lower latency and less CPU usage than previous generations of CNAs. HP ProLiant Gen9 blade servers incorporate 20Gb FlexibleLOM NICs, which are RDMA NICs. Cisco has introduced RDMA-capable LOM and mezzanine NICs, the VIC 1340 and VIC 1380, respectively.
(Figure: the I/O path without RDMA vs. the I/O path with RDMA.)
  • 13. RoCE Blade Environment
Killer Apps for RoCE: Networked Storage
A killer app for RoCE is the SMB 3.0 file server, where users accessing shared storage experience the response time of local storage. File servers turbo-charged with RoCE are commercially available via two Windows Server 2012 features called SMB Multichannel and SMB Direct. With SMB Multichannel, SMB 3.0 automatically detects RDMA capability and creates multiple RDMA connections for a single session, allowing SMB to use the high throughput, low latency, and low CPU utilization offered by SMB Direct. HP FlexFabric 20Gb adapters (RDMA NICs) are certified by Microsoft for this use. As of 11/14/14, the VIC 1340 is not certified by Microsoft for SMB Direct.
Three Hyper-V Clusters and One File Server Cluster Using RDMA (figure): A single HP BladeSystem with the HP 6125XLG Ethernet blade switches required to support RoCE provides a high-performance environment for three application clusters and one file server cluster. Hyper-V automatically senses the presence of RDMA NICs, then uses multichannel communications to evacuate VMs in seconds, and uses direct memory access for higher I/O to shared storage inside the blade server.
  • 14. Performance Benefits of RoCE
IOPS, IOPS per Watt, and Response Time Are Better with RoCE
In testing performed in a Windows Storage Server environment using SMB Direct and RoCE, we were able to demonstrate better performance, efficiency, and response time compared to last-generation technology.
- Server power efficiency (IOPS per watt): The HP FlexFabric 20Gb 2-port 650FLB Adapter (Emulex OCe14102) with RoCE, used with Windows Storage Server and SMB Direct, delivered 80% higher server power efficiency than adapters not using RoCE.
- Sequential read performance (IOPS): The same adapter with RoCE provided 82% more IOPS than previous-generation adapters without RoCE.
- Read I/O response time (seconds): The same adapter with RoCE reduced I/O response time by 70% compared to NICs without SMB Direct capabilities.
  • 15. The Cost Benefits of RoCE Offload
Hardware Offload
A key to achieving efficient use of processing power is adapter offload of networking protocols, so that application server CPU cycles are not wasted on network protocol processing. Using a software initiator instead of hardware offload requires that every TCP/IP, FCoE, and iSCSI packet be sent over the PCI bus to the NIC, and a constantly busy PCI bus can interfere with traffic to other devices on the bus. The lack of offload can have a big impact on CPU utilization: a single adapter running an iSCSI software initiator can consume 30% of the server CPU for iSCSI protocol processing, and adding more adapters and VMs consumes still more CPU for network protocol processing.
The lack of offload is also expensive. The cost of 30% CPU utilization on a $20,000 server is $6,000, a cost that can be easily avoided by simply deploying a network adapter with iSCSI offload; the arithmetic is worked through in the sketch below.
Cisco UCS 1300 Series VIC adapters support TCP, FCoE, NVGRE, VXLAN, and RoCE offload. HP FlexFabric adapters add iSCSI offload to that list. It is worth noting that, at the time this report was written, HP 20Gb adapter VXLAN offload is certified by VMware, while as of 11/14/14 the Cisco VIC 1340/1380 VXLAN offload does not appear on the VMware Compatibility Guide.
The Lack of Offload Can Be Expensive (figure): cost of using the server for protocol processing at 30% CPU utilization. There are a variety of network protocols supported by adapters, and many are used simultaneously. The more protocol processing that is done in the adapter, the more of your server investment can be applied to applications instead of network protocol processing.
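The $6,000 figure quoted above is simple arithmetic; the sketch below generalizes it so the same estimate can be run for other server prices or offload scenarios. The function name is illustrative, and the 30% utilization figure is the one cited in this section, not a new measurement.

    def protocol_processing_cost(server_price: float, cpu_fraction_for_protocol: float) -> float:
        """Portion of the server investment consumed by network protocol processing
        instead of applications, for a given fraction of CPU used by the stack."""
        return server_price * cpu_fraction_for_protocol

    # Example from this section: a software iSCSI initiator consuming ~30% of a $20,000 server
    print(protocol_processing_cost(20_000, 0.30))   # 6000.0

    # With full iSCSI offload in the adapter, the fraction spent on the stack approaches zero
    print(protocol_processing_cost(20_000, 0.0))    # 0.0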
  • 16. Comparing I/O Consolidation
Why It Matters
IT consolidation is hugely important because it means less hardware and simplified management. The utilization of storage media leaped when storage was configured in a SAN and could be shared by many servers. The utilization of physical servers dramatically increased when multiple virtual servers could be hosted on a single physical server. Similarly, network utilization increases when more network protocols can run on a single cable, adapter, or switch.
Consolidation Metrics
There are two metrics for I/O consolidation:
- Network convergence (the convergence of network protocols)
- Cable consolidation (the consolidation of cables into higher-bandwidth links)
Convergence of Network Protocols (figure): Convergence 1.0 (2008) delivered LAN/SAN convergence, carrying IP and iSCSI over Converged Ethernet (FCoE, Priority Flow Control). Convergence 2.0 (2014) adds cluster/SDN convergence, carrying IP, iSCSI, FCoE, RDMA, NVGRE, and VXLAN over Converged Ethernet. At the Xeon E5-2600 inflection point, specialized adapters will no longer be needed to support RDMA, and the new class of adapters will also support the new tunneling protocols which are essential components of software defined data centers.
  • 17. Network Convergence
Wanted: One Blade Server Network for LANs, SANs, Cluster Networks, and SDN
A new best practice for data center managers is to converge traditional shared computing infrastructure with their growing infrastructure for distributed apps. This is made possible by a new generation of network adapters and switches that support the RDMA, VXLAN, and NVGRE protocols. Support for these protocols enables blade servers to converge LANs, SANs, cluster networks, and software defined networks (SDN) in a single environment. It also allows data center managers to use software defined data center tools. The HP 20Gb FlexibleLOM adapters support stateless hardware offload of the TCP, iSCSI, and FCoE protocols for LAN/SAN convergence, as well as hardware offload of RDMA, VXLAN, and NVGRE for efficient support of cluster and tunnel traffic. The Cisco VIC 1340 supports all of the same protocols, with hardware offload for all of the above except iSCSI.
Network Convergence 2.0 Is a Perfect Fit for a Webscale Private Cloud (figure): The added support for RDMA over Converged Ethernet, NVGRE, and VXLAN allows one adapter port on a blade server to support four network environments: shared LAN, SAN, distributed cluster, and SDN traffic. Hardware offload allows the blade server to use precious CPU resources for applications instead of network protocol processing.
  • 18. Cable Consolidation
A Single 40Gb Link Eliminates the Cables for 40 x 1Gb Links or 4 x 10Gb Links
Until recently, 40GbE was used mostly for inter-switch connectivity and in the core of the network. The availability of 40GbE ports on servers sitting at the edge of the network gives IT pros the opportunity to consolidate dozens of 1GbE links and handfuls of 10GbE links onto a single cable. This is an area where the HP BladeSystem stands out. The Cisco UCS architecture makes extensive use of teaming of 10Gb ports to build higher-bandwidth uplinks, which means lots of cables; even the 40Gb port on the UCS Mini must be split into four cables. In contrast, the HP Virtual Connect modules on the HP BladeSystem include four 40GbE ports, which in the apples-to-apples comparison below reduced the number of cables needed from 24 to 2.
Configuring Redundant 40Gb Uplinks for 16 Blade Servers (figure): An apples-to-apples comparison of 16 blade servers configured with redundant connections between servers and switches, and redundant uplinks. Many more cables are needed in the Cisco UCS configuration (24 cables, using 4 x 10Gb uplinks) because the switches are external and because of the lack of native 40Gb ports; note that the UCS Mini has a 40Gb port, but it can only be used in a 4 x 10GbE configuration. The HP configuration needs only 2 cables (1 x 40Gb uplinks).
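As a rough illustration of the cable arithmetic behind this comparison, the sketch below counts how many physical links are needed to provide a given amount of uplink bandwidth per redundant path when each cable carries 1Gb, 10Gb, or 40Gb. It is a simplified, bandwidth-only model with an assumed function name; it ignores port groups, breakout rules, and the specific wiring of either chassis.

    import math

    def cables_needed(uplink_gb_per_path: float, link_speed_gb: float, redundant_paths: int = 2) -> int:
        """Number of physical cables to deliver the requested bandwidth on each redundant path."""
        return redundant_paths * math.ceil(uplink_gb_per_path / link_speed_gb)

    # 40Gb of uplink bandwidth per redundant path, built from different link speeds
    for speed in (1, 10, 40):
        print(f"{speed:>2}Gb links: {cables_needed(40, speed)} cables")
    # 1Gb links: 80 cables; 10Gb links: 8 cables; 40Gb links: 2 cables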
  • 19. Comparing I/O Flexibility
Why It Matters
A new era of agility awaits IT organizations that implement cloud operating systems designed to manage multiple software defined data centers. The years required for a generational hardware change will be replaced by the months required to deploy a software update. A foundation for this capability is overlay networks that tunnel L2 traffic across data centers over L3 networks. Support for tunneling protocols is embedded in a new class of network adapters, making it easy for private cloud builders to integrate their servers into a cloud platform. Conversely, many IT organizations want to continue using native Fibre Channel SANs and want the flexibility to choose if and when they converge LANs and SANs on Ethernet.
I/O Flexibility Metrics
There are two capabilities expected to affect I/O flexibility in Webscale private clouds:
- More efficient delivery of tunnel traffic through hardware offload of tunnel protocol processing
- Support for native Fibre Channel
  • 20. Live Migrations: a Killer App for VXLAN and NVGRE
One of the most valuable functions of server virtualization is live migration. This function frees system administrators from the time-consuming and complex process of moving workloads to optimize performance or mitigate a hardware failure. However, moving VMs across different networks normally requires extensive network reconfiguration. IT organizations using data center infrastructure dispersed across public, private, or hybrid clouds simply can't configure all servers and VMs on one local network, and need a tunneling mechanism to extend live migrations. Virtual Extensible LAN (VXLAN) and Network Virtualization using Generic Routing Encapsulation (NVGRE) are protocols for deploying overlay (virtual) networks on top of Layer 3 networks. VXLAN and NVGRE are used to isolate apps and tenants in a cloud and to migrate virtual machines across long distances.
While VXLAN and NVGRE allow live migrations across racks and data centers, RoCE accelerates those migrations. In a Microsoft TechEd demo, a live migration on Windows Server 2012 to a like system took just under 1 minute 26 seconds. Windows Server 2012 R2 performed the same migration in just over 32 seconds. Using RoCE with SMB Direct during the live migration process, it took just under 11 seconds, without consuming added CPU resources.
Tunneling Unlocks the Cloud (figure): Efficient use of the cloud requires protocols that allow the creation of virtual networks and allow Layer 2 network services to traverse Layer 3 networks without network reconfiguration, so that live migrations can cross the cloud through overlay network tunnels.
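To illustrate what a tunnel-offload adapter is relieving the CPU of, the sketch below builds the 8-byte VXLAN header defined in RFC 7348 and tallies the outer-header overhead added to every tunneled frame (outer Ethernet + IPv4 + UDP + VXLAN, roughly 50 bytes). The function name is illustrative, and this is a format illustration only; in practice the encapsulation is performed by the NIC or virtual switch, not by application code.

    import struct

    VXLAN_UDP_PORT = 4789   # IANA-assigned destination port for VXLAN

    def vxlan_header(vni: int) -> bytes:
        """8-byte VXLAN header: flags with the I bit set, 24-bit VNI, reserved fields zero."""
        flags = 0x08 << 24                      # 'I' flag: VNI field is valid
        return struct.pack("!II", flags, vni << 8)

    hdr = vxlan_header(vni=5001)
    print(len(hdr), "byte VXLAN header for VNI 5001")

    # Per-packet overlay overhead for VXLAN over IPv4 (no VLAN tag on the outer frame)
    outer_overhead = 14 + 20 + 8 + len(hdr)     # Ethernet + IPv4 + UDP + VXLAN
    print(outer_overhead, "bytes of encapsulation added to every tunneled frame")   # 50

Every one of those encapsulations and decapsulations happens on every packet of a tunneled flow, which is why moving the work into adapter hardware matters for migration and tenant traffic at scale.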
  • 21. Storage Networks
Support for Native Fibre Channel Is Needed for I/O Flexibility
Based on IT Brand Pulse surveys, 40% of IT organizations are not converging with FCoE. For these IT professionals, who have been too busy to look at FCoE or who say they have no plans to converge their LANs and SANs, parallel Ethernet and Fibre Channel infrastructure will continue to be deployed. The modular design of blade servers makes them inherently flexible, but not all blade server platforms are equal when it comes to hosting multiple heterogeneous virtualized workloads and delivering I/O flexibility. Cisco UCS blade servers support Ethernet/FCoE connectivity. The more flexible HP BladeSystem supports Ethernet/FCoE, SAS, InfiniBand, and native Fibre Channel connectivity.
Wanted: Parallel Ethernet and Fibre Channel Networks (figure): In 2014, the prevalent data center network architecture remains a parallel network architecture, with a mix of specialized NIC, iSCSI, and Fibre Channel host adapters, as well as Ethernet and Fibre Channel switched fabrics. Cisco UCS blade servers support only Ethernet connectivity; adoption of FCoE technology is required to access installed Fibre Channel resources.
  • 22. Advantage HP, Based on the Three Facets Compared
The goal of this paper was to examine the features expected to differentiate the performance, consolidation, and flexibility of Cisco UCS and HP BladeSystem in Webscale environments. In our review, the advantage goes to HP BladeSystem. The table below highlights key differences between the two blade systems (Cisco UCS in 5108 chassis vs. HP BladeSystem in c7000 chassis; source: Cisco and HP).
- Chassis size: 6U vs. 10U
- Max. blade servers: 8 vs. 16
- Mid-plane bandwidth: 1.2Tb/s vs. 7.16Tb/s
- Max. embedded switches: 2 vs. 8
- Support for native 20Gb Ethernet: No vs. Yes
- Support for native 40Gb Ethernet (not counting a 40Gb port usable only in 4 x 10Gb mode): No vs. Yes
- Support for native Fibre Channel: No vs. Yes
- Support for native InfiniBand: No vs. Yes
- Oversubscription: 4:1 vs. 1.1:1
- Hardware offload, Fibre Channel over Ethernet (FCoE): Yes vs. Yes
- Hardware offload, iSCSI: No vs. Yes
- Hardware offload, TCP offload engine (TOE): Yes vs. Yes
- Hardware offload, RoCE: Yes (not qualified with SMB Direct as of 11/14/14) vs. Yes
- Hardware offload, VXLAN: Yes (not qualified by VMware as of 11/14/14) vs. Yes
- Hardware offload, NVGRE: Yes vs. Yes
  • 23. HP ProLiant Gen9 Blade Server: Designed for Workloads of the Future
The HP ProLiant Gen9 blade server is designed for I/O flexibility, with a choice of HP FlexFabric converged networking or parallel Ethernet and Fibre Channel networks. The HP ProLiant Gen9 blade server is also fully compliant with Windows Server 2012 Virtual Fibre Channel, an innovation that will play an important role in the virtualization of Tier-1 workloads with Microsoft Hyper-V.
HP ProLiant Gen9 Blade Servers in a c7000 Enclosure (figure):
- The HP Virtual Connect FlexFabric 20/40 F8 module supports "FlatSAN" direct connectivity to native Fibre Channel 3PAR storage at a lower cost than using Fibre Channel switches, and supports LAN, SAN, NAS, iSCSI, and FCoE connectivity.
- HP LPe1605 16Gb Fibre Channel HBA (718203-B21): a native Fibre Channel server adapter, with over 12 million ports shipped on this stack and complete enterprise OS support.
- HP FlexFabric 20Gb 2-port 650FLB Adapter: an Ethernet LAN-on-Motherboard (LOM) or mezzanine adapter with dual 10/20GbE ports; supports LAN, NAS, iSCSI, and FCoE connectivity; supports RoCE for scale-out cluster connectivity; and supports NVGRE and VXLAN for migrating VMs across the cloud.
  • 24. Summary and Resources
Summary
Infrastructure of the past is functionally defined and purpose-built: servers are servers, networking is networking, and storage is storage. These purpose-built devices are deployed with little ability to change function as needs change. In the future, infrastructure needs to be more transformative, taking the shape of business demands. Potential power and flexibility are locked inside the aging Cisco UCS 5108 chassis, which severely limits the use of new high-bandwidth networks and any network other than Ethernet/FCoE. The new HP BladeSystem answers the call with:
• A new level of convergence that allows resources to be allocated at a very granular level, improving efficiency and ensuring optimal performance as workload demands change.
• Interfaces to the software defined data center. HP ProLiant Gen9 blade servers can respond to intelligent orchestration of infrastructure resources in real time, as applications and user needs change.
• A cloud-ready architecture that is ready to scale out, agile, and always on.
• Workload optimization for traditional share-everything applications and new share-nothing applications.
Related Links: OCe14000 Test Report; HP FlexFabric Adapters Provided by Emulex; HP BladeSystem; HP Virtual Connect Technology; HP BladeSystem and Cisco UCS Comparison; Cisco Fabric Extender; Cisco UCS Virtual Interface Card 1340; Cisco UCS 6324 Fabric Interconnect Data Sheet; Cisco UCS Ethernet Switching Modes; IT Brand Pulse.
About the Author
Joe Kimpler is a senior analyst responsible for IT Brand Pulse Labs. Joe's team manages the delivery of technical services including hands-on testing, product reviews, total cost of ownership studies, and product launch collateral. He has over 30 years of experience in information technology and has held senior engineering and marketing positions at Fujitsu, Rockwell Semiconductors, Quantum, and QLogic. Joe holds an engineering degree from the University of Illinois and an MBA in marketing.