Build the Optimal Mainframe Storage Architecture With
Hitachi Data Systems and Brocade
Why Choose an IBM® FICON® Switched Network?
WHITEPAPER
By Bill Martin, Hitachi Data Systems
Stephen Guendert, PhD, Brocade
April 2014
Contents
Executive Summary
Introduction
Why Networked FICON Storage Is Better Than Direct-Attached Storage
Hitachi Virtual Storage Platform G1000
Why Brocade Gen5 DCX 8510 Is the Best FICON Director
An Ideal Pairing: Hitachi Virtual Storage Platform G1000 and Brocade Gen5 DCX 8510
Why IT Should Choose Networked Storage for FICON Over Direct-Attached Storage
Technical Reasons for a Switched FICON Architecture
Business Reasons for a Switched FICON Architecture
Why Switched FICON: Summary
Hitachi Virtual Storage Platform G1000
Scalability
Performance
IBM 3390 and FICON Support
Hitachi Dynamic Provisioning
Hitachi Dynamic Tiering
Hitachi Remote Replication
Multiplatform Support
Cost-Savings Efficiencies
Brocade Gen5 DCX 8510 in Mainframe Environments
Reliability, Availability and Serviceability
Proactive Performance Management
Scalability
Pair the 2 Platforms Together
Traditional (z/OS) Mainframe Environments
Linux on the Mainframe
FICON and FCP Intermix
Private Cloud
Conclusion
Executive Summary
The IBM® System z® and newer zEnterprise® platforms, in other words, mainframes, continue to be a critical foundation in
the IT infrastructure of many large companies today. An important element of the mainframe environment is the disk
storage system (subsystem) that is connected to the mainframe via channels. The overall reliability, availability and
performance of mainframe-based applications are dependent on this storage system.
The performance demands, capacity, reliability, flexibility, efficiency and cost-effectiveness of the storage system
are important aspects of any storage acquisition and configuration decision. The increasing demands for improved
performance, that is, throughput (IOPS) and response time, make the storage system a critical element of the IT
infrastructure. Another key factor in configuring the storage system is the decision of how it should be connected
to the mainframe channels: direct attached to an IBM FICON®
channel or through a switched FICON network. This
decision impacts the flexibility, reliability and availability of the storage infrastructure and the efficiency of the storage
administrators.
Hitachi Virtual Storage Platform G1000 (VSP G1000) is a high-performance enterprise-class storage system that pro-
vides a comprehensive set of storage and data services. These provide mainframe users with a cost-effective, highly
reliable and available storage platform that delivers outstanding performance, capacity and scalability. VSP G1000
supports the operating systems used with IBM zEnterprise processors, including z/OS®, z/VSE®, z/VM®, and Linux
on System z. This industry-leading storage series provides IBM 3390 disk drive support across a variety of disk drive
types to meet the variety of performance and capacity needs of mainframe environments. The platform provides
an internal physical disk capacity of approximately 5PB per storage system. With externally attached storage, VSP
G1000 can support up to 255PB of storage capacity. It supports 8Gb/sec FICON across all front-end ports for con-
nectivity to the mainframe and 8Gb/sec Fibre Channel for connecting external storage.
Using a FICON network configured with a switch or director to connect a storage system to the mainframe channels
can significantly enhance reliability, flexibility and availability of storage systems. At the same time, it can maximize
storage performance and throughput. A switched FICON network allows the implementation of a fan-in, fan-out con-
figuration, which allows maximum resource utilization and simultaneously helps localize failures, improving availability.
The Brocade Gen5 DCX 8510 is a backbone-class FICON or Fibre Channel director. The Brocade Gen5 DCX 8510
family of FICON directors provides the industry's most powerful switching infrastructure for modern mainframe envi-
ronments. It provides the most reliable, scalable, efficient, cost-effective, high-performance foundation for today's
highly virtualized mainframe environments. The Brocade Gen5 DCX 8510 builds upon years of innovation and experi-
ence and leverages the core technology of Brocade systems, providing over 99.999% uptime in the world's most
demanding data centers. The Gen5 DCX 8510 supports the operating systems used with zEnterprise processors:
z/OS, z/VSE, z/VM, Linux on System z, and zTPF for System z. This industry-leading FICON director supports 2,
4, 8, 10, and 16Gb/sec Fibre Channel links, FICON I/O traffic, and 1 gigabit Ethernet (GbE) or 10GbE links on Fibre
Channel over IP (FCIP). At the same time, it provides 8.2Tb/sec chassis bandwidth.
The combination of switched FICON connectivity with Hitachi VSP G1000 connected to mainframe channels through
a Brocade Gen5 DCX 8510 Director provides a powerful, flexible and highly available solution. Together, they support
the storage features, performance and capacity needed for today's mainframe environments.
Introduction
This paper explores both technical and business reasons for implementing a switched FICON architecture instead of
a direct-attached storage FICON architecture. It also explains why Hitachi Virtual Storage Platform G1000 and the
Brocade FICON Director together provide an outstanding, industry-leading solution for FICON environments.
With the many enhancements and improvements in mainframe I/O technology in the past 5 years, the question
"Do I need FICON switching technology, or should I go with direct-attached storage?" is frequently asked. With up
to 320 FICON Express8S channels supported on the IBM zEnterprise z196, z114, zEC12 and zBC12, why not
just direct-attach the control units? The short answer is that with all of the I/O improvements, switching technology
is needed — now more than ever. In fact, there are more reasons to use switched FICON than there were to use
switched IBM ESCON®. Some of these reasons are purely technical; others are more business-related.
Why Networked FICON Storage Is Better Than Direct-Attached Storage
The raw bandwidth of FICON Express8S running on IBM zEnterprise systems is 40 times greater than the capabilities
of ESCON. The raw I/Os per second (IOPS) capacity of FICON Express8S channels is even more impressive, par-
ticularly when a channel program uses the z High Performance FICON (zHPF) protocol. To utilize these tremendous
improvements, the FICON protocol is packet-switched and, unlike ESCON, allows multiple I/Os to occupy the same channel simultaneously.
FICON Express8S channels on zEnterprise processors can have up to 64 concurrent I/Os (open exchanges) to dif-
ferent devices. FICON Express8S channels running zHPF can have up to 750 concurrent I/Os on the zEnterprise
processor family. Only when a director or switch is used between the host and storage device can the true perfor-
mance potential of channel bandwidth and I/O processing gains be fully exploited.
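As a rough illustration of why this concurrency matters, Little's law relates the number of concurrent I/Os (open exchanges) a channel can sustain to its achievable I/O rate. The sketch below uses the open-exchange limits cited above; the 0.5 ms response time is a hypothetical figure chosen only to show the arithmetic, not a measured value.

```python
# Little's law sketch: concurrency = IOPS x response time, so the open-exchange
# limit puts a ceiling on the IOPS a single channel can drive.
# The response-time figure is hypothetical.

def max_iops(open_exchanges: int, response_time_ms: float) -> float:
    """Upper bound on IOPS given a concurrency limit and per-I/O response time."""
    return open_exchanges / (response_time_ms / 1000.0)

# Command-mode FICON Express8S: up to 64 open exchanges.
print(max_iops(64, 0.5))    # -> 128000.0 IOPS ceiling at 0.5 ms per I/O

# zHPF (transport mode): up to 750 open exchanges.
print(max_iops(750, 0.5))   # -> 1500000.0 IOPS ceiling at 0.5 ms per I/O
```

The point is that only a workload with many concurrent I/Os in flight, typically aggregated through a switch from many devices, can approach these ceilings.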
Hitachi Virtual Storage Platform G1000
Hitachi Virtual Storage Platform G1000, with its vast functionality and throughput capability, is ideal for IBM mainframe
environments and provides a comprehensive set of storage and data services. With its flexibility in configuring, partitioning and tiering storage, VSP G1000 easily supports mainframe environments with multiple LPARs running multiple operating system images in the same sysplex.
The packaging, enhanced features and improved manageability of VSP G1000 provide mainframe users with a cost-
effective, highly reliable and available storage platform. It delivers outstanding performance, capacity and scalability.
The storage platform easily supports both mainframe and open systems environments. For mainframe environments,
it supports z/OS, z/VSE, z/VM and zTPF. Many organizations are considering the benefits of running Linux on IBM
zEnterprise processors. VSP G1000 supports this capability for both count key device (CKD) and fixed-block archi-
tecture (FBA) disk formats and provides a solid foundation for implementing private clouds.
With support for FICON Express8S, and with 2Gb, 4Gb and 8Gb FICON and 2Gb, 4Gb and 8Gb Fibre
Channel connectivity, this platform delivers industry-leading I/O performance. VSP G1000 can have up to 24 front-
end directors with a total of 176 FICON ports. Each port can support more IOPS than a single zEnterprise FICON
Express8 channel can deliver. As a result, it is ideally suited for connectivity to the mainframe through a switched
FICON network.
Why Brocade Gen5 DCX 8510 Is the Best FICON Director
Emerging and evolving enterprise-critical workloads and higher-density virtualization are continuing to push the
limits of SAN infrastructures. This is even more true in a data center with IBM zEnterprise and its support for Microsoft® Windows® in the zEnterprise BladeCenter Extension (zBX). The Brocade Gen5 DCX 8510 family features
industry-leading 16Gb/sec performance, and 8.2Tb chassis bandwidth to address these next-generation I/O and
bandwidth-intensive application requirements. In addition, the Brocade Gen5 DCX 8510 provides unmatched slot-to-
slot and port performance, with 512Gb/sec bandwidth per slot (port card or blade). And this performance comes in
the most energy-efficient FICON director in the industry, using an average of less than 1 watt per Gb/sec, which is 15
times more efficient than competitive offerings.
The Brocade Gen5 DCX 8510 family enables high-speed replication and backup solutions over metro or WAN links with native Fibre Channel (10Gb/sec or 16Gb/sec). FCIP 1GbE or 10GbE extension support is optional. These solutions are accomplished by integrating extension technology via a blade (Brocade FX8-24) or a standalone switch (Brocade 7800).
Brocade Fabric Vision technology, an extension of Generation 5 Fibre Channel, has been introduced and qualified on
IBM System z with Brocade Fabric Operating System (FOS) 7.2. Fabric Vision provides a breakthrough hardware and
software diagnostic, monitoring and management solution that unleashes the full potential of high-density server and
storage virtualization, cloud architectures and next-generation storage.
Finally, this solution is accomplished with unsurpassed levels of reliability, availability and serviceability (RAS), based
upon more than 25 years of Brocade experience in the mainframe space. This experience includes defining FICON
standards and authoring or co-authoring many FICON patents.
An Ideal Pairing: Hitachi Virtual Storage Platform G1000 and Brocade Gen5 DCX 8510
The IBM zEnterprise architecture is the highest performing, most scalable, cost-effective, energy-efficient platform in
mainframe computing history. To get the most out of your investment in IBM zEnterprise, you need a storage infra-
structure, that is, a DASD platform and FICON director that can match the impressive capabilities of zEnterprise.
Hitachi Data Systems and Brocade, via VSP G1000 and Gen5 DCX 8510, offer the highest performing and most reli-
able, scalable, cost-effective and energy-efficient products in the storage and networking industry. The experience of
these 2 companies in the mainframe market, coupled with the capabilities of VSP G1000 and Gen5 DCX 8510, make
pairing them with IBM zEnterprise the ideal "best in industry" storage architecture for mainframe data centers.
Why IT Should Choose Networked Storage for FICON Over Direct-Attached Storage
Direct-attached FICON storage might appear to be a great way to take advantage of FICON technology. However, a
closer examination will show why a switched FICON architecture is a better, more robust design for enterprise data
centers than direct-attached FICON.
Technical Reasons for a Switched FICON Architecture
There are 5 key technical reasons for connecting storage control units using switched FICON:
■ Overcome buffer credit limitations on FICON Express8 channels.
■ Build fan-in, fan-out architecture designs for maximizing resource utilization.
■ Localize failures for improved availability.
■ Increase scalability and enable flexible connectivity for continued growth.
■ Leverage new FICON technologies.
FICON Channel Buffer Credits
When IBM introduced the availability of FICON Express8 channels, one very important change was the number of
buffer credits available on each port per 4-port FICON Express8 channel card. While FICON Express4 channels had
200 buffer credits per port on a 4-port FICON Express4 channel card, this changed to 40 buffer credits per port on
a FICON Express8 channel card. Organizations familiar with buffer credits will recall that the number of buffer credits
required for a given distance varies directly in a linear relationship with link speed. In other words, doubling the link
speed would double the number of buffer credits required to achieve the same performance at the same distance.
Also, organizations might recall the IBM System z10® Statement of Direction concerning buffer credits:
"The FICON Express4 features are intended to be the last features to support extended distance
without performance degradation. IBM intends to not offer FICON features with buffer credits for
performance at extended distances. Future FICON features are intended to support up to 10km
without performance degradation. Extended distance solutions may include FICON directors or
switches (for buffer credit provision) or dense wavelength division multiplexers [DWDM] (for buffer
credit simulation)."
IBM held true to its statement, and the 40 buffer credits per port on a FICON Express8/FICON Express8S channel
card can support up to 10km of distance for full-frame size I/Os (2KB frames). What happens if an organization has
I/Os with smaller than full-size frames? The distance supported by the 40 buffer credits would increase. It is likely that
at faster future link speeds, the distance supported will decrease to 5km or less.
A switched architecture allows organizations to overcome the buffer credit limitations on the FICON Express8/FICON
Express8S channel card. Depending upon the specific model, FICON directors and switches can have more than
1300 buffer credits available per port for long-distance connectivity.
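The distance arithmetic above can be sketched with a common planning rule of thumb: for full-size (~2KB) frames, roughly half a buffer credit is needed per kilometer per Gb/sec of link speed. That constant is an approximation used for capacity planning, not an exact protocol figure, but it reproduces the numbers cited in this section.

```python
# Rule-of-thumb sketch of buffer-credit requirements vs. distance and link speed.
# Assumes full-size (~2KB) frames; the 0.5 credits/km/Gbps constant is a
# planning approximation, not an exact figure from the FC standard.

CREDITS_PER_KM_PER_GBPS = 0.5

def credits_needed(distance_km: float, link_gbps: float) -> float:
    return distance_km * link_gbps * CREDITS_PER_KM_PER_GBPS

def max_distance_km(credits: int, link_gbps: float) -> float:
    return credits / (link_gbps * CREDITS_PER_KM_PER_GBPS)

print(max_distance_km(40, 8))   # FICON Express8/8S: 40 credits at 8Gb/sec -> 10.0 km
print(max_distance_km(40, 16))  # same 40 credits at 16Gb/sec -> 5.0 km
print(credits_needed(100, 16))  # 100km cascaded link at 16Gb/sec -> 800.0 credits
```

Note how the same 40 credits that cover 10km at 8Gb/sec cover only 5km at 16Gb/sec, and how a 100km link at 16Gb/sec needs several hundred credits, well beyond a channel card but comfortably within the 1300-plus credits a director port can supply.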
Fan-In, Fan-Out Architecture Designs
In the late 1990s, the open systems world started to implement Fibre Channel storage area networks (SANs) to over-
come the low utilization of resources inherent in a direct-attached storage architecture. SANs addressed this issue
through the use of fan-in and fan-out storage network designs. That is, multiple server host bus adapters (HBAs)
could be connected through a Fibre Channel switch to a single storage port: in other words, fan-in. Or, a single-server
HBA could be connected through a Fibre Channel switch to multiple storage ports: that is, fan-out. These same prin-
ciples apply to a FICON storage network.
As a general rule, FICON Express8 and FICON Express8S channels offer different levels of performance, in terms of
IOPS and bandwidth, than the storage host adapter ports to which they are connected. Therefore, a direct-attached
FICON storage architecture may see very low channel or storage port utilization rates. To overcome this issue, fan-in
and fan-out storage network designs are used.
A switched FICON architecture allows a single channel to fan out to multiple storage devices via switching, improv-
ing overall resource utilization. This capability can be especially valuable if an organization's environment has newer
FICON channels, such as FICON Express8 or Express8S, but older tape drive technology. Figure 1 illustrates how
a single FICON channel can concurrently keep several tape drives running at full-rated speeds. The actual fan-out
ratios for connectivity to tape drives will, of course, depend on the specific tape drive and control unit; however, it is
not unusual to see a FICON Express8 or Express8S channel fan-out from a switch to 5 to 6 tape drives (a 1:5 or 1:6
fan-out ratio). The same principles apply for fan-out to storage systems. The exact fan-out ratio is dependent on the
storage system model and host adapter capabilities for IOPS and/or bandwidth. On the other hand, several FICON
channels could be connected through a director or switch to a single storage port to maximize the port utilization and
increase overall I/O efficiency and throughput.
Figure 1. Switched FICON allows 1 channel to keep multiple tape drives fully utilized.
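A first-order estimate of a fan-out ratio like the 1:5 shown in Figure 1 can be made from bandwidth alone. The device throughput figures below are hypothetical examples for illustration, not vendor specifications; a real design must also account for IOPS limits and protocol overhead.

```python
# Sketch: estimating a channel-to-device fan-out ratio by bandwidth alone.
# Throughput figures are hypothetical illustrations, not vendor specs.

def fan_out_ratio(channel_mb_s: float, device_mb_s: float) -> int:
    """How many devices one channel can keep fully busy, by bandwidth."""
    return int(channel_mb_s // device_mb_s)

FICON_EXPRESS8S_MB_S = 800   # ~8Gb/sec link, ignoring protocol overhead
TAPE_DRIVE_MB_S = 160        # hypothetical native tape drive data rate

print(fan_out_ratio(FICON_EXPRESS8S_MB_S, TAPE_DRIVE_MB_S))  # -> 5, i.e. a 1:5 fan-out
```

The same arithmetic run in reverse (several channels fanning in to one storage port) explains how a switch raises utilization on the storage side as well.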
Keep Failures Localized
In a direct-attached architecture, a failure anywhere in the path renders both the channel interface and the control unit
port unusable. The failure could be of: an entire FICON channel card, a port on the channel card, the cable, the entire
storage host adapter card, or an individual port on the storage host adapter card. In other words, a failure on any of
these components will affect both the mainframe connection and the storage connection. A direct-attached architecture thus provides the worst possible reliability, availability and serviceability for FICON-attached storage.
With a switched architecture, failures are localized to only the affected FICON channel interface or control unit inter-
face, not both. The nonfailing side remains available, and if the storage side has not failed, other FICON channels can
still access that host adapter port via the switch or director (see Figure 2). This failure isolation, combined with fan-in
and fan-out architectures, allows for the most robust storage architectures, minimizing downtime and maximizing
availability.
Figure 2. A FICON director isolates faults and improves availability.
Scalable and Flexible Connectivity
Direct-attached FICON does not easily allow for dynamic growth and scalability, since a single FICON channel card
port is tied to a single dedicated storage host adapter port. In such an architecture, there is a 1:1 relationship (no
fan-in or fan-out). Since there is a finite number of FICON channels available (dependent on the mainframe model or
machine type), growth in a mainframe storage environment with such an architecture can pose a problem. What hap-
pens if an organization needs more FICON connectivity, but has run out of FICON channels? FICON switching and
proper usage of fan-in and fan-out in the storage architecture design will go a long way toward improving scalability.
In addition, best-practice storage architecture designs include room for growth. With a switched FICON architecture,
adding a new storage system or port in a storage system is much easier: Simply connect the new storage system
or port to the switch. This eliminates the need to open the channel cage in the mainframe to add new channel inter-
faces, reducing both capital and operational expenditures (capex and opex). This also gives managers more flexible
planning options when upgrades are necessary, since the urgency of upgrades is lessened.
What about the next generation of channels? The bandwidth capabilities of channels are growing at a much faster
rate than those of storage devices. As channel speeds increase, switches will allow data center managers to take
advantage of new technology as it becomes available, while protecting investments and minimizing costs.
Also, it is an IBM best-practice recommendation to use single-mode long-wave connections for FICON channels.
Storage vendors, however, often offer single-mode long-wave connections and multimode short-wave connec-
tions on their storage systems, allowing organizations to decide which to use. The organization makes the decision
based on the trade-off between cost and reliability. Some organizations' existing storage devices have a mix of
single-mode and multimode connections. Since they cannot directly connect a single-mode FICON channel to a
multimode storage host adapter, this could pose a problem. With a FICON director or switch in the path, however,
organizations do not need to change the storage host adapter ports to comply with the single-mode best-practice
recommendation for the FICON channels. The FICON switching device can have both types of connectivity. It can
have single-mode long-wave ports for attaching the FICON channels, and multimode short-wave ports for attaching
the storage.
Furthermore, FICON switching elements at 2 different locations can be interconnected by fiber at distances of 100km or more, creating a cascaded FICON switched architecture. This setup is typically used in disaster recovery
and business continuance architectures. As previously discussed, FICON switching allows resources to be shared.
With cascaded FICON switching, those resources can be shared between geographically separated locations,
allowing data to be replicated or tape backups to be made at the alternate site, away from the primary site, with no
performance loss. Often, workloads will be distributed such that both the local and remote sites are primary produc-
tion sites, and each site uses the other as its backup.
While the fiber itself is relatively inexpensive, laying new fiber may require an expensive construction project. While
DWDM can help get more out of fiber connections, inter-switch links with up to 16Gb/sec of bandwidth are offered
by switch vendors and can reduce the cost of or even eliminate the need for DWDM. FICON switches maximize utili-
zation of this valuable intersite fiber by allowing multiple environments to share the same fiber link. In addition, FICON
switching devices offer unique storage network management features, such as ISL trunking and preferred pathing,
which are not available with DWDM equipment.
FICON switches allow data center managers to further exploit intersite fiber sharing by enabling them to intermix
FICON and native Fibre Channel Protocol (FCP) traffic, which is known as Protocol Intermix Mode, or PIM. Even in
data centers where there is enough fiber to separate FICON and open systems traffic, preferred pathing features on
a FICON switch can be great cost savers. With preferred paths established, certain cross-site fiber can be allocated
for the mainframe environment, while other fiber can be allocated for open systems. The ISLs can be configured so that only in the event of an ISL failure are the links shared by both open systems and mainframe traffic.
Leverage New Technologies
Over the past 5 years, IBM has announced a series of technology enhancements that require the use of switched
FICON. These include:
■ N_Port ID virtualization (NPIV) support for z Linux.
■ FICON Dynamic Channel-Path Management (DCM).
■ z/OS FICON Discovery and Auto-Configuration (zDAC).
IBM announced support for NPIV on z Linux in 2005. Today, NPIV is supported on the System z9®, z10, z196, z114,
zEC12 and zBC12. Until NPIV was supported on System z, adoption of Linux on System z had been relatively slow.
NPIV allows for full support of LUN masking and zoning by virtualizing the Fibre Channel identifiers. This, in turn,
allows each Linux on System z image to appear as if it has its own individual HBA when those images are, in fact,
sharing FCP channels. Since IBM began supporting NPIV on System z, adoption of Linux on System z has grown
significantly. IBM believes approximately 24% of MIPS shipping on new zEnterprise processors are for Linux on
System z implementations. Implementation of NPIV on System z requires a switched architecture.
FICON DCM is another feature that requires a switched FICON architecture. FICON DCM provides the ability to have
System z automatically manage FICON I/O paths connected to storage systems in response to changing workload
demands. Use of FICON DCM helps simplify I/O configuration planning and definition, reduces the complexity of
managing I/O, dynamically balances I/O channel resources, and enhances availability. FICON DCM can best be sum-
marized as a feature that allows for more flexible channel configurations, by designating channels as "managed," and
proactive performance management. FICON DCM requires a switched FICON architecture because topology infor-
mation is communicated via the switch or director. The FICON switch must have a control unit port (CUP) license and
be configured or defined as a control unit in the hardware configuration definition (HCD).
z/OS FICON Discovery and Auto-Configuration (zDAC) is the latest technology enhancement for FICON. IBM intro-
duced zDAC as a follow-on to an earlier enhancement in which the FICON channels log into the Fibre Channel name
server on a FICON director. zDAC enables the automatic discovery and configuration of FICON-attached DASD and
tape devices. Essentially, zDAC automates a portion of the HCD Sysgen process. zDAC uses intelligent analysis to
help validate the System z and storage definitions' compatibility, and uses built-in best practices to help configure
for high availability and avoid single points of failure. zDAC is transparent to existing configurations and settings. It is
invoked and integrated with the z/OS HCD and z/OS Hardware Configuration Manager (HCM). zDAC also requires a
switched FICON architecture.
IBM also introduced support for transport-mode FICON (known as z High Performance FICON, or zHPF) in
October 2008 and announced enhancements in July 2011. While not required for zHPF, a switched architecture is
recommended.
Business Reasons for a Switched FICON Architecture
In addition to the technical reasons described earlier, the following business reasons support implementing a
switched FICON architecture:
■ Enable massive consolidation in order to reduce capex and opex.
■ Improve application performance at long distances.
■ Support growth and enable effective resource sharing.
Massive Consolidation
With NPIV support on System z, server and I/O consolidation is very compelling (see Figure 3). IBM undertook a
well-publicized project at its internal data centers (Project Big Green) and consolidated 3900 open systems servers
onto 30 System z mainframes running Linux. IBM's total cost of ownership (TCO) savings were calculated, taking into
account footprint reductions, power and cooling, and management simplification costs. The result was nearly 80%
TCO savings for a 5-year period. This scale of TCO savings is why 24% of new IBM mainframe processor shipments
are now being used for Linux.
Implementation of NPIV requires connectivity from the FICON (FCP) channel to a switching device (director or smaller
port-count switch) that supports NPIV. A special microcode load is installed on the FICON channel to enable it to
function as an FCP channel. NPIV allows the consolidation of up to 255 z Linux images ("servers") behind each FCP
channel, using 1 port on a channel card and 1 port on the attached switching device for connecting these virtual
servers. This enables massive consolidation of many HBAs, each attached to its own switch port in the SAN.
As a best practice, IBM currently recommends configuring no more than 32 Linux images per FCP channel. This
level of I/O consolidation was possible prior to NPIV support on System z. However, implementing LUN masking and
zoning in the same manner as with open systems servers, SAN and storage was not possible prior to the support for
NPIV with Linux on System z.
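The port-count arithmetic behind this consolidation can be sketched directly from the figures above: 255 images per FCP channel as the architectural limit, with 32 per channel as the recommended best practice. The guest count in the example is hypothetical.

```python
# Sketch: switch/channel ports needed with NPIV consolidation. The guest count
# is a hypothetical example; 32 images per FCP channel is the IBM best-practice
# figure cited above (255 is the architectural limit).

BEST_PRACTICE_IMAGES_PER_CHANNEL = 32

def channels_needed(linux_guests: int) -> int:
    """FCP channels (each using 1 channel port and 1 switch port) under the
    best-practice limit, via ceiling division."""
    return -(-linux_guests // BEST_PRACTICE_IMAGES_PER_CHANNEL)

guests = 256
print(channels_needed(guests))  # -> 8 FCP channels / switch ports with NPIV
print(guests)                   # -> 256 HBAs and switch ports without consolidation
```

In this hypothetical case, 256 virtual servers that would each need a dedicated HBA and switch port in a distributed architecture share just 8 NPIV-enabled FCP channels and switch ports.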
NPIV implementation on System z has also resulted in consolidation and adoption of a common SAN for distributed
or open systems (FCP) and mainframe (FICON), commonly known as protocol intermix mode (PIM). While IBM has
supported PIM in System z environments since 2003, adoption rates were low until NPIV implementations for Linux
on System z picked up with the introduction of System z10 in 2008. With z10, enhanced segregation and security
beyond simple zoning was possible through switch partitioning or virtual fabrics and logical switches. With 24% of
new mainframes being shipped for use with Linux on System z, it is safe to say that at least 19% of mainframe envi-
ronments are now running a shared PIM environment.
Leveraging enhancements in switching technology, performance and management, PIM users can now fully populate
the latest high-density directors with minimal or no oversubscription. They can use management capabilities, such
as virtual fabrics or logical switches to fully isolate open systems ports and FICON ports in the same physical direc-
tor chassis. Rather than having more partially populated switching platforms that are dedicated to either mainframe
(FICON) or open systems (FCP), PIM allows for consolidation onto fewer physical switching devices. It reduces man-
agement complexity and improves resource utilization. This, in turn, leads to lower operating costs, and a lower TCO
for the storage network. It also allows for a consolidated, simplified cabling infrastructure.
Figure 3. Organizations implement NPIV to consolidate I/O in z Linux environments.
Application Performance Over Distance
As previously discussed, the number of buffer credits per port on a 4-port FICON Express8 channel has been
reduced to 40, supporting up to 10km without performance degradation. What happens if an organization needs to
go beyond 10km for a direct-attached storage configuration? They will likely see performance degradation due to
insufficient buffer credits. Without a sufficient quantity of buffer credits, the "pipe" cannot be kept full with streaming
frames of data.
Switched FICON avoids this problem (see Figure 4). FICON directors and switches have a sufficient quantity of buffer
credits available on ports to allow them to stream frames at full-line performance rates with no bandwidth degrada-
tion. IT organizations that implement a cascaded FICON configuration between sites can, with the latest FICON
director platforms, stream frames at 16Gb/sec rates. And they experience no performance degradation for sites that
are 100km apart.
Switched FICON technology also allows organizations to take advantage of hardware-based FICON protocol accel-
eration or emulation techniques for tape (reads and writes). This emulation technology is available on standalone
extension switches or on a blade in FICON directors. It allows the z/OS-initiated channel programs to be acknowl-
edged locally at each site and avoids the back-and-forth protocol handshakes that normally travel between remote
sites. It also reduces the impact of latency on application performance and delivers local-like performance over unlim-
ited distances. In addition, this acceleration or emulation technology optimizes bandwidth utilization.
Why is bandwidth efficiency so important? It is typically the most expensive budget component in an organization's
multisite disaster recovery or business continuity architecture. Anything that can be done to improve the utilization
and/or reduce the bandwidth requirements between sites would likely lead to significant TCO savings.
Figure 4. Switched FICON with emulation allows optimized performance and bandwidth utilization over extended
distance.
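The latency benefit of local acknowledgment can be sketched with simple propagation arithmetic: light travels through fiber at roughly 5 microseconds per kilometer each way, so every protocol round trip across the link adds delay in proportion to distance. The round-trip counts below are hypothetical, chosen only to illustrate the effect.

```python
# Sketch: protocol-handshake latency over distance, and why emulation
# (local acknowledgment) helps. Round-trip counts are hypothetical.

FIBER_US_PER_KM = 5.0  # approximate one-way propagation delay in fiber

def handshake_delay_ms(distance_km: float, round_trips: int) -> float:
    """Pure protocol latency for a channel program needing N round trips."""
    return 2 * distance_km * FIBER_US_PER_KM * round_trips / 1000.0

# A tape channel program needing, say, 6 cross-site round trips at 100km:
print(handshake_delay_ms(100, 6))  # -> 6.0 ms of protocol latency per operation
print(handshake_delay_ms(100, 1))  # -> 1.0 ms if emulation collapses it to one exchange
```

Because the delay scales with both distance and round-trip count, collapsing the handshakes locally is what makes "local-like" performance possible over long links.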
Enable Growth and Resource Sharing
Direct-attached storage forces a 1:1 relationship between host connectivity and storage connectivity. In other words,
each storage port on a storage system host adapter requires its own physical port connection on a FICON Express8
channel card. These channel cards are very expensive on a per-port basis: typically 4 to 6 times the cost of
a FICON director port. Also, there is a finite number of FICON Express8S channels available on a zEnterprise (a maxi-
mum of 320), as well as a finite number of host adapter ports in the storage system. If an organization has a large
configuration and a direct-attached FICON storage architecture, how does it plan to scale its environment? What
happens if an organization acquires a company and needs additional channel ports? A switched FICON infrastructure
allows cost-effective, seamless expansion to meet growth requirements.
Direct-attached FICON storage also typically results in underutilized host channel card ports and host adapter ports
in storage systems. FICON Express8 and FICON Express8S channels can comfortably perform at high channel
utilization rates, yet a direct-attached storage architecture typically sees channel utilization rates of 10% or less. As
illustrated in Figure 5, leveraging FICON directors or switches allows organizations to maximize channel utilization.
Figure 5. Switched FICON drives improved channel utilization, while preserving CHPIDs for growth.
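The consolidation arithmetic is straightforward. The sketch below uses hypothetical numbers (40 channels at 10% utilization, a 50% target) purely to illustrate the fan-in effect; it is not a sizing tool.

```python
import math

# Hypothetical fan-in arithmetic: N direct-attached channels observed at low
# utilization can be consolidated behind a switch onto fewer channels running
# at a healthier target utilization. Numbers are illustrative only.

def channels_after_fan_in(direct_channels: int,
                          observed_util: float,
                          target_util: float) -> int:
    aggregate_load = direct_channels * observed_util  # in "fully busy channel" units
    return math.ceil(aggregate_load / target_util)

# 40 CHPIDs at 10% busy carry the load of 4 fully busy channels;
# at a 50% utilization target, 8 switched channels suffice.
print(channels_after_fan_in(40, 0.10, 0.50))
```

In this hypothetical case, fan-in frees 32 CHPIDs for growth while the remaining channels run at a healthy, but not saturated, utilization.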
It also is very important to keep traffic for tape drives streaming, and to avoid stopping and starting the tape drives.
These actions lead to unwanted wear and tear of tape heads, cartridges and the tape media itself. Using FICON
acceleration or emulation techniques, as described earlier, streaming can be accomplished with a configuration simi-
lar to the one shown in Figure 6. Such a configuration requires solid analysis and planning, but it will pay dividends for
an organization's FICON tape environment.
Figure 6. A well-planned configuration can maximize CHPID capacity utilization for FICON tape efficiency.
Finally, switches facilitate fan-in, which allows different hosts and LPARs whose I/O subsystems are not shared to
share the same assets. While some benefits may be realized immediately, the potential for value in future equipment
planning can be even greater. With the ability to share assets, equipment that would be too expensive for a single
environment can be deployed in a cost-saving manner. The most common example is to replace tape farms with
virtual tape systems. By reducing the number of individual tape drives, maintenance (service contracts), floor space,
power, tape handling and cooling costs are reduced. Virtual tape also improves reliable data recovery, allows for sig-
nificantly shorter recovery time objectives (RTO) and nearer recovery point objectives (RPO), and offers features such
as peer-to-peer copies. However, without the ability to share these systems, it may be difficult to amass sufficient
cost savings to justify the initial cost of virtual tape. And the only practical way to share these standalone tape sys-
tems or tape libraries is through a switch.
With disk storage systems, in addition to sharing the asset, it is sometimes desirable to share the data across mul-
tiple systems. The port limitations on a storage system may prohibit or limit this capability using direct-attached
(point-to-point) FICON channels. Again, the switch can provide a solution to this issue.
Even when there is no need to share devices during normal production, this capability can be very valuable in the
event of a failure. Data sets stored on tape can quickly be read by CPUs picking up workload that is already attached
to the same switch as the tape drives. Similarly, data stored on a storage system can be available as soon as a fault
is determined.
Switch features, such as preconfigured port prohibit or allow matrix tables, can ensure that access intended only for a
disaster scenario is prohibited during normal production.
Why Switched FICON: Summary
Direct-attached FICON might appear to be a great way to take advantage of FICON technology's advances over
ESCON. However, a closer examination shows that switched FICON, similar to switched ESCON, is a better, more
robust architecture for enterprise data centers. Switched FICON offers:
■■ Better utilization of host channels and their performance capabilities.
■■ Scalability to meet growth requirements.
■■ Improved reliability, problem isolation and availability.
■■ Flexible connectivity to support evolving infrastructures.
■■ More robust business continuity implementations via cascaded FICON.
■■ Improved distance connectivity, with improved performance over extended distances.
■■ New mainframe I/O technology enhancements such as NPIV, FICON DCM, zDAC and zHPF.
Switched FICON also provides many business advantages and potential cost savings, including:
■■ The ability to perform massive server, I/O and SAN consolidation, dramatically reducing capex and opex.
■■ Local-like application performance over any distance, allowing host and storage resources to reside wherever busi-
ness dictates.
■■ More effective resource sharing, improved utilization, reduced costs and improved recovery time.
Usage of Linux on System z continues to grow, and NPIV implementations and PIM SAN architectures offer clear
cost advantages. As a result, direct-attached storage in a mainframe environment is becoming a thing of the past.
Investments made in switches for disaster recovery and business continuance are likely to pay the largest dividends:
Having access to alternative resources and multiple paths to those resources can result in significant savings in the
event of a failure. The advantages of a switched FICON infrastructure are simply too great to ignore.
Hitachi Virtual Storage Platform G1000
Hitachi Data Systems has over 25 years of experience supporting IBM mainframe environments. A large portion of
the installed base of Hitachi storage systems connects to IBM z/OS and S/390® mainframes via ESCON and FICON
networks.
Hitachi Virtual Storage Platform G1000 builds on this experience and introduces new features and packaging to
improve performance while lowering TCO. It features lower power and cooling requirements, high-density packaging
based on industry-standard 19-inch racks and faster microprocessors. It also offers a choice of disk drive types,
including solid-state disk (SSD), Hitachi Accelerated Flash storage, serial attached SCSI (SAS) and nearline SAS.
This storage platform provides an industry-leading, reliable and highly available storage system for mainframes in IBM
z/OS environments: It supports z/OS, z/VSE, z/VM and z/TPF for zEnterprise.
Many organizations are considering the benefits of or running Linux on IBM zEnterprise processors. VSP G1000
supports this capability for both CKD and FBA disk formats and provides a solid foundation for implementing private
clouds. Hitachi has implemented key performance features in support of these operating systems running on zEnter-
prise, including PAV, HyperPAV, z/HPF, Multiple Allegiance, MIDAW and priority I/O queuing. It also provides a unique
mainframe storage management solution to deliver functionally compatible extended address volumes (EAV) for z/OS,
data volume expansion (DVE), and IBM FlashCopy® SE (with space efficiency capability).
VSP G1000 is designed to be highly available and resilient. All critical components are implemented
in pairs. If a component fails, the paired component can take over the workload without an outage.
With its support of multiple RAID configurations, an organization's data is protected in the event of a
disk drive problem. Additionally, VSP G1000 offers industry-leading replication software: Its support
of FlashCopy and FlashCopy SE provides compatible point-in-time copy functionality, while Hitachi
TrueCopy provides the functionality of IBM z/OS Metro Mirror. VSP G1000 also replicates to remote
locations using Hitachi Universal Replicator (HUR), allowing copies of data to be maintained locally
and at remote locations. This practice ensures data availability in case the primary copy becomes
unusable or is not accessible.
Scalability
Hitachi Virtual Storage Platform G1000 can scale to provide increased performance, capacity, throughput and con-
nectivity while dynamically combining multiple units into a single logical system with shared resources. It can also
virtualize new and existing external storage systems. This scaling means that VSP G1000 can grow nondisruptively
to meet changing needs within the data center. It minimizes outages to extend the platform and enhance functional-
ity while providing flexibility in the configuration and choice of disk technology to meet the specific needs of each
environment.
VSP G1000 controller-based storage virtualization enables connectivity to and virtualization of external storage, which
can potentially extend the life of existing storage assets and reduce costs.
Virtualizing external storage:
■■ Enables the reuse of existing or legacy assets for less critical or less frequently accessed data.
■■ Simplifies management of external storage with common management and data protection for internal and external
storage.
■■ Supports the reuse of existing or legacy assets across data centers within a metro area network distance and
across global distances with the replication capabilities of the scale-up storage system.
Performance
Hitachi Virtual Storage Platform G1000 ushers in a new level of I/O throughput, response and scalability. It supports
8Gb FICON (FICON Express8 and FICON Express8S) and enables a single VSP G1000 FICON 8Gb port to handle
higher traffic rates than can be delivered by a single zEnterprise FICON Express8 or FICON Express8S channel. This
storage networking is critical to optimizing performance and maximizing throughput in mainframe environments.
IBM 3390 and FICON Support
This industry-leading storage system provides 3390 disk drive support through emulation across a variety of disk
drive types to meet the performance and capacity needs of mainframe environments. The platform supports SSD
flash drives, providing ultra-high-speed response in 200GB and 400GB capacities; Hitachi Accelerated Flash (HAF)
drives, providing 1.6TB and 3.2TB flash module drives (FMDs); 2.5-inch SAS drives; and nearline SAS drives. It can
control up to 65,280 logical volumes and provides an internal physical disk capacity of approximately 2.5PB per stor-
age system. With externally attached storage, Hitachi Virtual Storage Platform G1000 can support up to 255PB of
storage capacity.
VSP G1000 supports 8Gb/sec FICON (FICON Express8 and FICON Express8S) across all front-end ports for con-
nectivity to the mainframe and 8Gb/sec Fibre Channel for connecting external storage. VSP G1000 supports
high-performance FICON (z/HPF) for z/OS. On the back end, it supports SAS, SATA, nearline SAS, SSD and
HAF drives, which are connected using the SAS-2 protocol with 6Gb/sec connectivity per back-end port.
Hitachi Dynamic Provisioning
Hitachi Dynamic Provisioning (HDP) for Mainframe optimizes performance through extremely wide striping and more
effective use of storage through thin provisioning (see Figure 7). In other words, it allocates storage to an application
without actually mapping the corresponding physical storage until it is used. This separation of allocation from physi-
cal mapping results in more effective use of physical storage with higher overall performance and rates of storage
utilization. Dynamic Provisioning also enables Dynamic Volume Expansion (DVE) of 3390 volumes and FlashCopy SE
for more efficient use of storage when creating local copies.
Figure 7. Hitachi Dynamic Provisioning for Mainframe optimizes performance.
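The allocate-on-write idea behind thin provisioning can be sketched in a few lines. The toy model below is purely conceptual: the page structures, pool size and volume name are invented for illustration and do not represent Hitachi internals.

```python
# Toy model of thin provisioning (allocate-on-write): a physical pool page is
# mapped only on the first write to a virtual page, so a large virtual volume
# consumes physical capacity only for the data it actually holds.
# All names and structures here are illustrative, not HDP internals.

class ThinPool:
    def __init__(self, physical_pages: int):
        self.free = list(range(physical_pages))  # unmapped pool pages
        self.page_map = {}                       # (volume, vpage) -> pool page

    def write(self, volume: str, vpage: int) -> int:
        """Map a physical page only on the first write to a virtual page."""
        key = (volume, vpage)
        if key not in self.page_map:
            self.page_map[key] = self.free.pop(0)
        return self.page_map[key]

    def used_pages(self) -> int:
        return len(self.page_map)

pool = ThinPool(physical_pages=1000)
for vp in (0, 1, 0, 7):          # a large virtual volume touching only 3 pages
    pool.write("VOL001", vp)
print(pool.used_pages())         # only 3 physical pages consumed
```

Separating the virtual address map from physical allocation in this way is also what makes wide striping possible: mapped pages can land anywhere in the pool, spreading I/O across many drives.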
Hitachi Dynamic Tiering
Hitachi Dynamic Tiering (HDT) for Mainframe, an extension of HDP for Mainframe, enables the automatic movement
of data between tiers. HDT for Mainframe provides an additional level of automated, optimized storage management
by managing data across a full range of storage tiers from high-performance storage to low-cost storage. HDT for
Mainframe moves highly accessed blocks of data to the highest tier storage and migrates less frequently accessed
data to the lowest tiers according to simple policies. With HDT for Mainframe, there can be up to 3 storage tiers,
ranging from high-performance flash storage to low-cost storage, such as nearline SAS or virtualized external storage,
in the same storage pool. Tier creation is automatic, based on configuration policies, including media type and speed,
RAID level and sustained I/O level requirements.
This solution significantly reduces the time storage administrators must spend analyzing storage usage and managing
the movement of data to optimize performance. It complements existing mainframe storage provisioning processes,
such as DFSMS: Existing SMS storage groups and ACS routines can be aligned to differently tiered storage pools,
with pages of data moved to the appropriate tier when needed rather than moving entire data sets.
Hitachi is the 1st storage vendor to support virtualization of external storage behind an enterprise storage platform.
HDT provides the ability for the system to automatically move blocks of data within data sets to the most appropriate
class of storage, based on performance and access requirements.
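The placement decision at the heart of such tiering can be sketched as a simple policy function. The tier names and I/O thresholds below are invented for illustration; actual HDT policies are configuration-driven.

```python
# Minimal sketch of policy-based tier placement: frequently accessed pages go
# to the fastest tier, cold pages to the cheapest. Tier names and thresholds
# are invented for this example; HDT's real policies are configurable.

TIERS = ["flash", "sas", "nearline_sas"]        # tier 0 = fastest

def choose_tier(ios_per_hour: float,
                hot_threshold: float = 100.0,
                cold_threshold: float = 5.0) -> str:
    if ios_per_hour >= hot_threshold:
        return TIERS[0]                         # promote hot pages
    if ios_per_hour <= cold_threshold:
        return TIERS[-1]                        # demote cold pages
    return TIERS[1]

activity = {"page-17": 450.0, "page-42": 30.0, "page-99": 0.5}
print({page: choose_tier(rate) for page, rate in activity.items()})
```

The key point the sketch makes is granularity: the decision is per page of data, not per volume or data set, which is why only the hot fraction of a data set needs to occupy flash.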
Hitachi Tiered Storage Manager for Mainframe
Hitachi Tiered Storage Manager for Mainframe (HTSM) is a z/OS software management product for Hitachi Dynamic
Tiering for Mainframe that enables a user to control service levels based on performance and/or time, facilitating the
meeting of mainframe SLAs. HDT policies managed through HTSM can hold selected data on a specified tier,
regardless of access patterns. The policies can also be defined to migrate selected data to a higher or
lower tier on a predictable, scheduled basis.
HTSM enables Dynamic Tiering and Dynamic Tiering policies to be managed from the mainframe at the volume level
or through DFSMS tools and constructs with ISPF or, optionally, REXX scripts.
Hitachi Remote Replication
Business continuity is more important than ever in today's business environment, as demonstrated by the natural
disasters and the physical intrusion and destruction of IT resources of the last few years. A loss of business-critical
data can force a company to its knees, or even into bankruptcy. In addition, regulatory compliance requires a
business continuity and disaster recovery plan, and an infrastructure to support that plan; organizations that lack one
face stiff fines and business restrictions. Hitachi remote replication offerings provide the ability to copy critical data to off-site facilities
either within a metropolitan area and/or to distant remote locations. The combination of the enterprise-level Hitachi
Virtual Storage Platform G1000 with Brocade's solutions to extend and optimize fabric connectivity facilitates the
movement of your business-critical data over longer distances. Together, they enable and enhance your ability to sup-
port business continuity and disaster recovery.
Hitachi TrueCopy
Hitachi TrueCopy synchronous software provides a continuous, nondisruptive, host-independent remote data repli-
cation solution for disaster recovery or data migration over distances within the same metropolitan area. It provides
a no-data-loss, rapid-restart solution (see Figure 8). For enterprise environments, TrueCopy synchronous software
combined with Hitachi Universal Replicator on VSP G1000 allows for advanced 3-data-center configurations. This
combination includes consistency across up to 12 storage systems in each site for optimal data protection.
Figure 8. Hitachi TrueCopy synchronous supports business continuity and disaster recovery efforts.
TrueCopy synchronous supports business continuity and disaster recovery efforts, improving business resilience. It
improves service levels by reducing planned and unplanned downtime of customer-facing applications. It enables fre-
quent, nondisruptive disaster recovery testing with an online copy of current and accurate production data. TrueCopy
synchronous can be seamlessly integrated into existing z/OS environments and controlled with familiar PPRC com-
mands or with Hitachi Business Continuity Manager software.
Hitachi Universal Replicator
Hitachi Universal Replicator provides asynchronous data replication across any distance for both internal VSP G1000
storage and external storage managed by VSP G1000 (see Figure 9). Universal Replicator provides enterprise-class
performance associated with storage system-based replication. At the same time, it provides resilient business conti-
nuity without the need for remote host involvement, or redundant servers or replication appliances.
Universal Replicator maintains the integrity of replicated copies without impacting processing, even when replication
network outages occur or optimal bandwidth is not available. When compared to traditional methods of storage-
system-based replication, Universal Replicator leverages performance-optimized disk-based journals, resulting in
significantly reduced cache utilization and increased bandwidth utilization.
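The journal mechanism can be modeled conceptually in a few lines. The sketch below is an illustration of the general journal-based asynchronous replication idea, not HUR's internals: host writes complete once journaled locally, and the remote side pulls and applies entries strictly in sequence order, so consistency survives link slowdowns.

```python
from collections import deque

# Conceptual model of journal-based asynchronous replication: host writes are
# acknowledged once written to a local journal; the remote system pulls and
# applies journal entries strictly in sequence order. Structures here are
# illustrative only, not Universal Replicator internals.

class JournaledReplica:
    def __init__(self):
        self.journal = deque()   # disk-based journal at the primary site
        self.seq = 0
        self.remote = {}         # remote replica contents

    def host_write(self, block: int, data: str) -> None:
        self.seq += 1
        self.journal.append((self.seq, block, data))  # host I/O completes here

    def remote_pull(self, max_entries: int) -> None:
        for _ in range(min(max_entries, len(self.journal))):
            _, block, data = self.journal.popleft()   # apply in sequence order
            self.remote[block] = data

r = JournaledReplica()
r.host_write(1, "A"); r.host_write(2, "B"); r.host_write(1, "A2")
r.remote_pull(2)                         # a slow link drains only part
print(r.remote, "journal backlog:", len(r.journal))
```

Because unsent writes accumulate in the journal rather than in cache, a network outage inflates the journal backlog instead of stalling host I/O, which is the behavior the paragraph above describes.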
Universal Replicator ensures availability of up-to-date copies of data in up to 3 dispersed locations by leveraging the
synchronous capabilities of Hitachi TrueCopy synchronous. In the event of a disaster at the primary data center, the
delta resync feature of Universal Replicator enables fast failover and restart of the application without loss of data,
whether at the local or remote data center.
Figure 9. Hitachi Universal Replicator ensures availability of current copies of data in up to 3 dispersed locations.
Universal Replicator can be integrated into an IBM GDPS® environment, providing a much more cost-effective and
complete recovery solution than the IBM alternative of z/OS Global Mirror (XRC). With Universal Replicator and
TrueCopy synchronous support of a 3-data-center replication solution, VSP G1000 supports delta resync, which is
similar to but more efficient than z/OS Metro or Global Mirror incremental resync.
VSP G1000 also supports IBM z/OS Basic HyperSwap®, which is enabled by IBM Tivoli Storage Productivity Center
for Replication for System z Basic Edition (TPC-R). TPC-R enables the administrator to develop a z/OS Basic
HyperSwap configuration using VSP G1000, creating either a 2-data-center configuration with TrueCopy
synchronous or a 3-data-center configuration with TrueCopy synchronous, Universal Replicator and Business
Continuity Manager. Initially, VSP G1000 will support up to 12x12 (2DC) and 12x12x12 (3DC) systems at each site¹.
VSP G1000 with Universal Replicator will support a 4-data-center configuration and allow you to have 2 long asyn-
chronous data paths and 2 synchronous paths. This solution offers you the ability to create multiple copies of data in
many locations and reduce the impact of data migration.
Hitachi Business Continuity Manager
Hitachi Business Continuity Manager enables centralized, enterprise-wide replication management for IBM z/OS
mainframe environments. Through a single, consistent interface based on Time Sharing Option/Interactive System
Productivity Facility (TSO/ISPF), it uses full-screen panels to automate Hitachi Universal Replicator, Hitachi TrueCopy
synchronous (including multisite topologies) and in-system Hitachi ShadowImage Replication software operations.
This software feature automates complex disaster recovery and planned outage functions, resulting in reduced recov-
ery times. It also enables advanced, 3-data-center disaster recovery configurations and extended consistency group
capabilities. Business Continuity Manager provides built-in capabilities for monitoring and managing critical perfor-
mance metrics and thresholds for proactive problem avoidance. It also delivers autodiscovery of enterprise-wide
storage configuration and replication objects, eliminating tedious, error-prone data entry that can cause outages.
Business Continuity Manager integrates with the Hitachi replication management framework, Hitachi Replication
Manager software, for replication monitoring and continuous operations in mainframe (and open system)
environments.
Multiplatform Support
Hitachi Virtual Storage Platform G1000 can support multiple operating systems at the same time. Although many
mainframe organizations have been reluctant to share their storage platforms with open systems servers, the need
to share storage is becoming more important as organizations implement Linux on System z. In addition, the
introduction of IBM zEnterprise BladeCenter Extension (zBX) for mainframe processors enables Microsoft Windows
to operate as part of zEnterprise servers. VSP G1000 can be configured to facilitate the isolation of disparate types
of data, and it easily supports organizations implementing private as well as some public clouds on zEnterprise
servers using either Linux on System z or Windows on zBX. Additionally, the FICON and Fibre Channel ports are
completely separate and help ensure that critical mainframe data cannot be accessed directly by open systems serv-
ers or clients.
Cost-Savings Efficiencies
This storage system is designed to lower TCO wherever possible. The physical packaging has been designed to
use standard-size racks and chassis. The internal layout supports front-to-back airflow to facilitate the use of hot
and cold aisles and maximize the efficiency of data center cooling. In combination with very fast processors, denser
packaging and smaller batteries, the reduced physical floor space and heating and cooling requirements result in
very low power per square foot (kVA/sq ft). Opex is lower than in previous systems thanks to denser packaging,
blade architecture, low-power memory, small-form-factor disks, SSD and flash-protected cache with its smaller
batteries. Hitachi Data Systems is committed to continuing to deliver more efficient packaging, resulting in more
sustainable products.

¹ Check with your HDS representative for currently supported configurations.
Brocade Gen5 DCX 8510 in Mainframe Environments
Now on its 5th generation (1G, 2G, 4G, 8G and 16G) of switching technology (Gen5), Brocade has experience to
rely on. The company has been in the mainframe storage networking business for more than 20 years, going back
to the parallel channel extension technology of the late 1980s. Brocade has a history of thought leadership: It holds 4
of its own FICON patents, as well as 5 FICON joint patents with IBM on technologies, such as the FICON bridge card
and control unit port (CUP). Brocade helped IBM develop Fibre Connection (FICON), and in 2000 the 1st certified IBM
FICON network infrastructure, using 1Gb/sec ED5000 Directors, was deployed. Brocade has the only FICON archi-
tecture certification program (BCAF) in the industry. Brocade manufactured the 9032-5 ESCON director for IBM, and
pioneered ESCON channel extension emulation technology. Brocade has continued its heritage of mainframe storage
networking thought leadership with 9 generations of FICON directors. These products include the current industry-
leading FICON directors, such as the DCX and DCX 8510, and FICON channel extension, such as the Brocade 7800
and FX8-24 extension blade.
Reliability, Availability and Serviceability
The largest corporations in the world literally run their businesses on mainframes. Government institutions in many
countries worldwide also rely on the mainframe for their critical computing needs. RAS qualities for these mission-
critical environments are of the utmost importance. Mainframe practitioners in these organizations avoid risk at all
costs. They never want to suffer an unscheduled outage, and they want to minimize, if not outright eliminate,
scheduled or planned outages. Mainframes such as the IBM zEnterprise have historically been the rock-solid pillar
of computing RAS, and mainframe practitioners have a history of creating I/O infrastructures with "5 nines"
availability. For FICON channel connectivity to mainframe-attached storage, these same organizations require a
FICON director platform that offers the same levels of RAS as the mainframe itself. The Brocade Gen5 DCX 8510
is the ideal FICON director for these RAS requirements.
The Brocade Gen5 DCX 8510 FICON Director features a modular, high-availability architecture that supports these
mission-critical mainframe environments. The Brocade Gen5 DCX 8510 chassis has been engineered from incep-
tion for "5 nines" of availability by providing multiple fans (supporting hot aisle-cool aisle), multiple fan connectors,
dual-core blade internal connectivity, dual-control processors, dual power supplies, a passive backplane and dual-I/O
timing clocks. These features and the switching design of the Brocade Gen5 DCX 8510 result in industry-leading
mean time between failures (MTBF) and mean time to recovery or repair (MTTR) numbers. In a recent study with a
sample size of 26,593 Brocade products, the average yearly downtime was 0.53 minutes per year, for an availability
rate of 99.99984%. It is this kind of availability that consistently leads OEM partners such as Hitachi Data Systems to
praise Brocade products for their quality.
Proactive Performance Management
Brocade Fabric Vision was developed as part of a continuing effort to improve overall application availability and
reduce complexity. It is an optional licensed suite of monitoring features in the Fabric Operating System (FOS) that
runs on the Fibre Channel switches and directors. Brocade Network Advisor, an easy-to-use interface, provides users
with a set of tools to display and manage the Fabric Vision features. All Fabric Vision features can be displayed on the
Network Advisor dashboard. Brocade Fabric Vision requires FOS 7.2.0d for System z environments, and is managed
with Brocade Network Advisor 12.1.3.
Brocade Fabric Vision technology provides a breakthrough hardware and software solution that maximizes uptime,
simplifies FICON SAN management and provides unprecedented visibility and insight across the storage network.
Offering innovative diagnostic, monitoring and management capabilities, Fabric Vision technology helps administra-
tors avoid problems, maximize application performance and reduce operational costs. IT organizations with large,
complex or highly virtualized data center environments often require advanced tools to help them more effectively
monitor and manage their storage infrastructure. Developed with these IT organizations in mind, Fabric Vision tech-
nology also includes several breakthrough diagnostic, monitoring and management capabilities. These capabilities
dramatically simplify day-to-day FICON SAN administration and provide unprecedented visibility across the storage
network.
Fabric Vision technology is tightly integrated with Brocade Network Advisor, providing customizable health and
performance dashboard views. These views help to pinpoint problems faster, simplify SAN configuration and man-
agement, and reduce operational costs. Through Brocade Network Advisor, administrators can:
■■ Quickly and easily configure and monitor data center fabrics based on Brocade's Monitoring and Alerting Policy
Suite (MAPS) groups and policies.
■■ Identify, monitor and analyze data and application flows to maximize performance.
■■ Reduce time spent on repetitive tasks by deploying MAPS policies and rules across the fabric, or multiple fabrics,
from a single dialog.
■■ Run diagnostic tests on optics and cables to quickly identify and isolate potential fabric issues.
■■ Automatically monitor and detect network congestion in the fabric, and identify which devices or hosts are
impacted by a bottlenecked port.
Scalability
With the advent of the zBX and the zEnterprise Unified Resource Manager, private cloud computing centered on the
IBM zEnterprise has emerged as a "hot topic." Cloud computing requires a highly scalable (hyper-scale) storage net-
working architecture to support it. Hyper-Scale Inter-Chassis Link (ICL) is a unique Brocade Gen5 DCX 8510 feature
that provides connectivity among 2 or more Brocade 8510-4 or 8510-8 chassis. This is the 2nd generation of ICL
technology from Brocade, using optical QSFP (quad small form-factor pluggable) connections; the 1st generation
used a copper connector.
Each ICL connects the core routing blades of two 8510 chassis and provides up to 64Gb/sec of throughput within a
single cable. The Brocade 8510-8 allows up to 32 QSFP ports, and the 8510-4 allows up to 16 QSFP ports to help
preserve switch ports for end devices.
This 2nd generation of Brocade optical ICL technology, based on QSFP technology, provides a number of benefits
to the organization. Brocade has improved ICL connectivity over the use of copper connectors by upgrading to an
optical form factor. With this improvement, Brocade has also increased the distance of the connection from 2 meters
to 50 meters. QSFP combines 4 cables into 1 cable per port, significantly reducing the number of ISL cables the
customer needs to run. Since the QSFP connections reside on the core blades within each 8510, they do not use up
connections on the slot line cards. This improvement frees up to 33% of the available ports for additional server and
storage connectivity.
Dual-chassis backbone topologies connected through low-latency ICL connections are ideal in a FICON environment.
The majority of FICON installations have switches that are connected in dual or triangular topologies, using ISLs to
meet the FICON requirement for low latency between switches. New 64Gb/sec QSFP-based ICLs enable simpler,
flatter, low-latency chassis topologies spanning a distance of up to 50 meters with off-the-shelf cables. They reduce
interswitch cables by 75% and preserve 33% of front-end ports for servers and storage, leading to fewer cables and
more usable ports in a smaller footprint.
Pair the 2 Platforms Together
Traditional (z/OS) Mainframe Environments
In a "traditional" z/OS mainframe environment, RAS and performance are the key concerns for most organizations.
These characteristics provide the stability for the mainframe-based applications on which the largest companies in
the world run their businesses. Dr. Thomas E. Bell, winner of the Computer Measurement Group (CMG) Michelson
Award for lifetime achievement in the computer performance field, once famously commented that "all CPUs wait at
the same speed." Likewise, Dr. Steve Guendert, a CMG board member, has commented in his blog that "the IBM
zEnterprise is a hungry machine, and its users need to feed the I/O beast." Response time means money in these
environments: The ability to process transactions more rapidly provides companies a competitive advantage in
today's financial industry. Hitachi Virtual Storage Platform G1000 and Brocade DCX 8510, together, make sure the
"I/O beast" is fed.
Linux on the Mainframe
A 2011 IDC report indicated that of all the mainframes being shipped, approximately 19% of the processing power
is intended for Linux. And IBM has been quoted as saying that 32% of IBM's zEnterprise installed base is running
integrated facility for Linux (IFL) specialty engines. Regardless of whether Linux is running as a guest under z/VM
or natively in an LPAR, it is an important trend that cannot be ignored. This trend has been growing since the 2005
introduction of support for NPIV on System z. IT organizations are realizing significant cost savings by moving to
Linux on System z, in terms of hardware acquisition, software licensing, and operational costs such as power and
cooling. Hitachi VSP G1000 and Brocade Gen5 DCX 8510 are
the ideal choice for these Linux environments. VSP G1000 offers very powerful virtualization, support for NPIV and
both Dynamic Provisioning and Dynamic Tiering. Brocade Gen5 DCX 8510 offers full support for NPIV, and its Virtual
Fabrics functionality allows for highly secure separation of the z/OS data traffic from the Linux traffic on the FICON
director.
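The Virtual Fabrics separation described above can be sketched with a few Brocade Fabric OS CLI commands. This is an illustrative configuration fragment only: the fabric IDs (FIDs 10 and 20) and slot/port numbers are hypothetical, and exact options vary by Fabric OS release.

```shell
# Enable the Virtual Fabrics feature on the chassis (disruptive; requires a reboot).
fosconfig --enable vf

# Create one logical switch for z/OS FICON traffic and another for Linux FCP traffic.
lscfg --create 10
lscfg --create 20

# Assign ports to each logical switch (slot and port ranges are examples).
lscfg --config 10 -slot 1 -port 0-7    # FICON channel and control-unit ports
lscfg --config 20 -slot 2 -port 0-7    # FCP ports for Linux on System z

# Switch the CLI context to a logical switch to administer it.
setcontext 10
```

Because each logical switch forms its own fabric with its own configuration and zoning, the z/OS and Linux traffic remain isolated even though they share one physical director.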
FICON and FCP Intermix
FICON and FCP Intermix, also known as protocol intermix mode (PIM), is another growing trend in mainframe environments. Linux on System z has been the major driver of this trend, since it often leads mainframe end users to run both FCP channels and FICON channels on the same machine. IBM's recent announcement and general availability of support for Windows blade servers on the zEnterprise BladeCenter Extension (zBX) is likely to drive even broader acceptance of PIM as a storage networking architecture. The virtualization, performance, scalability and tiering capabilities of Hitachi VSP G1000 make it an ideal disk storage platform for a PIM storage architecture. The performance and Virtual Fabrics capabilities of the DCX 8510, coupled with Brocade's immense open systems SAN experience, make it the ideal director platform to pair with VSP G1000 in a PIM architecture.
Private Cloud
The ideas behind cloud computing are well known to experienced mainframers, who remember "service bureau computing." Private cloud computing is a hot topic: it is seeing broad adoption, and the concept of IBM zEnterprise Systems at the center of a private cloud is gaining considerable traction. Private cloud computing relies on extensive virtualization, not just of servers and applications but of everything in the data center, most notably the storage devices and the network. Hitachi Virtual Storage Platform G1000 paired with Brocade Gen5 DCX 8510 creates the ideal architecture for a mainframe-centric private cloud.
Conclusion
A networked FICON storage architecture for your mainframe is a well-documented industry best practice for a wide
variety of reasons, both technical and financial. Networked storage architectures beat direct-attached architectures in
terms of RAS, performance, scalability and long-run costs. The latest I/O enhancements to IBM mainframes, such as
Dynamic Channel-path Management (DCM) and System z Discovery and Auto-Configuration (zDAC), require a networked
storage architecture (with FICON directors) if the end user wishes to take advantage of them.
The IBM zEnterprise offers unprecedented performance, scalability and innovative new features, such as the zBX,
as well as support for Windows. To take full advantage of a zEnterprise requires the end user to have an equally
capable storage system and FICON director platform for connectivity. Hitachi Virtual Storage Platform G1000 paired
with Brocade Gen5 DCX 8510 is the ideal combination with zEnterprise mainframes. It is ideal whether intended for a
traditional z/OS, Linux, PIM or private cloud environment. Hitachi Data Systems and Brocade have the experience, and VSP G1000 and DCX 8510 are the best platforms in the industry for mainframe data centers.
© Hitachi Data Systems Corporation 2014. All rights reserved. HITACHI is a trademark or registered trademark of Hitachi, Ltd. Universal Storage Platform, ShadowImage and TrueCopy
are trademarks or registered trademarks of Hitachi Data Systems Corporation. IBM, FICON, ESCON, System z, z/OS, zEnterprise, z/VM, z9, z10, s/390, z/VSE, FlashCopy, XRC, GDPS,
HyperSwap and DS8000 are trademarks or registered trademarks of International Business Machines. Microsoft and Windows are trademarks or registered trademarks of Microsoft
Corporation. All other trademarks, service marks, and company names are properties of their respective owners.
Notice: This document is for informational purposes only, and does not set forth any warranty, expressed or implied, concerning any equipment or service offered or to be offered by
Hitachi Data Systems Corporation.
WP-432-D DG April 2014
Corporate Headquarters
2845 Lafayette Street
Santa Clara, CA 95050-2639 USA
www.HDS.com community.HDS.com
Regional Contact Information
Americas: +1 408 970 1000 or info@hds.com
Europe, Middle East and Africa: +44 (0) 1753 618000 or info.emea@hds.com
Asia Pacific: +852 3189 7900 or hds.marketing.apac@hds.com
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data DiscoveryTrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
 
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf
 
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, AdobeApidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a Fresher
 
Tech Trends Report 2024 Future Today Institute.pdf
Tech Trends Report 2024 Future Today Institute.pdfTech Trends Report 2024 Future Today Institute.pdf
Tech Trends Report 2024 Future Today Institute.pdf
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
 

performance of mainframe-based applications are dependent on this storage system. The performance demands, capacity, reliability, flexibility, efficiency and cost-effectiveness of the storage system are important aspects of any storage acquisition and configuration decision. The increasing demands for improved performance, that is, throughput (IOPS) and response time, make the storage system a critical element of the IT infrastructure. Another key factor in configuring the storage system is the decision of how it should be connected to the mainframe channels: direct-attached to an IBM FICON channel or through a switched FICON network. This decision impacts the flexibility, reliability and availability of the storage infrastructure and the efficiency of the storage administrators.

Hitachi Virtual Storage Platform G1000 (VSP G1000) is a high-performance, enterprise-class storage system that provides a comprehensive set of storage and data services. These provide mainframe users with a cost-effective, highly reliable and available storage platform that delivers outstanding performance, capacity and scalability. VSP G1000 supports the operating systems used with IBM zEnterprise processors, including z/OS, z/VSE, z/VM and Linux on System z. This industry-leading storage series provides IBM 3390 disk drive support across a variety of disk drive types to meet the varied performance and capacity needs of mainframe environments.
The platform provides an internal physical disk capacity of approximately 5PB per storage system. With externally attached storage, VSP G1000 can support up to 255PB of storage capacity. It supports 8Gb/sec FICON across all front-end ports for connectivity to the mainframe and 8Gb/sec Fibre Channel for connecting external storage.

Using a FICON network configured with a switch or director to connect a storage system to the mainframe channels can significantly enhance the reliability, flexibility and availability of storage systems. At the same time, it can maximize storage performance and throughput. A switched FICON network allows the implementation of a fan-in, fan-out configuration, which maximizes resource utilization and simultaneously helps localize failures, improving availability.

The Brocade Gen5 DCX 8510 is a backbone-class FICON and Fibre Channel director. The Brocade Gen5 DCX 8510 family of FICON directors provides the industry's most powerful switching infrastructure for modern mainframe environments. It provides the most reliable, scalable, efficient, cost-effective, high-performance foundation for today's highly virtualized mainframe environments. The Brocade Gen5 DCX 8510 builds upon years of innovation and experience and leverages the core technology of Brocade systems, providing over 99.999% uptime in the world's most demanding data centers. The Gen5 DCX 8510 supports the operating systems used with zEnterprise processors: z/OS, z/VSE, z/VM, Linux on System z, and zTPF for System z. This industry-leading FICON director supports 2, 4, 8, 10 and 16Gb/sec Fibre Channel links, FICON I/O traffic, and 1 gigabit Ethernet (GbE) or 10GbE links for Fibre Channel over IP (FCIP), while providing 8.2Tb/sec of chassis bandwidth.

The combination of switched FICON connectivity, with Hitachi VSP G1000 connected to mainframe channels through a Brocade Gen5 DCX 8510 director, provides a powerful, flexible and highly available solution.
Together, they support the storage features, performance and capacity needed for today's mainframe environments.
Introduction

This paper explores both technical and business reasons for implementing a switched FICON architecture instead of a direct-attached FICON storage architecture. It also explains why Hitachi Virtual Storage Platform G1000 and the Brocade FICON Director together provide an outstanding, industry-leading solution for FICON environments.

With the many enhancements and improvements in mainframe I/O technology in the past 5 years, the question "Do I need FICON switching technology, or should I go with direct-attached storage?" is frequently asked. With up to 320 FICON Express8S channels supported on the IBM zEnterprise z114, z196, zEC12 and zBC12, why not just direct-attach the control units? The short answer is that with all of the I/O improvements, switching technology is needed now more than ever. In fact, there are more reasons to use switched FICON than there were to use switched IBM ESCON®. Some of these reasons are purely technical; others are more business-related.

Why Networked FICON Storage Is Better Than Direct-Attached Storage

The raw bandwidth of FICON Express8S running on IBM zEnterprise systems is 40 times greater than the capabilities of ESCON. The raw I/Os per second (IOPS) capacity of FICON Express8S channels is even more impressive, particularly when a channel program uses the z High Performance FICON (zHPF) protocol. To utilize these tremendous improvements, the FICON protocol is packet-switched and, unlike ESCON, capable of having multiple I/Os occupy the same channel simultaneously.

FICON Express8S channels on zEnterprise processors can have up to 64 concurrent I/Os (open exchanges) to different devices. FICON Express8S channels running zHPF can have up to 750 concurrent I/Os on the zEnterprise processor family. Only when a director or switch is used between the host and storage device can the true performance potential of channel bandwidth and I/O processing gains be fully exploited.
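A rough way to see why these open-exchange limits matter: by Little's law, the IOPS a channel can sustain is bounded by its concurrency divided by device response time. The sketch below is illustrative only; the 0.5 ms response time is an assumption, not a measured figure for any device.

```python
def max_iops(open_exchanges, response_time_ms):
    """Little's law upper bound: concurrency = IOPS x response time,
    so the open-exchange ceiling caps achievable IOPS per channel."""
    return open_exchanges / (response_time_ms / 1000.0)

# Assumed 0.5 ms device response time (illustrative only).
print(max_iops(64, 0.5))   # command-mode FICON: 64 open exchanges -> 128000.0
print(max_iops(750, 0.5))  # zHPF: 750 open exchanges -> 1500000.0
```

No single storage port absorbs I/O at that bound, which is exactly why a switch is needed to fan one channel out across many devices before the concurrency ceiling, rather than a single device, becomes the limit.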
Hitachi Virtual Storage Platform G1000

Hitachi Virtual Storage Platform G1000, with its vast functionality and throughput capability, is ideal for IBM mainframe environments and provides a comprehensive set of storage and data services. With its flexibility in configuring, partitioning and tiering storage, VSP G1000 easily supports mainframe environments with multiple LPARs running multiple operating system images in the same sysplex.

The packaging, enhanced features and improved manageability of VSP G1000 provide mainframe users with a cost-effective, highly reliable and available storage platform. It delivers outstanding performance, capacity and scalability. The storage platform easily supports both mainframe and open systems environments. For mainframe environments, it supports z/OS, z/VSE, z/VM and zTPF. Many organizations are considering the benefits of running Linux on IBM zEnterprise processors. VSP G1000 supports this capability for both count key device (CKD) and fixed-block architecture (FBA) disk formats and provides a solid foundation for implementing private clouds.

With support for FICON Express8S, and with 2Gb, 4Gb and 8Gb FICON and 2Gb, 4Gb and 8Gb Fibre Channel connectivity, this platform delivers industry-leading I/O performance. VSP G1000 can have up to 24 front-end directors with a total of 176 FICON ports. Each port can support more IOPS than a single zEnterprise FICON Express8 channel can deliver. As a result, it is ideally suited for connectivity to the mainframe through a switched FICON network.

Why Brocade Gen5 DCX 8510 Is the Best FICON Director

Emerging and evolving enterprise-critical workloads and higher-density virtualization are continuing to push the limits of SAN infrastructures. This is even more true in a data center with IBM zEnterprise and its support for Microsoft® Windows® in the zEnterprise Blade Center Extension (zBX). The Brocade Gen5 DCX 8510 family features
industry-leading 16Gb/sec performance and 8.2Tb/sec chassis bandwidth to address these next-generation I/O and bandwidth-intensive application requirements. In addition, the Brocade Gen5 DCX 8510 provides unmatched slot-to-slot and port performance, with 512Gb/sec bandwidth per slot (port card or blade). This performance comes in the most energy-efficient FICON director in the industry, which uses an average of less than 1 watt per Gb/sec, 15 times more efficient than competitive offerings.

The Brocade Gen5 DCX 8510 family enables high-speed replication and backup solutions over metro or WAN links with native Fibre Channel (10Gb/sec or 16Gb/sec). FCIP 1GbE or 10GbE extension support is optional. These solutions are accomplished by integrating this technology via a blade (FX24-8) or a standalone switch (Brocade 7800).

Brocade Fabric Vision technology, an extension of Generation 5 Fibre Channel, has been introduced and qualified on IBM System z with Brocade Fabric Operating System (FOS) 7.2. Fabric Vision provides a breakthrough hardware and software diagnostic, monitoring and management solution that unleashes the full potential of high-density server and storage virtualization, cloud architectures and next-generation storage.

Finally, this solution is delivered with unsurpassed levels of reliability, availability and serviceability (RAS), based upon more than 25 years of Brocade experience in the mainframe space. This experience includes defining FICON standards and authoring or co-authoring many FICON patents.

An Ideal Pairing: Hitachi Virtual Storage Platform G1000 and Brocade Gen5 DCX 8510

The IBM zEnterprise architecture is the highest-performing, most scalable, cost-effective, energy-efficient platform in mainframe computing history. To get the most out of your investment in IBM zEnterprise, you need a storage infrastructure, that is, a DASD platform and a FICON director, that can match the impressive capabilities of zEnterprise.
Hitachi Data Systems and Brocade, via VSP G1000 and Gen5 DCX 8510, offer the highest-performing and most reliable, scalable, cost-effective and energy-efficient products in the storage and networking industry. The experience of these 2 companies in the mainframe market, coupled with the capabilities of VSP G1000 and Gen5 DCX 8510, makes pairing them with IBM zEnterprise the ideal "best in industry" storage architecture for mainframe data centers.

Why IT Should Choose Networked Storage for FICON Over Direct-Attached Storage

Direct-attached FICON storage might appear to be a great way to take advantage of FICON technology. However, a closer examination will show why a switched FICON architecture is a better, more robust design for enterprise data centers than direct-attached FICON.

Technical Reasons for a Switched FICON Architecture

There are 5 key technical reasons for connecting storage control units using switched FICON:

■■ Overcome buffer credit limitations on FICON Express8 channels.
■■ Build fan-in, fan-out architecture designs for maximizing resource utilization.
■■ Localize failures for improved availability.
■■ Increase scalability and enable flexible connectivity for continued growth.
■■ Leverage new FICON technologies.

FICON Channel Buffer Credits

When IBM introduced the availability of FICON Express8 channels, one very important change was the number of buffer credits available on each port of a 4-port FICON Express8 channel card. While FICON Express4 channels had 200 buffer credits per port on a 4-port FICON Express4 channel card, this changed to 40 buffer credits per port on a FICON Express8 channel card. Organizations familiar with buffer credits will recall that the number of buffer credits
required for a given distance varies directly, in a linear relationship, with link speed. In other words, doubling the link speed doubles the number of buffer credits required to achieve the same performance at the same distance. Organizations might also recall the IBM System z10® Statement of Direction concerning buffer credits:

"The FICON Express4 features are intended to be the last features to support extended distance without performance degradation. IBM intends to not offer FICON features with buffer credits for performance at extended distances. Future FICON features are intended to support up to 10km without performance degradation. Extended distance solutions may include FICON directors or switches (for buffer credit provision) or dense wavelength division multiplexers [DWDM] (for buffer credit simulation)."

IBM held true to its statement, and the 40 buffer credits per port on a FICON Express8/FICON Express8S channel card can support up to 10km of distance for full-frame-size I/Os (2KB frames). What happens if an organization has I/Os with smaller than full-size frames? The distance supported by the 40 buffer credits would decrease, since smaller frames occupy less of the link and more of them must be in flight to keep the link full. It is likely that at faster future link speeds, the distance supported will decrease to 5km or less.

A switched architecture allows organizations to overcome the buffer credit limitations of the FICON Express8/FICON Express8S channel card. Depending upon the specific model, FICON directors and switches can have more than 1300 buffer credits available per port for long-distance connectivity.

Fan-In, Fan-Out Architecture Designs

In the late 1990s, the open systems world started to implement Fibre Channel storage area networks (SANs) to overcome the low utilization of resources inherent in a direct-attached storage architecture. SANs addressed this issue through the use of fan-in and fan-out storage network designs.
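Returning to the buffer-credit figures above: they follow a common rule of thumb, credits needed ≈ distance (km) × link speed (Gb/sec) ÷ 2 for full 2KB frames, with smaller frames needing proportionally more. The sketch below is illustrative, not a sizing tool; real provisioning should follow vendor guidance.

```python
def credits_needed(distance_km, link_gbps, frame_bytes=2048):
    """Rule-of-thumb buffer credits to keep a link streaming at full
    rate: scales linearly with distance and link speed, and inversely
    with frame size (small frames need more credits per km)."""
    full_frame = 2048
    return distance_km * link_gbps / 2 * (full_frame / frame_bytes)

print(credits_needed(10, 8))        # 40.0 -- the FICON Express8 case above
print(credits_needed(100, 4))       # 200.0 -- FICON Express4's per-port count
print(credits_needed(10, 8, 1024))  # 80.0 -- half-size frames double the need
```

Note how the model reproduces both generations cited above: 40 credits at 8Gb/sec cover 10km, and 200 credits at 4Gb/sec cover 100km.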
That is, multiple server host bus adapters (HBAs) could be connected through a Fibre Channel switch to a single storage port: in other words, fan-in. Or, a single server HBA could be connected through a Fibre Channel switch to multiple storage ports: that is, fan-out. These same principles apply to a FICON storage network.

As a general rule, FICON Express8 and FICON Express8S channels offer different levels of performance, in terms of IOPS and bandwidth, than the storage host adapter ports to which they are connected. Therefore, a direct-attached FICON storage architecture may see very low channel or storage port utilization rates. To overcome this issue, fan-in and fan-out storage network designs are used.

A switched FICON architecture allows a single channel to fan out to multiple storage devices via switching, improving overall resource utilization. This capability can be especially valuable if an organization's environment has newer FICON channels, such as FICON Express8 or Express8S, but older tape drive technology. Figure 1 illustrates how a single FICON channel can concurrently keep several tape drives running at full-rated speeds. The actual fan-out ratios for connectivity to tape drives will, of course, depend on the specific tape drive and control unit; however, it is not unusual to see a FICON Express8 or Express8S channel fan out from a switch to 5 or 6 tape drives (a 1:5 or 1:6 fan-out ratio). The same principles apply for fan-out to storage systems: The exact fan-out ratio is dependent on the storage system model and host adapter capabilities for IOPS and/or bandwidth. Conversely, several FICON channels can be connected through a director or switch to a single storage port to maximize port utilization and increase overall I/O efficiency and throughput.
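The 1:5 tape fan-out cited above falls out of simple throughput division. The sustained rates below are assumptions chosen for illustration, not vendor specifications.

```python
def fan_out_ratio(channel_mbps, device_mbps):
    """Number of devices one channel can keep fully busy,
    judged purely by sustained throughput."""
    return channel_mbps // device_mbps

# Assumed sustained rates: FICON Express8S channel ~800 MB/sec,
# one tape drive ~160 MB/sec (illustrative figures).
print(fan_out_ratio(800, 160))  # 5 -> a 1:5 fan-out ratio
```

The same division, run with a storage host adapter's rated throughput instead of a tape drive's, gives a first estimate of disk fan-out; IOPS limits should be checked the same way and the smaller of the two ratios used.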
Figure 1. Switched FICON allows 1 channel to keep multiple tape drives fully utilized.

Keep Failures Localized

In a direct-attached architecture, a failure anywhere in the path renders both the channel interface and the control unit port unusable. The failure could be of: an entire FICON channel card, a port on the channel card, the cable, the entire storage host adapter card, or an individual port on the storage host adapter card. In other words, a failure in any of these components will affect both the mainframe connection and the storage connection. A direct-attached architecture thus provides the worst possible reliability, availability and serviceability for FICON-attached storage.

With a switched architecture, failures are localized to only the affected FICON channel interface or control unit interface, not both. The nonfailing side remains available, and if the storage side has not failed, other FICON channels can still access that host adapter port via the switch or director (see Figure 2). This failure isolation, combined with fan-in and fan-out architectures, allows for the most robust storage architectures, minimizing downtime and maximizing availability.
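The availability argument can be made concrete with a simple series/parallel model. The per-component availabilities below are assumed values chosen only to show the shape of the result, not measured figures for any product.

```python
def series(*avail):
    """Availability of components in series: every one must be up."""
    total = 1.0
    for a in avail:
        total *= a
    return total

def parallel(path_avail, n_paths):
    """Availability of n independent redundant paths."""
    return 1 - (1 - path_avail) ** n_paths

# Assumed per-component availabilities (illustrative only).
port, cable, switch = 0.999, 0.9995, 0.99999

direct = series(port, cable, port)                   # channel -> CU port
one_path = series(port, cable, switch, cable, port)  # one path via a director
switched = parallel(one_path, 2)                     # two channels, fanned in

print(direct < switched)  # True: even with extra hops, redundancy wins
```

Each switched path is individually slightly less available than the direct link (it has more components in series), yet two such paths in parallel beat the single direct link by orders of magnitude, which is the failure-localization point made above.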
Figure 2. A FICON director isolates faults and improves availability.

Scalable and Flexible Connectivity

Direct-attached FICON does not easily allow for dynamic growth and scalability, since a single FICON channel card port is tied to a single dedicated storage host adapter port. In such an architecture, there is a 1:1 relationship (no fan-in or fan-out). Since there is a finite number of FICON channels available (dependent on the mainframe model or machine type), growth in a mainframe storage environment with such an architecture can pose a problem. What happens if an organization needs more FICON connectivity but has run out of FICON channels?

FICON switching and proper use of fan-in and fan-out in the storage architecture design go a long way toward improving scalability. In addition, best-practice storage architecture designs include room for growth. With a switched FICON architecture, adding a new storage system, or a port in a storage system, is much easier: Simply connect the new storage system or port to the switch. This eliminates the need to open the channel cage in the mainframe to add new channel interfaces, reducing both capital and operational expenditures (capex and opex). It also gives managers more flexible planning options when upgrades are necessary, since the urgency of upgrades is lessened.

What about the next generation of channels? The bandwidth capabilities of channels are growing at a much faster rate than those of storage devices. As channel speeds increase, switches will allow data center managers to take advantage of new technology as it becomes available, while protecting investments and minimizing costs.

Also, it is an IBM best-practice recommendation to use single-mode long-wave connections for FICON channels. Storage vendors, however, often offer both single-mode long-wave connections and multimode short-wave connections on their storage systems, allowing organizations to decide which to use.
The organization makes the decision based on the trade-off between cost and reliability. Some organizations' existing storage devices have a mix of single-mode and multimode connections. Since they cannot directly connect a single-mode FICON channel to a
multimode storage host adapter, this could pose a problem. With a FICON director or switch in the path, however, organizations do not need to change the storage host adapter ports to comply with the single-mode best-practice recommendation for the FICON channels. The FICON switching device can have both types of connectivity: single-mode long-wave ports for attaching the FICON channels, and multimode short-wave ports for attaching the storage.

Furthermore, FICON switching elements at 2 different locations can be interconnected by fiber at distances up to 100km or more, creating a cascaded FICON switched architecture. This setup is typically used in disaster recovery and business continuance architectures. As previously discussed, FICON switching allows resources to be shared. With cascaded FICON switching, those resources can be shared between geographically separated locations, allowing data to be replicated or tape backups to be made at the alternate site, away from the primary site, with no performance loss. Often, workloads will be distributed such that both the local and remote sites are primary production sites, and each site uses the other as its backup.

While the fiber itself is relatively inexpensive, laying new fiber may require an expensive construction project. DWDM can help get more out of fiber connections, but switch vendors offer inter-switch links (ISLs) with up to 16Gb/sec of bandwidth, which can reduce the cost of, or even eliminate the need for, DWDM. FICON switches maximize utilization of this valuable intersite fiber by allowing multiple environments to share the same fiber link. In addition, FICON switching devices offer unique storage network management features, such as ISL trunking and preferred pathing, which are not available with DWDM equipment.
FICON switches allow data center managers to further exploit intersite fiber sharing by enabling them to intermix FICON and native Fibre Channel Protocol (FCP) traffic, which is known as Protocol Intermix Mode, or PIM. Even in data centers where there is enough fiber to separate FICON and open systems traffic, the preferred pathing features of a FICON switch can be great cost savers. With preferred paths established, certain cross-site fiber can be allocated for the mainframe environment, while other fiber can be allocated for open systems. The ISLs can be configured such that only in the event of an ISL failure would the links be shared by both open systems and mainframe traffic.

Leverage New Technologies

Over the past 5 years, IBM has announced a series of technology enhancements that require the use of switched FICON. These include:

■■ N_Port ID virtualization (NPIV) support for z Linux.
■■ FICON Dynamic Channel-Path Management (DCM).
■■ z/OS FICON Discovery and Auto-Configuration (zDAC).

IBM announced support for NPIV on z Linux in 2005. Today, NPIV is supported on the System z9®, z10, z196, z114, zEC12 and zBC12. Until NPIV was supported on System z, adoption of Linux on System z had been relatively slow. NPIV allows for full support of LUN masking and zoning by virtualizing the Fibre Channel identifiers. This, in turn, allows each Linux on System z image to appear as if it has its own individual HBA when those images are, in fact, sharing FCP channels. Since IBM began supporting NPIV on System z, adoption of Linux on System z has grown significantly. IBM believes approximately 24% of MIPS shipping on new zEnterprise processors are for Linux on System z implementations. Implementation of NPIV on System z requires a switched architecture.

FICON DCM is another feature that requires a switched FICON architecture.
FICON DCM provides the ability to have System z automatically manage FICON I/O paths connected to storage systems in response to changing workload demands. Use of FICON DCM helps simplify I/O configuration planning and definition, reduces the complexity of
managing I/O, dynamically balances I/O channel resources, and enhances availability. FICON DCM can best be summarized as a feature that allows for more flexible channel configurations, by designating channels as "managed," and for proactive performance management. FICON DCM requires a switched FICON architecture because topology information is communicated via the switch or director. The FICON switch must have a control unit port (CUP) license and be configured or defined as a control unit in the hardware configuration definition (HCD).

z/OS FICON Discovery and Auto-Configuration (zDAC) is the latest technology enhancement for FICON. IBM introduced zDAC as a follow-on to an earlier enhancement in which the FICON channels log into the Fibre Channel name server on a FICON director. zDAC enables the automatic discovery and configuration of FICON-attached DASD and tape devices. Essentially, zDAC automates a portion of the HCD Sysgen process. zDAC uses intelligent analysis to help validate the compatibility of the System z and storage definitions, and uses built-in best practices to help configure for high availability and avoid single points of failure. zDAC is transparent to existing configurations and settings. It is invoked by and integrated with the z/OS HCD and z/OS Hardware Configuration Manager (HCM). zDAC also requires a switched FICON architecture.

IBM also introduced support for transport-mode FICON (known as z High Performance FICON, or zHPF) in October 2008 and announced enhancements in July 2011. While a switched architecture is not required for zHPF, it is recommended.

Business Reasons for a Switched FICON Architecture

In addition to the technical reasons described earlier, the following business reasons support implementing a switched FICON architecture:

■■ Enable massive consolidation in order to reduce capex and opex.
■■ Improve application performance at long distances.
■■ Support growth and enable effective resource sharing.
Massive Consolidation

With NPIV support on System z, server and I/O consolidation is very compelling (see Figure 3). IBM undertook a well-publicized project at its internal data centers (Project Big Green) and consolidated 3900 open systems servers onto 30 System z mainframes running Linux. IBM's total cost of ownership (TCO) savings were calculated taking into account footprint reductions, power and cooling, and management simplification costs. The result was nearly 80% TCO savings over a 5-year period. This scale of TCO savings is why 24% of new IBM mainframe processor shipments are now being used for Linux.

Implementation of NPIV requires connectivity from the FICON (FCP) channel to a switching device (director or smaller port-count switch) that supports NPIV. A special microcode load is installed on the FICON channel to enable it to function as an FCP channel. NPIV allows the consolidation of up to 255 z Linux images ("servers") behind each FCP channel, using 1 port on a channel card and 1 port on the attached switching device for connecting these virtual servers. This enables massive consolidation of many HBAs, each formerly attached to its own switch port in the SAN. As a best practice, IBM currently recommends configuring no more than 32 Linux images per FCP channel.

This level of I/O consolidation was possible prior to NPIV support on System z. However, implementing LUN masking and zoning in the same manner as with open systems servers, SAN and storage was not possible prior to the support for NPIV with Linux on System z.

NPIV implementation on System z has also resulted in consolidation and adoption of a common SAN for distributed or open systems (FCP) and mainframe (FICON) traffic, commonly known as protocol intermix mode (PIM). While IBM has
supported PIM in System z environments since 2003, adoption rates were low until NPIV implementations for Linux on System z picked up with the introduction of System z10 in 2008. With z10, enhanced segregation and security beyond simple zoning was possible through switch partitioning or virtual fabrics and logical switches. With 24% of new mainframes being shipped for use with Linux on System z, it is safe to say that at least 19% of mainframe environments are now running a shared PIM environment.

Leveraging enhancements in switching technology, performance and management, PIM users can now fully populate the latest high-density directors with minimal or no oversubscription. They can use management capabilities, such as virtual fabrics or logical switches, to fully isolate open systems ports and FICON ports in the same physical director chassis. Rather than having more partially populated switching platforms that are dedicated to either mainframe (FICON) or open systems (FCP), PIM allows for consolidation onto fewer physical switching devices. It reduces management complexity and improves resource utilization. This, in turn, leads to lower operating costs and a lower TCO for the storage network. It also allows for a consolidated, simplified cabling infrastructure.

Figure 3. Organizations implement NPIV to consolidate I/O in z Linux environments.

Application Performance Over Distance

As previously discussed, the number of buffer credits per port on a 4-port FICON Express8 channel has been reduced to 40, supporting up to 10km without performance degradation. What happens if an organization needs to go beyond 10km with a direct-attached storage configuration? It will likely see performance degradation due to insufficient buffer credits. Without a sufficient quantity of buffer credits, the "pipe" cannot be kept full with streaming frames of data.
Switched FICON avoids this problem (see Figure 4). FICON directors and switches have a sufficient quantity of buffer credits available on ports to allow them to stream frames at full-line performance rates with no bandwidth degradation. IT organizations that implement a cascaded FICON configuration between sites can, with the latest FICON director platforms, stream frames at 16Gb/sec rates. And they experience no performance degradation for sites that are 100km apart.

Switched FICON technology also allows organizations to take advantage of hardware-based FICON protocol acceleration or emulation techniques for tape (reads and writes). This emulation technology is available on standalone extension switches or on a blade in FICON directors. It allows the z/OS-initiated channel programs to be acknowledged locally at each site and avoids the back-and-forth protocol handshakes that normally travel between remote sites. It also reduces the impact of latency on application performance and delivers local-like performance over unlimited distances. In addition, this acceleration or emulation technology optimizes bandwidth utilization.

Why is bandwidth efficiency so important? It is typically the most expensive budget component in an organization's multisite disaster recovery or business continuity architecture. Anything that can be done to improve the utilization and/or reduce the bandwidth requirements between sites would likely lead to significant TCO savings.

Figure 4. Switched FICON with emulation allows optimized performance and bandwidth utilization over extended distance.
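The relationship between link speed, distance and buffer credits described above can be approximated with simple arithmetic: a port needs roughly one buffer credit for every full-size frame in flight over the round trip. The sketch below is illustrative only; the constants (propagation delay, payload rate, frame size) are common rule-of-thumb assumptions, not FICON planning guidance.

```python
import math

# Rough estimate of buffer credits needed to keep a link streaming at full
# rate. Assumed constants: ~5 microseconds of propagation per km of fiber,
# ~800 MB/s of payload at 8GFC, 2148-byte full-size Fibre Channel frames.
def credits_needed(distance_km, data_rate_mb_per_s=800, frame_bytes=2148):
    frame_time_us = frame_bytes / data_rate_mb_per_s  # serialization time
    round_trip_us = 2 * distance_km * 5.0             # propagation delay
    return math.ceil(round_trip_us / frame_time_us)

print(credits_needed(10))   # 38: within the 40 credits of a FICON Express8 port
print(credits_needed(100))  # 373: far beyond a channel port's credit supply
```

At 10km the estimate stays within the 40 credits a FICON Express8 port provides, consistent with the distance limit described above; at 100km only a director port with a deep buffer credit pool can keep the pipe full.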
Enable Growth and Resource Sharing

Direct-attached storage forces a 1:1 relationship between host connectivity and storage connectivity. In other words, each storage port on a storage system host adapter requires its own physical port connection on a FICON Express8 channel card. These channel cards are very expensive on a per-port basis, typically 4 to 6 times the cost of a FICON director port. Also, there is a finite number of FICON Express8S channels available on a zEnterprise (a maximum of 320), as well as a finite number of host adapter ports in the storage system. If an organization has a large configuration and a direct-attached FICON storage architecture, how does it plan to scale its environment? What happens if an organization acquires a company and needs additional channel ports? A switched FICON infrastructure allows cost-effective, seamless expansion to meet growth requirements.

Direct-attached FICON storage also typically results in underutilized host channel card ports and host adapter ports in storage systems. FICON Express8 and FICON Express8S channels can comfortably perform at high channel utilization rates, yet a direct-attached storage architecture typically sees channel utilization rates of 10% or less. As illustrated in Figure 5, leveraging FICON directors or switches allows organizations to maximize channel utilization.

Figure 5. Switched FICON drives improved channel utilization, while preserving CHPIDs for growth.
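The fan-in arithmetic behind Figure 5 can be sketched as follows. The utilization figures are hypothetical, chosen only to mirror the "10% or less" observation above; real CHPID planning depends on workload profiles.

```python
import math

# If each direct-attached storage port keeps its dedicated CHPID only ~10%
# busy, a switch can fan several storage ports into one channel driven at a
# higher, but still comfortable, target utilization.
def chpids_required(storage_ports, port_util=0.10, target_util=0.70):
    fan_in = round(target_util / port_util)   # storage ports per CHPID
    return math.ceil(storage_ports / fan_in)

# Direct-attached: 64 storage ports consume 64 CHPIDs at ~10% utilization.
# Switched with 7:1 fan-in: the same 64 ports need only 10 CHPIDs.
print(chpids_required(64))  # 10
```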
It also is very important to keep traffic for tape drives streaming, and to avoid stopping and starting the tape drives. These actions lead to unwanted wear and tear on tape heads, cartridges and the tape media itself. Using FICON acceleration or emulation techniques, as described earlier, streaming can be accomplished with a configuration similar to the one shown in Figure 6. Such a configuration requires solid analysis and planning, but it will pay dividends for an organization's FICON tape environment.

Figure 6. A well-planned configuration can maximize CHPID capacity utilization for FICON tape efficiency.

Finally, switches facilitate fan-in, which allows different hosts and LPARs whose I/O subsystems are not shared to share the same assets. While some benefits may be realized immediately, the potential for value in future equipment planning can be even greater. With the ability to share assets, equipment that would be too expensive for a single environment can be deployed in a cost-saving manner. The most common example is to replace tape farms with virtual tape systems. By reducing the number of individual tape drives, maintenance (service contracts), floor space, power, tape handling and cooling costs are reduced. Virtual tape also improves reliable data recovery, allows for significantly shorter recovery time objectives (RTO) and nearer recovery point objectives (RPO), and offers features such as peer-to-peer copies. However, without the ability to share these systems, it may be difficult to amass sufficient cost savings to justify the initial cost of virtual tape. And the only practical way to share these standalone tape systems or tape libraries is through a switch.

With disk storage systems, in addition to sharing the asset, it is sometimes desirable to share the data across multiple systems. The port limitations on a storage system may prohibit or limit this capability using direct-attached (point-to-point) FICON channels.
Again, the switch can provide a solution to this issue. Even when there is no need to share devices during normal production, this capability can be very valuable in the event of a failure. Data sets stored on tape can quickly be read by CPUs that pick up the workload and are already attached to the same switch as the tape drives. Similarly, data stored on a storage system can be available as soon as a fault is determined. Switch features, such as preconfigured port prohibit or allow matrix tables, can ensure that access intended only for a disaster scenario is prohibited during normal production.
Why Switched FICON: Summary

Direct-attached FICON might appear to be a great way to take advantage of FICON technology's advances over ESCON. However, a closer examination shows that switched FICON, similar to switched ESCON, is a better, more robust architecture for enterprise data centers. Switched FICON offers:

■■ Better utilization of host channels and their performance capabilities.
■■ Scalability to meet growth requirements.
■■ Improved reliability, problem isolation and availability.
■■ Flexible connectivity to support evolving infrastructures.
■■ More robust business continuity implementations via cascaded FICON.
■■ Improved distance connectivity, with improved performance over extended distances.
■■ New mainframe I/O technology enhancements such as NPIV, FICON DCM, zDAC and zHPF.

Switched FICON also provides many business advantages and potential cost savings, including:

■■ The ability to perform massive server, I/O and SAN consolidation, dramatically reducing capex and opex.
■■ Local-like application performance over any distance, allowing host and storage resources to reside wherever business dictates.
■■ More effective resource sharing, improved utilization, reduced costs and improved recovery time.

The trend toward increased usage of Linux on System z is growing, and there are cost advantages to NPIV implementations and PIM SAN architectures. As a result, direct-attached storage in a mainframe environment is becoming a thing of the past. Investments made in switches for disaster recovery and business continuance are likely to pay the largest dividends. Having access to alternative resources and multiple paths to those resources can result in significant savings in the event of a failure. The advantages of a switched FICON infrastructure are simply too great to ignore.

Hitachi Virtual Storage Platform G1000

Hitachi Data Systems has over 25 years of experience supporting IBM mainframe environments.
A large portion of the installed base of Hitachi storage systems connects to IBM z/OS and S/390® mainframes via ESCON and FICON networks. Hitachi Virtual Storage Platform G1000 builds on this experience and introduces new features and packaging to improve performance while lowering TCO. It features lower power and cooling requirements, high-density packaging based on industry-standard 19-inch racks, and faster microprocessors. It also offers a choice of disk drive types, including solid-state disk (SSD), Hitachi Accelerated Flash storage, serial attached SCSI (SAS) and nearline SAS.

This storage platform provides an industry-leading, reliable and highly available storage system for mainframes in IBM z/OS environments: It supports z/OS, z/VSE, z/VM and z/TPF for zEnterprise. Many organizations are considering the benefits of or running Linux on IBM zEnterprise processors. VSP G1000 supports this capability for both CKD and FBA disk formats and provides a solid foundation for implementing private clouds. Hitachi has implemented key performance features in support of these operating systems running on zEnterprise, including PAV, HyperPAV, z/HPF, Multiple Allegiance, MIDAW and priority I/O queuing. It also provides a unique mainframe storage management solution to deliver functionally compatible extended address volumes (EAV) for z/OS, data volume expansion (DVE), and IBM FlashCopy® SE (with space efficiency capability).
VSP G1000 is designed to be highly available and resilient. All critical components are implemented in pairs. If a component fails, the paired component can take over the workload without an outage. With its support of multiple RAID configurations, an organization's data is protected in the event of a disk drive problem.

Additionally, VSP G1000 offers industry-leading replication software. It supports FlashCopy and FlashCopy SE, and provides the functionality of IBM z/OS Metro Mirror. And it replicates to remote locations using Hitachi Universal Replicator (HUR), allowing copies of data to be maintained locally and at remote locations. This practice ensures data availability in case the primary copy becomes unusable or is not accessible.

Scalability

Hitachi Virtual Storage Platform G1000 can scale to provide increased performance, capacity, throughput and connectivity while dynamically combining multiple units into a single logical system with shared resources. It can also virtualize new and existing external storage systems. This scaling means that VSP G1000 can grow nondisruptively to meet changing needs within the data center. It minimizes outages to extend the platform and enhance functionality while providing flexibility in the configuration and choice of disk technology to meet the specific needs of each environment.

VSP G1000 controller-based storage virtualization enables connectivity to and virtualization of external storage, which can potentially extend the life of existing storage assets and reduce costs. Virtualizing external storage:

■■ Enables the reuse of existing or legacy assets for less critical or less frequently accessed data.
■■ Simplifies management of external storage with common management and data protection for internal and external storage.
■■ Supports the reuse of existing or legacy assets across data centers within a metro area network distance, and across global distances with the replication capabilities of the scale-up storage system.

Performance

Hitachi Virtual Storage Platform G1000 ushers in a new level of I/O throughput, response and scalability. It supports 8Gb FICON (FICON Express8 and FICON Express8S), enabling a single VSP G1000 FICON 8Gb port to handle the higher traffic rates that can be delivered by a single zEnterprise FICON Express8 or FICON Express8S channel. This storage networking is critical to optimizing performance and maximizing throughput in mainframe environments.

IBM 3390 and FICON Support

This industry-leading storage system provides 3390 disk drive support through emulation across a variety of disk drive types to meet the varied performance and capacity needs of mainframe environments. The platform supports SSD flash drives with capacities of 200GB and 400GB for ultra-high-speed response; Hitachi Accelerated Flash (HAF) drives, providing 1.6TB and 3.2TB FMDs; 2.5-inch SAS drives; and nearline SAS drives. It can control up to 65,280 logical volumes and provides an internal physical disk capacity of approximately 2.5PB per storage system. With externally attached storage, Hitachi Virtual Storage Platform G1000 can support up to 255PB of storage capacity.

VSP G1000 supports 8Gb/sec FICON (FICON Express8 and FICON Express8S) across all front-end ports for connectivity to the mainframe, and 8Gb/sec Fibre Channel for connecting external storage. VSP G1000 supports high-performance FICON (z/HPF) for z/OS. On the back end, it supports SAS, SATA, nearline SAS, and SSD and HAF drives, which are connected using the SAS 2 protocol with 6Gb/sec connectivity per back-end port.
Hitachi Dynamic Provisioning

Hitachi Dynamic Provisioning (HDP) for Mainframe optimizes performance through extremely wide striping and more effective use of storage through thin provisioning (see Figure 7). In other words, it allocates storage to an application without actually mapping the corresponding physical storage until it is used. This separation of allocation from physical mapping results in more effective use of physical storage, with higher overall performance and rates of storage utilization. Dynamic Provisioning also enables Dynamic Volume Expansion (DVE) of 3390 volumes and FlashCopy SE for more efficient use of storage when creating local copies.

Figure 7. Hitachi Dynamic Provisioning for Mainframe optimizes performance.

Hitachi Dynamic Tiering

Hitachi Dynamic Tiering (HDT) for Mainframe, an extension of HDP for Mainframe, enables the automatic movement of data between tiers. HDT for Mainframe provides an additional level of automated, optimized storage management by managing data across a full range of storage tiers, from high-performance storage to low-cost storage. HDT for Mainframe moves highly accessed blocks of data to the highest storage tier and migrates less frequently accessed data to the lowest tiers according to simple policies. With HDT for Mainframe, there can be up to 3 storage tiers, ranging from high-performance flash storage to low-cost storage, such as nearline SAS or virtualized external storage, in the same storage pool. Tier creation is automatic, based on configuration policies, including media type and speed, RAID level and sustained I/O level requirements.

This solution significantly reduces the time storage administrators have to spend analyzing storage usage and managing the movement of data to optimize performance, and complements existing mainframe storage provisioning processes, such as DFSMS.
Existing SMS storage groups and ACS routines can be aligned to differently tiered storage pools, with pages of data being moved to the appropriate tier when needed rather than moving entire data sets.
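As a conceptual illustration (not Hitachi's actual placement algorithm), policy-driven tiering can be thought of as ranking pages by access frequency and filling the fastest tier first:

```python
# Toy model of tiered placement: the hottest pages land on tier 0 (flash),
# colder pages spill down to lower-cost tiers. Capacities are in pages.
def place_pages(access_counts, tier_capacities):
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    placement, start = {}, 0
    for tier, capacity in enumerate(tier_capacities):
        placement[tier] = ranked[start:start + capacity]
        start += capacity
    return placement

pages = {"p1": 120, "p2": 3, "p3": 45, "p4": 0}
print(place_pages(pages, [1, 2, 1]))
# {0: ['p1'], 1: ['p3', 'p2'], 2: ['p4']}
```

Because only page-level placement changes between policy runs, data moves between tiers without any application-visible reorganization of the data sets themselves, which is the point the paragraph above makes about SMS storage groups.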
Hitachi is the 1st storage vendor to support virtualization of external storage behind an enterprise storage platform. HDT provides the ability for the system to automatically move blocks of data within data sets to the most appropriate class of storage, based on performance and access requirements.

Hitachi Tiered Storage Manager for Mainframe

Hitachi Tiered Storage Manager for Mainframe (HTSM) is a z/OS software management product for Hitachi Dynamic Tiering for Mainframe that enables a user to control service levels based on performance and/or time to facilitate meeting mainframe SLAs. HDT policies managed through HTSM have the capability to hold selected data to a specified tier, regardless of access patterns. The policies can also be defined to migrate selected data to a higher or lower tier on a predictable, scheduled basis. HTSM enables Dynamic Tiering and Dynamic Tiering policies to be managed from the mainframe at the volume level, or through DFSMS tools and constructs with ISPF or, optionally, REXX scripts.

Hitachi Remote Replication

Business continuity is more important than ever in today's business environment, as demonstrated by the natural disasters and physical intrusion and destruction of IT resources over the last few years. A loss of business-critical data can force a company to its knees and even into bankruptcy. In addition, regulatory compliance requirements demand a business continuity and disaster recovery plan, and an infrastructure to support that plan, or companies face stiff fines and business restrictions. Hitachi remote replication offerings provide the ability to copy critical data to off-site facilities, either within a metropolitan area and/or to distant remote locations. The combination of the enterprise-level Hitachi Virtual Storage Platform G1000 with Brocade's solutions to extend and optimize fabric connectivity facilitates the movement of your business-critical data over longer distances.
Together, they enable and enhance your ability to support business continuity and disaster recovery.

Hitachi TrueCopy

Hitachi TrueCopy synchronous software provides a continuous, nondisruptive, host-independent remote data replication solution for disaster recovery or data migration over distances within the same metropolitan area. It provides a no-data-loss, rapid-restart solution (see Figure 8). For enterprise environments, TrueCopy synchronous software combined with Hitachi Universal Replicator on VSP G1000 allows for advanced 3-data-center configurations. This combination includes consistency across up to 12 storage systems in each site for optimal data protection.

Figure 8. Hitachi TrueCopy synchronous supports business continuity and disaster recovery efforts.
TrueCopy synchronous supports business continuity and disaster recovery efforts, improving business resilience. It improves service levels by reducing planned and unplanned downtime of customer-facing applications. It enables frequent, nondisruptive disaster recovery testing with an online copy of current and accurate production data. TrueCopy synchronous can be seamlessly integrated into existing z/OS environments and controlled with familiar PPRC commands or with Hitachi Business Continuity Manager software.

Hitachi Universal Replicator

Hitachi Universal Replicator provides asynchronous data replication across any distance, for both internal VSP G1000 storage and external storage managed by VSP G1000 (see Figure 9). Universal Replicator provides the enterprise-class performance associated with storage-system-based replication. At the same time, it provides resilient business continuity without the need for remote host involvement, or redundant servers or replication appliances.

Universal Replicator maintains the integrity of replicated copies without impacting processing, even when replication network outages occur or optimal bandwidth is not available. When compared to traditional methods of storage-system-based replication, Universal Replicator leverages performance-optimized disk-based journals, resulting in significantly reduced cache utilization and increased bandwidth utilization.

Universal Replicator ensures availability of up-to-date copies of data in up to 3 dispersed locations by leveraging the synchronous capabilities of Hitachi TrueCopy synchronous. In the event of a disaster at the primary data center, the delta resync feature of Universal Replicator enables fast failover and restart of the application without loss of data, whether at the local or remote data center.

Figure 9. Hitachi Universal Replicator ensures availability of current copies of data in up to 3 dispersed locations.
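The journal-based asynchronous approach can be illustrated with a minimal sketch. This is conceptual only; the class and method names are invented for illustration and do not reflect HUR's implementation. The key idea is that host writes are acknowledged once they land in a local journal, and a separate step drains the journal to the remote copy in order, so a slow or interrupted link delays the replica without stalling host I/O.

```python
from collections import deque

class JournaledReplicator:
    """Toy async replication: acknowledge locally, drain to the remote later."""
    def __init__(self):
        self.journal = deque()  # ordered journal (disk-based in a real system)
        self.remote = {}        # state of the asynchronous remote copy

    def host_write(self, volume, data):
        self.journal.append((volume, data))  # write is acknowledged here
        return "ack"

    def drain(self, batch=100):
        # Runs whenever bandwidth allows; preserves write ordering.
        for _ in range(min(batch, len(self.journal))):
            volume, data = self.journal.popleft()
            self.remote[volume] = data

r = JournaledReplicator()
r.host_write("vol1", "v1")
r.host_write("vol1", "v2")   # remote is still empty: the link may be down
r.drain()                    # journal drains in order; remote catches up to "v2"
```

Because the journal, not the cache, absorbs the backlog, cache utilization stays low even during long link outages, which is the trade-off the paragraph above describes.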
Universal Replicator can be integrated into an IBM GDPS® environment, providing a much more cost-effective and complete recovery solution than the IBM alternative of z/OS Global Mirror (XRC). With Universal Replicator and TrueCopy synchronous support of a 3-data-center replication solution, VSP G1000 supports delta resync, which is similar to, but more efficient than, z/OS Metro/Global Mirror incremental resync.

VSP G1000 also supports IBM z/OS Basic HyperSwap®, which is enabled by IBM Tivoli Productivity Center for Replication for System z Basic Edition (TPC-R). TPC-R enables the administrator to develop a z/OS Basic HyperSwap configuration using VSP G1000. Using VSP G1000, the organization can create a z/OS Basic HyperSwap plan: a 2-data-center configuration with TrueCopy synchronous, or a 3-data-center configuration with TrueCopy synchronous, Universal Replicator and Business Continuity Manager. Initially, VSP G1000 will support up to 12x12 (2DC) and 12x12x12 (3DC) systems at each site.1

VSP G1000 with Universal Replicator will support a 4-data-center configuration and allow you to have 2 long asynchronous data paths and 2 synchronous paths. This solution offers the ability to create multiple copies of data in many locations and reduce the impact of data migration.

Hitachi Business Continuity Manager

Hitachi Business Continuity Manager enables centralized, enterprise-wide replication management for IBM z/OS mainframe environments. Through a single, consistent interface based on the Time Sharing Option/Interactive System Productivity Facility (TSO/ISPF), it uses full-screen panels to automate Hitachi Universal Replicator, Hitachi TrueCopy synchronous (including multisite topologies) and in-system Hitachi ShadowImage Replication software operations. This software feature automates complex disaster recovery and planned outage functions, resulting in reduced recovery times.
It also enables advanced, 3-data-center disaster recovery configurations and extended consistency group capabilities. Business Continuity Manager provides built-in capabilities for monitoring and managing critical performance metrics and thresholds for proactive problem avoidance. It also delivers autodiscovery of enterprise-wide storage configuration and replication objects, eliminating tedious, error-prone data entry that can cause outages. Business Continuity Manager integrates with the Hitachi replication management framework, Hitachi Replication Manager software, for replication monitoring and continuous operations in mainframe (and open system) environments.

1 Check with your HDS Representative for currently supported configurations.

Multiplatform Support

Hitachi Virtual Storage Platform G1000 can support multiple operating systems at the same time. Although many mainframe organizations have been reluctant to share their storage platforms with open systems servers, the need to share storage is becoming more important: Organizations are implementing Linux on System z. In addition, the introduction of IBM zEnterprise BladeCenter Extension (zBX) for mainframe processors enables Microsoft Windows to operate as part of zEnterprise servers. VSP G1000 can be configured to facilitate the isolation of disparate types of data. VSP G1000 easily supports organizations implementing private as well as some public clouds on zEnterprise servers using either Linux on System z or Windows on zBX. Additionally, the FICON and Fibre Channel ports are completely separate and help ensure that critical mainframe data cannot be accessed directly by open systems servers or clients.

Cost-Savings Efficiencies

This storage system is designed to lower TCO wherever possible. The physical packaging has been designed to use standard-size racks and chassis. The internal layout supports front-to-back airflow, to facilitate the use of hot
and cold aisles and maximize the efficiency of data center cooling. In combination with very fast processors, denser packaging and smaller batteries, the physical floor space and the heating and cooling requirements result in very low power per square foot (kVA/sq ft). Opex is lower than on previous systems thanks to denser packaging, blade architecture, low-power memory, small-form-factor disks, SSD, and flash-protected cache with its smaller batteries. Hitachi Data Systems is committed to continuing to deliver more efficient packaging, resulting in more sustainable products.

Brocade Gen5 DCX 8510 in Mainframe Environments

Now on its 5th generation (1G, 2G, 4G, 8G and 16G) of switching technology (Gen5), Brocade has the experience to rely on. The company has been in the mainframe storage networking business for more than 20 years, as far back as the parallel channel extension technology of the late 1980s. Brocade has a history of thought leadership. It has 4 of its own FICON patents, as well as 5 FICON joint patents with IBM on technologies such as the FICON bridge card and control unit port (CUP). Brocade helped IBM develop Fibre Connection (FICON), and in 2000 the 1st certified IBM FICON network infrastructure, using 1Gb/sec ED5000 Directors, was deployed. Brocade has the only FICON architecture certification program (BCAF) in the industry. Brocade manufactured the 9032-5 ESCON director for IBM, and pioneered ESCON channel extension emulation technology. Brocade has continued its heritage of mainframe storage networking thought leadership with 9 generations of FICON directors. These products include the current industry-leading FICON directors, such as the DCX and DCX 8510, and FICON channel extension, such as the Brocade 7800 and FX8-24 extension blade.

Reliability, Availability and Serviceability

The largest corporations in the world literally run their businesses on mainframes.
Government institutions in many countries worldwide also rely on the mainframe for their critical computing needs. RAS qualities for these mission-critical environments are of the utmost importance. Mainframe practitioners in these organizations avoid risk at all costs. They never want to suffer an unscheduled outage, and they want to minimize, if not outright eliminate, scheduled or planned outages. Mainframes such as the IBM zEnterprise have historically been the rock-solid pillar of computing RAS. Mainframe practitioners have a history of creating I/O infrastructures that have "5 nines" availability. For FICON channel connectivity to mainframe-attached storage, these same organizations require a FICON director platform that offers the same levels of RAS as the mainframe itself. The Brocade Gen5 DCX 8510 is the ideal FICON director for these RAS requirements.

The Brocade Gen5 DCX 8510 FICON Director features a modular, high-availability architecture that supports these mission-critical mainframe environments. The Brocade Gen5 DCX 8510 chassis has been engineered from inception for "5 nines" of availability by providing multiple fans (supporting hot aisle-cool aisle), multiple fan connectors, dual-core blade internal connectivity, dual control processors, dual power supplies, a passive backplane and dual I/O timing clocks. These features and the switching design of the Brocade Gen5 DCX 8510 result in leading mean time between failures (MTBF) and mean time to recovery or repair (MTTR) numbers. In a recent study performed with a sample size of 26,593 Brocade products, the average yearly downtime was 0.53 minutes per year, for an availability rate of 99.99984%. It is this kind of availability that consistently leads OEM partners such as Hitachi Data Systems to praise Brocade products for their quality.
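The availability figures quoted above are simple arithmetic on annual downtime, which can be checked as follows (rounding conventions in the cited study may differ slightly from this sketch):

```python
# Convert minutes of downtime per year into an availability percentage.
def availability_pct(downtime_min_per_year, minutes_per_year=365.25 * 24 * 60):
    return 100 * (1 - downtime_min_per_year / minutes_per_year)

print(round(availability_pct(5.26), 3))   # 99.999: the classic "5 nines"
print(round(availability_pct(0.53), 4))   # 99.9999: better than 5 nines
```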
Proactive Performance Management

Brocade Fabric Vision was developed as part of a continuing effort to improve overall application availability and reduce complexity. It is an optional licensed suite of monitoring features in the Fabric Operating System (FOS) that runs on the Fibre Channel switches and directors. Brocade Network Advisor, an easy-to-use interface, provides users with a set of tools to display and manage the Fabric Vision features. All Fabric Vision features can be displayed on the Network Advisor dashboard. Brocade Fabric Vision requires FOS 7.2.0d for System z environments, and is managed with Brocade Network Advisor 12.1.3.
Brocade Fabric Vision technology provides a breakthrough hardware and software solution that maximizes uptime, simplifies FICON SAN management and provides unprecedented visibility and insight across the storage network. Offering innovative diagnostic, monitoring and management capabilities, Fabric Vision technology helps administrators avoid problems, maximize application performance and reduce operational costs. IT organizations with large, complex or highly virtualized data center environments often require advanced tools to help them more effectively monitor and manage their storage infrastructure. Developed with these IT organizations in mind, Fabric Vision technology also includes several breakthrough diagnostic, monitoring and management capabilities. These capabilities dramatically simplify day-to-day FICON SAN administration and provide unprecedented visibility across the storage network.

Fabric Vision technology is tightly integrated with Brocade Network Advisor, providing customizable health and performance dashboard views. These views help to pinpoint problems faster, simplify SAN configuration and management, and reduce operational costs. Through Brocade Network Advisor, administrators can:

■■ Quickly and easily configure and monitor data center fabrics based on Brocade's Monitoring and Alerting Policy Suite (MAPS) groups and policies.
■■ Identify, monitor and analyze data and application flows to maximize performance.
■■ Reduce time spent on repetitive tasks by deploying MAPS policies and rules across the fabric, or multiple fabrics, from a single dialog.
■■ Run diagnostic tests on optics and cables to quickly identify and isolate potential fabric issues.
■■ Automatically monitor and detect network congestion in the fabric, and identify which devices or hosts are impacted by a bottlenecked port.
Scalability

With the advent of the zBX and the zEnterprise Unified Resource Manager, private cloud computing centered on the IBM zEnterprise has emerged as a "hot topic." Cloud computing requires a highly scalable (hyper-scale) storage networking architecture to support it. Hyper-Scale Inter-Chassis Link (ICL) is a unique Brocade Gen5 DCX 8510 feature that provides connectivity among 2 or more Brocade 8510-4 or 8510-8 chassis. This is the 2nd generation of ICL technology from Brocade, with optical QSFP (Quad Small Form-factor Pluggable) connections; the 1st generation used a copper connector. Each ICL connects the core routing blades of two 8510 chassis and provides up to 64Gb/sec of throughput within a single cable. The Brocade 8510-8 allows up to 32 QSFP ports, and the 8510-4 allows up to 16 QSFP ports, to help preserve switch ports for end devices.

This 2nd generation of Brocade optical ICL technology, based on QSFP technology, provides a number of benefits to the organization. Brocade has improved ICL connectivity over the use of copper connectors by upgrading to an optical form factor. With this improvement, Brocade has also increased the distance of the connection from 2 meters to 50 meters. QSFP combines 4 cables into 1 cable per port, significantly reducing the number of ISL cables the customer needs to run. Since the QSFP connections reside on the core blades within each 8510, they do not use up connections on the slot line cards. This improvement frees up to 33% of the available ports for additional server and storage connectivity.

Dual-chassis backbone topologies connected through low-latency ICL connections are ideal in a FICON environment. The majority of FICON installations have switches that are connected in dual or triangular topologies, using ISLs to meet the FICON requirement for low latency between switches. New 64Gb/sec QSFP-based ICLs enable simpler, flatter, low-latency chassis topologies spanning a distance of up to 50 meters with off-the-shelf cables.
They reduce interswitch cables by 75% and preserve 33% of front-end ports for servers and storage, leading to fewer cables and more usable ports in a smaller footprint.
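The cable arithmetic behind those figures is easy to verify. The sketch below is illustrative only: the 256Gb/sec inter-chassis bandwidth target is an invented example, while the 16Gb/sec lane rate and 4-lanes-per-QSFP bundling come from the text above.

```python
# Each Gen5 lane runs at 16Gb/sec; a QSFP ICL bundles 4 lanes into 1 cable.
LANE_GBPS = 16
LANES_PER_QSFP = 4
ICL_GBPS = LANE_GBPS * LANES_PER_QSFP           # 64Gb/sec per ICL cable

def cables_needed(required_gbps: int) -> tuple[int, int]:
    """Return (individual ISL cables, QSFP ICL cables) for a bandwidth target."""
    isl = -(-required_gbps // LANE_GBPS)         # ceiling division
    icl = -(-required_gbps // ICL_GBPS)
    return isl, icl

# Hypothetical 256Gb/sec requirement between two chassis:
isl, icl = cables_needed(256)
reduction = 1 - icl / isl
print(f"{isl} ISL cables vs {icl} ICL cables: {reduction:.0%} fewer cables")
# → 16 ISL cables vs 4 ICL cables: 75% fewer cables
```

Bundling 4 lanes into 1 cable reduces the cable count by exactly 75% whenever the bandwidth target is a multiple of 64Gb/sec, which matches the claim above.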
Pair the 2 Platforms Together

Traditional (z/OS) Mainframe Environments

In a "traditional" z/OS mainframe environment, RAS and performance are the key concerns for most organizations. These characteristics provide the stability for the mainframe-based applications on which the largest companies in the world run their businesses. Dr. Thomas E. Bell, winner of the Computer Measurement Group (CMG) Michelson Award for lifetime achievement in the computer performance field, once famously commented that "all CPUs wait at the same speed." Likewise, Dr. Steve Guendert, a CMG board member, has commented in his blog that "The IBM zEnterprise is a hungry machine, and its users need to feed the I/O beast." Response time means money in these environments: the ability to process transactions more rapidly gives companies a competitive advantage in today's financial industry. Hitachi Virtual Storage Platform G1000 and Brocade DCX 8510, together, make sure the "I/O beast" is fed.

Linux on the Mainframe

A 2011 IDC report indicated that approximately 19% of the processing power on all mainframes being shipped is intended for Linux, and IBM has been quoted as saying that 32% of its zEnterprise installed base is running Integrated Facility for Linux (IFL) specialty engines. Whether Linux runs as a guest under z/VM or natively in an LPAR, it is an important trend that cannot be ignored, and one that has been growing since the 2005 introduction of support for NPIV on System z. IT organizations are realizing significant cost savings by moving to Linux on System z: savings in hardware acquisition, software licensing and operational costs such as power and cooling. Hitachi VSP G1000 and Brocade Gen5 DCX 8510 are the ideal choice for these Linux environments. VSP G1000 offers very powerful virtualization, support for NPIV, and both Dynamic Provisioning and Dynamic Tiering.
Brocade Gen5 DCX 8510 offers full support for NPIV, and its Virtual Fabrics functionality allows for highly secure separation of z/OS data traffic from Linux traffic on the FICON director.

FICON and FCP Intermix

FICON and FCP Intermix, or PIM, is another growing trend in mainframe environments. Linux on System z has been the major driver of this trend, since it often leads mainframe end users to run FCP channels alongside FICON channels on the same machine. IBM's recent announcement and general availability of support for Windows blade servers on the zEnterprise BladeCenter Extension (zBX) is likely to drive even broader acceptance of PIM as a storage networking architecture. The virtualization, performance, scalability and tiering capabilities of Hitachi VSP G1000 make it an ideal disk storage platform for a PIM storage architecture. The performance and Virtual Fabrics capabilities of the DCX 8510, coupled with Brocade's immense open systems SAN experience, make it the ideal director platform to pair with VSP G1000 in a PIM architecture.

Private Cloud

The ideas behind cloud computing are well known to experienced mainframers, who remember "service bureau computing." Private cloud computing is a "hot topic": it is seeing broad adoption, and the concept of IBM zEnterprise systems at the center of a private cloud is gaining considerable traction. Private cloud computing relies on extensive virtualization, not just of servers and applications but of everything in the data center, most notably the storage devices and the network. Hitachi Virtual Storage Platform G1000 paired with Brocade Gen5 DCX 8510 creates the ideal architecture for a mainframe-centric private cloud.
Conclusion

A networked FICON storage architecture for your mainframe is a well-documented industry best practice, for reasons both technical and financial. Networked storage architectures beat direct-attached architectures in terms of RAS, performance, scalability and long-run costs. The latest I/O enhancements to IBM mainframes, such as Dynamic Channel-path Management (DCM) and z/OS Discovery and Auto-Configuration (zDAC), require a networked storage architecture (with FICON directors) if the end user wishes to take advantage of them.

The IBM zEnterprise offers unprecedented performance, scalability and innovative new features, such as the zBX and support for Windows. Taking full advantage of a zEnterprise requires an equally capable storage system and FICON director platform for connectivity. Hitachi Virtual Storage Platform G1000 paired with Brocade Gen5 DCX 8510 is the ideal combination for zEnterprise mainframes, whether the environment is traditional z/OS, Linux, PIM or private cloud. Backed by the experience of Hitachi Data Systems and Brocade, VSP G1000 and DCX 8510 are the best platforms in the industry for mainframe data centers.
© Hitachi Data Systems Corporation 2014. All rights reserved. HITACHI is a trademark or registered trademark of Hitachi, Ltd. Universal Storage Platform, ShadowImage and TrueCopy are trademarks or registered trademarks of Hitachi Data Systems Corporation. IBM, FICON, ESCON, System z, z/OS, zEnterprise, z/VM, z9, z10, S/390, z/VSE, FlashCopy, XRC, GDPS, HyperSwap and DS8000 are trademarks or registered trademarks of International Business Machines. Microsoft and Windows are trademarks or registered trademarks of Microsoft Corporation. All other trademarks, service marks, and company names are properties of their respective owners.

Notice: This document is for informational purposes only, and does not set forth any warranty, expressed or implied, concerning any equipment or service offered or to be offered by Hitachi Data Systems Corporation.

WP-432-D DG April 2014

Corporate Headquarters
2845 Lafayette Street
Santa Clara, CA 95050-2639 USA
www.HDS.com  community.HDS.com

Regional Contact Information
Americas: +1 408 970 1000 or info@hds.com
Europe, Middle East and Africa: +44 (0) 1753 618000 or info.emea@hds.com
Asia Pacific: +852 3189 7900 or hds.marketing.apac@hds.com