BigMemory Cuts Mainframe Costs by 80% for Top Global Reservation System
BigMemory Reduces Mainframe Costs
Big Results for a Top Global Reservation System
This white paper documents the deployment of
Terracotta’s BigMemory to increase capacity and reduce
mainframe use for one of the largest international
reservation systems in production today. Among the
results of the deployment were a reduction of 500 million
daily mainframe transactions (80 percent of daily load), 50
percent faster response times, a 20x increase in capacity
and 99.99 percent uptime.
The Challenge
The customer was faced with the challenge of expanding capacity to support rapidly growing
traffic while simultaneously protecting core business functions, providing additional value-added
services and significantly reducing costs.
The existing production system relied on an IBM® System z® mainframe to manage all business-critical transactional data. The mainframe was capable of a maximum of 10,000 transactions per second (TPS), where each transaction translated into a business request (read or write) for a blob of data. The average payload of each request was 50 kilobytes (KB).
Adding more capacity to the mainframe was cost-prohibitive for new initiatives. The customer
initiated development of a new middleware architecture that would run on inexpensive
commodity hardware and scale independently of the mainframe, yielding a higher return for new
initiatives and lowering capital expenditure for the core business.
A major part of the proposed middleware architecture consisted of a common data service layer
that would store critical business data in ultra-fast machine memory, backed by the mainframe
as the system of record.
TABLE OF CONTENTS
The Challenge
Customer Requirements
Initial Architecture
Solution Architecture with Terracotta BigMemory
BigMemory’s In-Memory Data Management Layer
Terracotta Server Array
Conclusion
BUSINESS WHITE PAPER
Get There Faster
Customer Requirements
Scalability: The service must scale to meet business growth requirements while keeping operational and development costs to a minimum.
Availability: The service must meet the cross-enterprise Service Level Agreement (SLA) of 99.99 percent uptime.
Performance: The service must match the transactional capacity of the mainframe.
Operations: The service should provide a rich monitoring and management tool set.
Initial Architecture
The architecture prior to the introduction of the Terracotta BigMemory data layer consisted of clusters of multiple applications connected to a back-end mainframe via MQSeries® for TPF.
Solution Architecture with Terracotta BigMemory
The solution architecture used Terracotta BigMemory to replace the mainframe for more than 99 percent of the read and write transactions. The data access layer was re-implemented as a scalable in-memory service behind a message queue. The in-memory service is available enterprise-wide, providing a common, scalable means to offload mainframe usage with predictable performance and latency.
Data lookups are read from the in-memory store, faulting to the mainframe only on a cache
miss. Data updates are written directly to the in-memory store and written asynchronously to the
mainframe via a durable write-behind queue.
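The two data paths described above can be sketched in plain Java. This is an illustrative model only, not the product's API: the class name `DataServiceLayer` and the `Function` standing in for an MQ/TPF read are invented for the example.

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Function;

/** Illustrative sketch of the two data paths: reads fault to the system of
 *  record only on a cache miss; writes hit the in-memory store synchronously
 *  and drain to the mainframe asynchronously via a write-behind queue. */
class DataServiceLayer {
    private final Map<String, String> inMemoryStore = new ConcurrentHashMap<>();
    private final LinkedBlockingQueue<Map.Entry<String, String>> writeBehindQueue =
            new LinkedBlockingQueue<>();
    private final Function<String, String> mainframeLookup; // stands in for an MQ/TPF read

    DataServiceLayer(Function<String, String> mainframeLookup) {
        this.mainframeLookup = mainframeLookup;
    }

    /** Read path: serve from memory, faulting to the mainframe only on a miss. */
    String get(String key) {
        return inMemoryStore.computeIfAbsent(key, mainframeLookup);
    }

    /** Write path: update memory immediately, queue the mainframe write. */
    void put(String key, String value) {
        inMemoryStore.put(key, value);
        writeBehindQueue.add(new SimpleEntry<>(key, value)); // durable queue in production
    }

    /** Drain one queued update (normally done by a background consumer thread). */
    Map.Entry<String, String> drainOne() {
        return writeBehindQueue.poll();
    }
}
```

In the deployed system the write-behind queue is durable, so queued mainframe updates survive a node failure before they are drained to MQ/TPF.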
Figure 1: Initial architecture without Terracotta’s distributed cache. Three application tiers connect directly to an IBM System z mainframe over MQ/TPF: the Travel Agent Network (16 application servers, 3,500 TPS), a Web Services Cluster (hundreds of application servers, 4,500 TPS) and a major travel website (12 application servers, 1,000 TPS).
The customer’s 500-millisecond SLA requires that cache lookups happen very fast. To minimize
latency, the in-memory service uses a layered caching strategy that keeps hot data in memory as
close to upstream applications as possible.
The top layer (“L1 Cache Layer” in Figure 3) is a scalable cluster of Java® processes on commodity hardware that implements the cache service’s message-oriented get/put API. The L1 cache layer is backed by a scalable and highly available Terracotta server array (“L2 Cache Layer” in Figure 3) that also runs on commodity hardware.
BigMemory’s In-Memory Data Management Layer
Each L1 node uses the Ehcache library to address cached data. The Ehcache library transparently
keeps a hot set of cache data in memory for low-latency access. For operations on a cache
element not already in memory, Ehcache automatically requests that cache entry from the
Terracotta server array.
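The layered lookup described above can be approximated with standard-library Java alone. This models the behavior, not the Ehcache API: `TieredCache` and its methods are invented for illustration, with a bounded LRU map standing in for the in-process hot set and a plain map standing in for the Terracotta server array.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Illustrative two-tier lookup modeled after the L1/L2 split: a small,
 *  LRU-bounded hot set in the L1 process, backed by a larger L2 store. */
class TieredCache {
    private final int hotSetSize;
    private final Map<String, String> l2Store;          // stands in for the Terracotta server array
    private final LinkedHashMap<String, String> hotSet; // in-process hot set, LRU-evicted

    TieredCache(int hotSetSize, Map<String, String> l2Store) {
        this.hotSetSize = hotSetSize;
        this.l2Store = l2Store;
        this.hotSet = new LinkedHashMap<>(16, 0.75f, true) { // access order = LRU
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                return size() > TieredCache.this.hotSetSize;
            }
        };
    }

    /** Serve from the hot set when possible; otherwise fetch from L2 and promote. */
    String get(String key) {
        String value = hotSet.get(key);
        if (value == null) {
            value = l2Store.get(key);                   // Ehcache performs this fetch transparently
            if (value != null) hotSet.put(key, value);  // promote into the hot set
        }
        return value;
    }

    boolean inHotSet(String key) { return hotSet.containsKey(key); }
}
```

The point of the tiering is that repeat reads of hot keys never leave the L1 process; only cold keys incur a network hop to the L2 layer.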
The L1 layer is fault tolerant and highly available. Should an L1 node fail, its unanswered cache
requests will be handled by another L1 node. All in-memory data is backed by BigMemory’s
Terracotta server array, which is fault tolerant and highly available. The L1 layer is also
independently scalable as L1 nodes may be added to meet increasing service load.
Figure 2: Solution architecture with a scalable cache service using Terracotta BigMemory. The same three tiers (Travel Agent Network, 16 application servers at 3,500 TPS; Web Services Cluster, hundreds of application servers at 4,500 TPS; major travel website, 12 application servers at 1,000 TPS) now reach a Data Service API over MOM/MQ. The cache service contacts the IBM System z mainframe via MQ/TPF only for lookups on a cache miss and for asynchronous updates through a durable write-behind queue.
Figure 3: Detail of BigMemory’s service architecture. Behind a MOM/MQ interface, the L1 cache layer runs Java applications with in-process BigMemory on commodity app servers (scale up). The L1 nodes connect over TCP to the L2 cache layer: a Terracotta server array of commodity-server stripes, each with an active server and a mirror server, providing durability, mirroring and striping, and scaling out by adding stripes. A developer console plug-in and an operations center provide monitoring.