1
Hadoop Summit 2013- June 26th, 2013
Move to Hadoop, Go Faster and Save Millions
- Mainframe Legacy Modernization
Sunilkumar Kakade – Director IT
Aashish Chandra – DVP, Legacy Modernization
2
Legacy Rides The Elephant
Hadoop is disrupting enterprise IT processing.
3
Recognition - Contributors
• Our Leaders
• Ted Rudman
• Aashish Chandra
• Team
• Simon Thomas
• Sunil Kakade
• Susan Hsu
• Bob Pult
• Kim Havens
• Murali Nandula
• Willa Tao
• Arlene Pynadath
• Nagamani Banda
• Tushar Tanna
• Kesavan Srinivasan
4
The Enterprise Challenge
5
Mainframe Migration - Overview
• In spite of recent advances in computing, many core business
processes are still batch-oriented and run on mainframes.
• Annual mainframe costs run to six figures or more and grow
with capacity needs. To tackle this cost challenge, many
organizations have considered or attempted multi-year
mainframe migration/re-hosting strategies.
6
Batch Processing Characteristics
*Ref: IBM Redbook
Characteristics*
• Large amounts of input data are processed and stored (perhaps
terabytes or more).
• Large numbers of records are accessed, and a large volume of
output is produced.
• Immediate response time is usually not a requirement;
however, jobs must complete within a “batch window”.
• Batch jobs are often designed to run concurrently with online
transactions with minimal resource contention.
7
Batch Processing Characteristics
Key infrastructure requirements:
• Sufficient data storage
• Available processor capacity, or cycles
• Job scheduling
• Programming utilities to process basic operations
(Sort/Filter/Split/Copy/Unload, etc.)
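For illustration only (not part of the original deck): the basic operations these utilities perform can be sketched in a few lines of Python. The function names are invented, and this is a toy model of the semantics, not mainframe code:

```python
# Illustrative sketch (not mainframe code): the basic batch operations
# -- sort, filter, split, copy -- over fixed-width text records.

def sort_records(records, start, length):
    """Sort ascending on a character key, e.g. columns start..start+length-1."""
    return sorted(records, key=lambda r: r[start - 1:start - 1 + length])

def filter_records(records, predicate):
    """Keep only records matching a condition (an include/omit step)."""
    return [r for r in records if predicate(r)]

def split_records(records, n_ways):
    """Split one input into n outputs, round-robin."""
    outputs = [[] for _ in range(n_ways)]
    for i, r in enumerate(records):
        outputs[i % n_ways].append(r)
    return outputs

def copy_records(records):
    """Straight copy of an input dataset."""
    return list(records)

records = ["003ALPHA", "001GAMMA", "002BETA "]
print(sort_records(records, 1, 3))  # sort on the first 3 characters
```

A real batch utility does the same work against datasets on disk rather than in-memory lists, but the operations compose the same way.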
8
Why Hadoop and Why Now?
THE ADVANTAGES:
• Cost reduction
• Alleviate performance bottlenecks
• ETL too expensive and complex
• Mainframe and Data Warehouse processing → Hadoop
THE CHALLENGE:
• Traditional enterprises' lack of awareness
THE SOLUTION:
• Leverage the growing support system for Hadoop
• Make Hadoop the data hub in the Enterprise
• Use Hadoop for processing batch and analytic jobs
9
The Architecture
• Enterprise solutions using Hadoop must be an eco-system
• Large companies have a complex environment:
• Transactional systems
• Services
• EDW and Data marts
• Reporting tools and needs
• We needed to build an entire solution
10
MetaScale's Hadoop Ecosystem
11
Hadoop-based Ecosystem for Legacy System Modernization
[Architecture diagram comparing two stacks. Legacy side: mainframe batch processing (COBOL/JCL, VSAM) and enterprise systems (sales, customer, price, product) on Teradata, DB2, Oracle, UDB, and MySQL, fronted by J2EE/WebSphere, REST APIs, JDBC/iBATIS, JAXB, Quartz, and jQuery/AJAX interfaces. MetaScale side: the same enterprise systems backed by Hadoop, with batch processing in Hive, Pig, and Ruby/MapReduce, HBase and MySQL for serving, SOLR for search, and a J2EE/JBoss/Spring front end.]
12
Mainframe Batch Processing Architecture
[Diagram: user interfaces and data sources feed input into mainframe batch processing; resultant data flows out to the data warehouse and external systems, while historical data sources are kept for input data retention.]
13
MetaScale Batch Processing Architecture With Hadoop
[Diagram: the same flow with Hadoop. User interfaces and data sources feed input into the Hadoop ecosystem, where MapReduce-based batch processing runs; resultant data moves back to the non-Hadoop platforms (data warehouse, external systems) for consumption.]
14
Typical Batch Processing Units (JCL) on Mainframe
Batch Processing - JOB FLOW
[Diagram: on the mainframe, an application is a chain of JCL job steps, each consuming the previous step's output. JCL1 (Application 1): SORT → SPLIT → SORT → COBOL → FILTER → FORMAT. JCL2 (Application 1) and JCL3 (Application 2) follow: COPY → COBOL → FORMAT, ending in LOAD TO DATABASE. Input arrives from user interfaces and data sources; resultant data flows to external systems and the data warehouse.]
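A JCL job of this shape is essentially a linear pipeline: each step reads the dataset the previous step wrote. As a hypothetical illustration (step names and logic invented, not taken from the deck), the chaining can be sketched in Python:

```python
# Hypothetical sketch: a JCL-style job as a chain of steps,
# each step consuming the previous step's output dataset.

def step_sort(records):
    return sorted(records)

def step_filter(records):
    # drop records flagged with '#', standing in for an OMIT condition
    return [r for r in records if not r.startswith("#")]

def step_format(records):
    return [r.upper() for r in records]

def run_job(input_records, steps):
    """Run steps in order, like the sequential steps of one JCL job."""
    data = input_records
    for step in steps:
        data = step(data)
    return data

result = run_job(["beta", "#skip", "alpha"],
                 [step_sort, step_filter, step_format])
print(result)  # ['ALPHA', 'BETA']
```

The point of the migration slides that follow is that each link in such a chain can be replaced one-for-one, which is why the job structure can survive the move off the mainframe.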
15
Batch Processing Migration With Hadoop
Seamless migration of high-MIPS processing jobs with no application alteration, on a commodity-hardware-based software framework.
Invention: migration methodology for legacy applications to commodity hardware.
[Diagram: the same job flow as the previous slide, with each mainframe step replaced one-for-one by a Pig/MapReduce step; JCL2 and JCL3 (COPY → COBOL → FORMAT → LOAD TO DATABASE) remain, and input and resultant data still flow between user interfaces, data sources, external systems, and the data warehouse.]
16
Mainframe to Hadoop-PIG conversion example
Mainframe JCL
//PZHDC110 EXEC PGM=SORT
//SORTIN DD DSN=PZ.THDC100.PLMP.PRC,
// DISP=(OLD,DELETE,KEEP)
//SORTOUT DD DSN=PZ.THDC110.PLMP.PRC.SRT,LABEL=EXPDT=99000,
// DISP=(,CATLG,DELETE),
// UNIT=CART,
// VOL=(,RETAIN),
// RECFM=FB,LRECL=40
//SYSIN DD DSN=KMC.PZ.PARMLIB(PZHDC11A),
// DISP=SHR
//SYSOUT DD SYSOUT=V
//SYSUDUMP DD SYSOUT=D
//*__________________________________________________
//* SORT FIELDS=(1,9,CH,A)
- A 500-million-record sort took 45 minutes of clock time
on an A168 mainframe
PIG
a = LOAD 'data' AS (f1:chararray);
b = ORDER a BY f1;
- The same 500-million-record sort took
less than 2 minutes
More benchmarking studies are in
progress
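For illustration only (not from the original deck): the JCL step sorts ascending on the 9-character key in columns 1-9 (SORT FIELDS=(1,9,CH,A)), and the Pig ORDER BY reproduces that once the key is loaded as a field. A minimal Python sketch of the same key semantics, with invented sample records:

```python
# Illustrative sketch: SORT FIELDS=(1,9,CH,A) means sort ascending on
# the character key in columns 1-9 of each record.

def jcl_sort(records):
    # JCL columns 1-9 correspond to Python slice [0:9]
    return sorted(records, key=lambda r: r[0:9])

records = [
    "ZZZ000003 payload-c",
    "AAA000001 payload-a",
    "MMM000002 payload-b",
]
for r in jcl_sort(records):
    print(r[:9])
```

At 500 million records the data no longer fits one machine's memory, which is exactly what the Pig/MapReduce version distributes across the cluster.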
18
Mainframe Migration – Value Proposition
[Diagram: drivers — high TCO, resource crunch, inert business practices — lead to mainframe migration along three paths: Optimize, Rewrite (Pig/Hadoop), and Convert.]
• Mainframe ONLINE: tool-based conversion; convert COBOL & JCL to Java
• Mainframe Optimization: 5%–10% MIPS reduction; quick wins with low-hanging fruit
• Mainframe BATCH: ETL modernization; move batch processing to Hadoop
Outcomes: cost savings, open-source platform, simpler & easier code, business agility, business & IT transformation, modernized systems, IT efficiencies.
• Companies can SAVE 60%–80% of their mainframe costs with modernization
• Typically 60%–65% of MIPS on mainframes are used by BATCH processing
• An estimated 45% of FUNCTIONALITY in mainframes is never used
19
Mainframe Migration – Traditional Approach
• Traditional approaches to mainframe elimination call for
large initial investments and carry significant risks – it is
hard to match mainframe performance and reliability.
• Many organizations still use the mainframe for batch
processing applications. Of the solutions proposed to move
expensive mainframe computing to other distributed
proprietary platforms, most rely on end-to-end migration of
applications.
20
Mainframe Batch Processing MetaScale Architecture
• Using Hadoop, Sears/MetaScale developed an innovative
alternative that enables batch processing migration to the
Hadoop ecosystem without the risks, time, and costs of
other methods.
• The solution has been adopted in multiple businesses with
excellent results and associated cost savings as
mainframes are physically eliminated or downsized:
millions of dollars in savings based on MIPS reductions have
been seen.
21
MetaScale Mainframe Migration Methodology
1. Implement a Hadoop-centric reference architecture
2. Move enterprise batch processing to Hadoop
3. Make Hadoop the single point of truth
4. Massively reduce ETL by transforming within Hadoop
5. Move results and aggregates back to legacy systems for consumption
6. Retain, within Hadoop, source files at the finest granularity for re-use
Key to our approach:
1) allowing users to continue to use familiar consumption interfaces
2) providing inherent HA
3) enabling businesses to unlock previously unusable data
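The middle of this methodology — keep raw data in Hadoop at the finest granularity, transform and aggregate there, and ship only the small results back to legacy consumers — can be sketched as a pattern. The following is a hypothetical Python illustration (function names and data invented, not MetaScale's implementation):

```python
# Hypothetical sketch of "transform within the data hub, ship
# aggregates back": raw records are retained untouched for re-use,
# and only a small aggregate is exported to the legacy consumer.

from collections import defaultdict

def ingest(raw_records):
    """Retain source records at finest granularity (methodology step 6)."""
    return list(raw_records)

def aggregate_sales(raw_records):
    """Transform inside the hub instead of an external ETL tool (step 4)."""
    totals = defaultdict(float)
    for store, amount in raw_records:
        totals[store] += amount
    return dict(totals)

def export_to_legacy(aggregates):
    """Move only results back to legacy systems for consumption (step 5)."""
    return sorted(aggregates.items())

raw = [("store1", 10.0), ("store2", 5.0), ("store1", 2.5)]
retained = ingest(raw)  # raw stays available for later re-use
print(export_to_legacy(aggregate_sales(retained)))
# [('store1', 12.5), ('store2', 5.0)]
```

Because the raw records are kept, a new question later only needs a new aggregation, not a re-extract from the source systems — which is where the ETL reduction comes from.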
22
Mainframe Migration - Benefits
“MetaScale is the market leader in moving mainframe batch processing to Hadoop”
Skills & Resources
• Readily available resources & commodity skills
• Access to latest technologies
Transform I.T.
• IT operational efficiencies
• Moved 7,000 lines of COBOL code to under 50 lines in Pig
• Ancient systems no longer a bottleneck for business
• Mission-critical “Item Master” application in COBOL/JCL being converted by our tool to Java (JOBOL)
• Modernized COBOL, JCL, DB2, VSAM, IMS & so on
Business Agility
• Faster time to market
• Reduced batch processing in COBOL/JCL from over 6 hrs to less than 10 min in Pig Latin on Hadoop
• Simpler, easily maintainable code
• Massively parallel processing
Cost Savings
• Significant reduction in ISV costs & mainframe software license fees
• Open-source platform
• Saved ~$2MM annually within 13 weeks through MIPS optimization efforts
• Reduced 1000+ MIPS by moving batch processing to Hadoop
23
Summary
• Hadoop can revolutionize enterprise workloads and make the business
agile
• Can reduce strain on legacy platforms
• Can reduce cost
• Can bring new business opportunities
• Must be an eco-system
• Must be part of an overall data strategy
• Not to be underestimated
24
The Learning
HADOOP
• We can dramatically reduce batch processing times for mainframe and EDW
• We can retain and analyze data at a much more granular level, with longer history
• Hadoop must be part of an overall solution and eco-system
IMPLEMENTATION
• We can reliably meet our production deliverable time-windows by using Hadoop
• We can largely eliminate the use of traditional ETL tools
• New tools allow an improved user experience on very large data sets
UNIQUE VALUE
• We developed tools and skills – the learning curve is not to be underestimated
• We developed experience in moving workload from expensive, proprietary mainframe and EDW
platforms to Hadoop with spectacular results
Over two years of experience using Hadoop for Enterprise legacy workloads.
25
The Horizon – What do we need next?
• Automation tools and techniques that ease the Enterprise integration of
Hadoop
• Educate traditional Enterprise IT organizations about the possibilities and
reasons to deploy Hadoop
• Continue development of a reusable framework for legacy workload
migration
26
Legacy Modernization Service Offerings
• Leveraging our patent-pending and award-winning niche products, we reduce
mainframe MIPS, modernize ETL processing, and transform business and IT
organizations to open-source, cloud-based, Big Data and agile platforms
• MetaScale Legacy Modernization offers the following services –
 Legacy Modernization Assessment Services
 Mainframe Migration Services
• MIPS Reduction Services
• Mainframe Application Migration
 Legacy Distributed Modernization
• ETL Modernization Services
• Modernize Proprietary Systems and Databases
 Managed Applications Support
 Support Transition Services
27
For more information, visit:
www.metascale.com
Follow us on Twitter @LegacyModernizationMadeEasy
Join us on LinkedIn: www.linkedin.com/company/metascale-llc
Legacy Modernization Made Easy!
Practical NoSQL: Accumulo's dirlist ExamplePractical NoSQL: Accumulo's dirlist Example
Practical NoSQL: Accumulo's dirlist Example
 
HBase Global Indexing to support large-scale data ingestion at Uber
HBase Global Indexing to support large-scale data ingestion at UberHBase Global Indexing to support large-scale data ingestion at Uber
HBase Global Indexing to support large-scale data ingestion at Uber
 
Scaling Cloud-Scale Translytics Workloads with Omid and Phoenix
Scaling Cloud-Scale Translytics Workloads with Omid and PhoenixScaling Cloud-Scale Translytics Workloads with Omid and Phoenix
Scaling Cloud-Scale Translytics Workloads with Omid and Phoenix
 
Building the High Speed Cybersecurity Data Pipeline Using Apache NiFi
Building the High Speed Cybersecurity Data Pipeline Using Apache NiFiBuilding the High Speed Cybersecurity Data Pipeline Using Apache NiFi
Building the High Speed Cybersecurity Data Pipeline Using Apache NiFi
 
Supporting Apache HBase : Troubleshooting and Supportability Improvements
Supporting Apache HBase : Troubleshooting and Supportability ImprovementsSupporting Apache HBase : Troubleshooting and Supportability Improvements
Supporting Apache HBase : Troubleshooting and Supportability Improvements
 
Security Framework for Multitenant Architecture
Security Framework for Multitenant ArchitectureSecurity Framework for Multitenant Architecture
Security Framework for Multitenant Architecture
 
Presto: Optimizing Performance of SQL-on-Anything Engine
Presto: Optimizing Performance of SQL-on-Anything EnginePresto: Optimizing Performance of SQL-on-Anything Engine
Presto: Optimizing Performance of SQL-on-Anything Engine
 
Introducing MlFlow: An Open Source Platform for the Machine Learning Lifecycl...
Introducing MlFlow: An Open Source Platform for the Machine Learning Lifecycl...Introducing MlFlow: An Open Source Platform for the Machine Learning Lifecycl...
Introducing MlFlow: An Open Source Platform for the Machine Learning Lifecycl...
 
Extending Twitter's Data Platform to Google Cloud
Extending Twitter's Data Platform to Google CloudExtending Twitter's Data Platform to Google Cloud
Extending Twitter's Data Platform to Google Cloud
 
Event-Driven Messaging and Actions using Apache Flink and Apache NiFi
Event-Driven Messaging and Actions using Apache Flink and Apache NiFiEvent-Driven Messaging and Actions using Apache Flink and Apache NiFi
Event-Driven Messaging and Actions using Apache Flink and Apache NiFi
 
Securing Data in Hybrid on-premise and Cloud Environments using Apache Ranger
Securing Data in Hybrid on-premise and Cloud Environments using Apache RangerSecuring Data in Hybrid on-premise and Cloud Environments using Apache Ranger
Securing Data in Hybrid on-premise and Cloud Environments using Apache Ranger
 
Big Data Meets NVM: Accelerating Big Data Processing with Non-Volatile Memory...
Big Data Meets NVM: Accelerating Big Data Processing with Non-Volatile Memory...Big Data Meets NVM: Accelerating Big Data Processing with Non-Volatile Memory...
Big Data Meets NVM: Accelerating Big Data Processing with Non-Volatile Memory...
 
Computer Vision: Coming to a Store Near You
Computer Vision: Coming to a Store Near YouComputer Vision: Coming to a Store Near You
Computer Vision: Coming to a Store Near You
 
Big Data Genomics: Clustering Billions of DNA Sequences with Apache Spark
Big Data Genomics: Clustering Billions of DNA Sequences with Apache SparkBig Data Genomics: Clustering Billions of DNA Sequences with Apache Spark
Big Data Genomics: Clustering Billions of DNA Sequences with Apache Spark
 

Dernier

Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024BookNet Canada
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr BaganFwdays
 
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxThe Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxLoriGlavin3
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brandgvaughan
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLScyllaDB
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenHervé Boutemy
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxLoriGlavin3
 
From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .Alan Dix
 
Advanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionAdvanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionDilum Bandara
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity PlanDatabarracks
 
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024BookNet Canada
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsSergiu Bodiu
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024Lorenzo Miniero
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyAlfredo García Lavilla
 
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptxThe Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptxLoriGlavin3
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.Curtis Poe
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024Lonnie McRorey
 
What is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfWhat is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfMounikaPolabathina
 

Dernier (20)

Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan
 
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxThe Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brand
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQL
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache Maven
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
 
From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .
 
Advanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionAdvanced Computer Architecture – An Introduction
Advanced Computer Architecture – An Introduction
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity Plan
 
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platforms
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easy
 
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptxThe Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024
 
What is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfWhat is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdf
 

Move to Hadoop, Go Faster and Save Millions - Mainframe Legacy Modernization

8
Why Hadoop and Why Now?
THE ADVANTAGES:
• Cost reduction
• Alleviate performance bottlenecks
• ETL too expensive and complex
• Mainframe and Data Warehouse processing → Hadoop
THE CHALLENGE:
• Traditional enterprises' lack of awareness
THE SOLUTION:
• Leverage the growing support system for Hadoop
• Make Hadoop the data hub in the Enterprise
• Use Hadoop for processing batch and analytic jobs
9
The Architecture
• Enterprise solutions using Hadoop must be an eco-system
• Large companies have a complex environment:
  • Transactional system
  • Services
  • EDW and data marts
  • Reporting tools and needs
• We needed to build an entire solution
11
Hadoop-based Ecosystem for Legacy System Modernization
[Diagram: side-by-side stacks. The legacy side runs enterprise sales and customer systems on mainframe COBOL/JCL batch processing over VSAM, with DB2, Oracle, UDB, and Teradata, fronted by J2EE/WebSphere, REST APIs (JAXB, Quartz, jQuery/AJAX), and JDBC/iBATIS. The MetaScale side replaces the batch tier with Hadoop/Pig, Hive, and Ruby/MapReduce, adds HBase, MySQL, and SOLR for storage and search, and fronts it with the same J2EE stack on JBoss/Spring]
12
Mainframe Batch Processing Architecture
[Diagram: user interfaces, data sources, and historical data sources feed input to mainframe batch processing; resultant data flows out to the data warehouse, to data retention, and to external systems]
13
MetaScale Batch Processing Architecture with Hadoop
[Diagram: user interfaces and data sources move input into the Hadoop ecosystem, where MapReduce-based batch processing runs; resultant data moves back to non-Hadoop platforms, the data warehouse, and external systems]
14
Typical Batch Processing Units (JCL) on Mainframe
[Diagram: a batch job flow. JCL1 of Application 1 chains utility steps (SORT, SPLIT, SORT, COBOL program, FILTER, FORMAT); JCL2 of Application 1 and JCL3 of Application 2 follow with COPY, COBOL, and FORMAT steps, ending in a load to the database. Input comes from data sources and user interfaces; resultant data goes to external systems and the data warehouse]
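To make the job flow above concrete, here is a hypothetical miniature of such a step chain in plain Python (illustrative only, not MetaScale code): a SORT on a key column range, an INCLUDE-style FILTER, and an OUTREC-style FORMAT applied to fixed-width 40-byte records.

```python
# Hypothetical miniature of a JCL step chain on fixed-width records.

def sort_step(records, start, length):
    # SORT FIELDS=(start,length,CH,A): ascending character sort on a column range
    return sorted(records, key=lambda r: r[start - 1:start - 1 + length])

def filter_step(records, pos, value):
    # INCLUDE-style filter: keep only records carrying `value` in column `pos`
    return [r for r in records if r[pos - 1] == value]

def format_step(records, start, length):
    # OUTREC-style reformat: project a single field out of each record
    return [r[start - 1:start - 1 + length] for r in records]

records = [
    "00042A JANE".ljust(40),
    "00007A BOB ".ljust(40),
    "00013X EVE ".ljust(40),
]
# Chain the steps as a JCL job would: SORT on cols 1-5,
# FILTER on col 6 == 'A', then FORMAT cols 8-11 for the next step.
out = format_step(filter_step(sort_step(records, 1, 5), 6, "A"), 8, 4)
```

On the mainframe each of these steps is a separate program invocation reading and writing datasets; the point of the migration is that the same chain becomes a handful of Pig statements.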
15
Batch Processing Migration with Hadoop
Seamless migration of high-MIPS processing jobs with no application alteration, on a commodity-hardware-based software framework.
Invention: a migration methodology for legacy applications to commodity hardware.
[Diagram: the job flow from the previous slide, with each utility step of Application 1 replaced by a Pig/MapReduce step; the downstream JCL jobs (COPY, COBOL, FORMAT) and the database load are unchanged]
16
Mainframe to Hadoop-Pig Conversion Example

Mainframe JCL:
//PZHDC110 EXEC PGM=SORT
//SORTIN DD DSN=PZ.THDC100.PLMP.PRC,
// DISP=(OLD,DELETE,KEEP)
//SORTOUT DD DSN=PZ.THDC110.PLMP.PRC.SRT,LABEL=EXPDT=99000,
// DISP=(,CATLG,DELETE),
// UNIT=CART,
// VOL=(,RETAIN),
// RECFM=FB,LRECL=40
//SYSIN DD DSN=KMC.PZ.PARMLIB(PZHDC11A),
// DISP=SHR
//SYSOUT DD SYSOUT=V
//SYSUDUMP DD SYSOUT=D
//*__________________________________________________
//* SORT FIELDS=(1,9,CH,A)
A 500-million-record sort took 45 minutes of clock time on an A168 mainframe.

Pig:
a = LOAD 'data' AS (f1:chararray);
b = ORDER a BY f1;
The same 500-million-record sort took less than 2 minutes. More benchmarking studies are in progress.
18
Mainframe Migration – Value Proposition
[Diagram: drivers (high TCO, resource crunch, inert business practices) feed three migration tracks: optimize the mainframe, rewrite in Pig/Hadoop, convert to Java]
• Mainframe ONLINE: tool-based conversion of COBOL & JCL to Java
• Mainframe Optimization: 5%~10% MIPS reduction; quick wins with low-hanging fruit
• Mainframe BATCH: ETL modernization; move batch processing to Hadoop
Outcomes: cost savings, open-source platform, simpler and easier code, business agility, business and IT transformation, modernized systems, IT efficiencies.
• Companies can SAVE 60%~80% of their mainframe costs with modernization
• Typically 60%~65% of mainframe MIPS are used by BATCH processing
• An estimated 45% of the FUNCTIONALITY in mainframes is never used
19
Mainframe Migration – Traditional Approach
• Traditional approaches to mainframe elimination call for large initial investments and carry significant risks; it is hard to match mainframe performance and reliability.
• Many organizations still use the mainframe for batch processing applications. Several solutions have been proposed for moving expensive mainframe computing to other distributed, proprietary platforms; most of them rely on end-to-end migration of applications.
20
Mainframe Batch Processing: MetaScale Architecture
• Using Hadoop, Sears/MetaScale developed an innovative alternative that enables batch processing migration to the Hadoop ecosystem without the risks, time, and costs of other methods.
• The solution has been adopted in multiple businesses with excellent results and associated cost savings as mainframes are physically eliminated or downsized: millions of dollars in savings from MIPS reductions have been seen.
21
MetaScale Mainframe Migration Methodology
1. Implement a Hadoop-centric reference architecture
2. Move enterprise batch processing to Hadoop
3. Make Hadoop the single point of truth
4. Massively reduce ETL by transforming within Hadoop
5. Move results and aggregates back to legacy systems for consumption
6. Retain, within Hadoop, source files at the finest granularity for re-use
Key to our approach:
1) Allowing users to continue to use familiar consumption interfaces
2) Providing inherent HA
3) Enabling businesses to unlock previously unusable data
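Steps 4 through 6 can be pictured with a toy aggregation. The table and field names below are hypothetical, and a real implementation would run as Pig or MapReduce over HDFS files rather than as in-memory Python; the point is the shape of the flow.

```python
# Toy illustration of methodology steps 4-6: the finest-granularity rows
# stay in Hadoop (step 6), the transform/aggregation happens there
# instead of in a separate ETL tool (step 4), and only the small
# aggregate travels back to the legacy consumer (step 5).
from collections import defaultdict

granular_sales = [                 # retained at finest granularity, for re-use
    ("store01", "2013-06-01", 120.0),
    ("store01", "2013-06-02", 80.0),
    ("store02", "2013-06-01", 200.0),
]

def aggregate_by_store(rows):      # the transform, done inside the cluster
    totals = defaultdict(float)
    for store, _day, amount in rows:
        totals[store] += amount
    return dict(totals)

aggregates = aggregate_by_store(granular_sales)  # only this goes back to legacy
```

Keeping the granular rows in Hadoop is what eliminates repeated ETL extracts: the next question against the same data is answered by another transform in place, not another pull from the mainframe.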
22
Mainframe Migration – Benefits
"MetaScale is the market leader in moving mainframe batch processing to Hadoop"
Cost savings:
• Significant reduction in ISV costs and mainframe software license fees
• Open-source platform
• Saved ~$2MM annually within 13 weeks through MIPS optimization efforts
• Reduced 1,000+ MIPS by moving batch processing to Hadoop
Transform IT, skills, and resources:
• Readily available resources and commodity skills
• Access to the latest technologies
• IT operational efficiencies
• Moved 7,000 lines of COBOL code to under 50 lines of Pig
Business agility:
• Ancient systems no longer a bottleneck for the business
• Faster time to market
• Mission-critical "Item Master" application in COBOL/JCL being converted by our tool to Java (JOBOL)
• Modernized COBOL, JCL, DB2, VSAM, IMS, and so on
• Reduced batch processing from over 6 hours in COBOL/JCL to under 10 minutes in Pig Latin on Hadoop
• Simpler, easily maintainable code
• Massively parallel processing
23
Summary
• Hadoop can revolutionize enterprise workloads and make the business agile
• Can reduce strain on legacy platforms
• Can reduce cost
• Can bring new business opportunities
• Must be an eco-system
• Must be part of an overall data strategy
• Not to be underestimated
24
The Learning
Over two years of experience using Hadoop for enterprise legacy workloads.
HADOOP
• We can dramatically reduce batch processing times for mainframe and EDW
• We can retain and analyze data at a much more granular level, with longer history
• Hadoop must be part of an overall solution and eco-system
IMPLEMENTATION
• We can reliably meet our production deliverable time windows by using Hadoop
• We can largely eliminate the use of traditional ETL tools
• New tools allow an improved user experience on very large data sets
UNIQUE VALUE
• We developed tools and skills; the learning curve is not to be underestimated
• We developed experience in moving workloads from expensive, proprietary mainframe and EDW platforms to Hadoop, with spectacular results
25
The Horizon – What Do We Need Next?
• Automation tools and techniques that ease the enterprise integration of Hadoop
• Educate traditional enterprise IT organizations about the possibilities and reasons to deploy Hadoop
• Continue development of a reusable framework for legacy workload migration
26
Legacy Modernization Service Offerings
• Leveraging our patent-pending and award-winning niche products, we reduce mainframe MIPS, modernize ETL processing, and transform business and IT organizations to open-source, cloud-based, Big Data, and agile platforms
• MetaScale Legacy Modernization offers the following services:
  • Legacy Modernization Assessment Services
  • Mainframe Migration Services
    • MIPS Reduction Services
    • Mainframe Application Migration
  • Legacy Distributed Modernization
    • ETL Modernization Services
    • Modernize Proprietary Systems and Databases
  • Managed Applications Support
  • Support Transition Services
27
For more information, visit: www.metascale.com
Follow us on Twitter @LegacyModernizationMadeEasy
Join us on LinkedIn: www.linkedin.com/company/metascale-llc
Legacy Modernization Made Easy!

Editor's notes

  1. Batch workload can be migrated and run anytime in a fraction of the clock-time leveraging Hadoop.