Unlock Your Data for ML & AI using Data Virtualization
1. Unlock Your Data for ML & AI using Data Virtualization
Mitesh Shah
Senior Cloud Product Manager
June 20, 2019
2. 2
Source: Gartner 2018, Data Virtualization Market Guide
Through 2022, 60% of all organizations will implement
data virtualization as one key delivery style in their data
integration architecture.
3. 3
Key Challenges for Data Integration
● Required expansion of analytics driven by a growing number of data consumers
● Need for agile self-service BI
● Increasing use of third-party data for information agility
● Big Data volumes continue to grow
● Security and data privacy implications becoming core to data strategy
● Reduce or eliminate data latency
● Providing data access irrespective of storage location
● Growth in hybrid and multi-cloud deployments
● Convergence of application and data integration
4. 4
What is Data Virtualization?
1. Connect to disparate data sources
2. Combine related data into views
3. Consume in business applications

DISPARATE DATA SOURCES: Databases & Warehouses, Cloud/SaaS Applications, Big Data, NoSQL, Web, XML, Excel, PDF, Word... (from more structured to less structured)
DATA CONSUMERS: Enterprise Applications, Reporting, BI, Portals, ESB, Mobile, Web, Users (analytical and operational)

CONNECT: multiple protocols and formats; query, search, browse; web automation and indexing
COMBINE: discover, transform, prepare, improve quality, integrate; normalized views of disparate data
CONSUME (PUBLISH): share, deliver, publish, govern, collaborate; request/reply and event-driven delivery; secure delivery via SQL, MDX, web services, and Big Data APIs

“Data virtualization integrates disparate data sources in real time or near-real time to meet demands for analytics and transactional data.”
– Create a Road Map For A Real-Time, Agile, Self-Service Data Platform, Forrester Research, Dec 16, 2015
6. 6
Challenges / Known Facts in Data Management!
✓ The current data landscape is fragmented.
✓ Data Lakes, IoT architectures, SaaS fuel the needs of modern analytics, ML and AI.
✓ Exploring and understanding the data available within your company is a time-consuming task.
✓ Dealing with bureaucracy, different languages and protocols is not easy.
✓ A logical architecture based on a virtualization layer connects the different systems
and exposes them as one, hiding the underlying complexity.
7. 7
Logical Architectures – Brief History
▪ Logical Architectures were first described by Mark Beyer, an analyst at Gartner, in 2009
to capture efforts to expand the then-current data warehouse architectures
▪ Since then, the term “Logical Data Warehouse” has been widely used to present the
natural evolution of analytical architectures
▪ For example, “Adopt the Logical Data Warehouse Architecture to Meet Your Modern
Analytical Needs”. Henry Cook, Gartner April 2018
▪ Other data architectures have also seen their logical counterpart:
• Logical Data Marts
• Logical Data Lakes
▪ In all these cases, a virtualization layer is a key component of the architecture
8. 8
Data Lakes
A data lake is a storage repository that holds a vast
amount of raw data in its native format. The data
structure and requirements are not defined until the
data is needed
The current needs for sophisticated
data-driven intelligence and data
science favored this concept for its
simplicity and power
Hadoop and its ecosystem provided
the foundation that data lakes
required: vast storage and processing
muscle
It also favored the concept of ELT vs
ETL: load data first, transform (maybe) later
9. 9
The Promise of Data Lakes
• Consolidate data in a single physical repository
• No more data integration issues
• Users can get the data they need from the
lake
• Store massive amounts of raw, unfiltered data
– maintain structure and fidelity of data
• Using cheap commodity hardware
• 100X cheaper than EDW appliance
• Take advantage of processing power of
Hadoop for data analysis
10. 10
“…Data lakes lack semantic consistency and
governed metadata. Meeting the needs of
wider audiences requires curated repositories
with governance, semantic consistency and
access controls.”
11. 11
Data Lakes – Not a Perfect World
Physical Nature
▪ Based on Replication. Data lakes require data to be copied to their physical storage
▪ Replication extends development cycles and costs
▪ Not all data is suitable for replication
▪ Real-time needs: Cloud and SaaS APIs
▪ Large volumes: existing EDW
▪ Privacy laws and restrictions
Single Purpose
▪ Usage of the data lake is often monopolized by data scientists
▪ New data silo. No clear path to share insights with business users
▪ Lacks the governance, security and quality that business users are used to (e.g. in the EDW)
12. 12
How Denodo Complements the Logical Data Lake in the Cloud
Denodo Architecture for the Logical Data Lake
● Denodo does not replace data
warehouses, data lakes, ETL tools...
● Denodo enables using all of them
together, plus other data sources
○ In a logical data warehouse
○ In a logical data lake
○ They are very similar; the only
difference is the main objective
● There are also use cases where Denodo
can be used as a data source in an ETL flow
13. 13
Data science project characteristics
❑ The bulk of the work in data science projects involves integrating many disparate data
sets to create extremely wide data
❑ Data science requires as many data sets as possible to be integrated in such
a way that the business context aligns with the goals of the project
❑ Data-savvy business analysts know their business systems’ data and
SQL but are not programmers
14. 14
Data Lakes as a Data Scientists’ Playground
The early data scientists saw Hadoop
as their personal supercomputer.
Hadoop-based data lakes helped
democratize access to state-of-the-art
supercomputing with off-the-shelf HW
(and later the cloud)
The industry push for BI made
Hadoop-based solutions the standard
for bringing modern analytics to any
corporation
15. 15
The Key Ingredient for Data Science is…Data ☺
Data lakes have acted as a data scientists’ playground
Input data for a data science project may come from a
variety of systems and in a variety of formats. Some examples:
• Files (CSV, logs, Parquet)
• Relational databases (EDW, operational systems)
• NoSQL systems (key-value pairs, document stores,
time series, etc.)
• SaaS APIs (Salesforce, Marketo, ServiceNow,
Facebook, Twitter, etc.)
In addition, the Big Data community has embraced
data science as one of its pillars: for example, Spark
and SparkML, and architectural patterns like the data
lake
16. 16
Typical Data Science Workflow
80% of time – Finding and preparing the data
10% of time – Analysis
10% of time – Visualizing data
Reduce data prep time by 25% → increase data
analysis by 3X
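The arithmetic behind the "25% less prep → 3X more analysis" claim can be checked directly; this is a sketch assuming the freed preparation time is reallocated entirely to analysis:

```python
# Worked example of the 25% -> 3X claim: a sketch assuming the freed
# preparation time is reallocated entirely to analysis.
prep, analysis, viz = 0.80, 0.10, 0.10   # typical data science time split

freed = prep * 0.25                      # cutting prep by 25% frees 20% of total time
new_analysis = analysis + freed          # reallocate that time to analysis

print(round(new_analysis / analysis, 2))  # -> 3.0: analysis time triples
```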
17. 17
Where Does the Time Go?
A large amount of time and effort goes into tasks not intrinsically related to data
science:
• Finding where the right data may be
• Getting access to the data
• Bureaucracy
• Understanding access methods and technologies (NoSQL, REST APIs, etc.)
• Transforming data into a format that is easy to work with
• Combining data originally available in different sources and formats
• Profiling and cleansing data to eliminate incomplete or inconsistent data points
• Making this ‘data pipeline’ a repeatable, systematic process → operationalizing it
18. 18
Benefits of a Virtual Data Layer
▪ A Virtual Layer improves decision making and shortens development cycles
• Surfaces all company data from multiple repositories without the need to replicate all data
into a lake
• Eliminates data silos: allows for on-demand combination of data from multiple sources
▪ A Virtual Layer broadens usage of data
• Improves governance and metadata management to avoid “data swamps”
• Decouples data source technology. Access normalized via SQL or web services
• Allows controlled access to the data with fine-grained security controls
▪ A Virtual Layer offers performant access
• Leverages the processing power of the existing sources controlled by Denodo’s optimizer
• Processing of data for sources with no processing capabilities (e.g. files)
• Caching and ingestion engine to persist data when needed
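As a toy illustration of the pattern (not Denodo's API; all names here are hypothetical), a virtual layer can be modeled as views that combine sources on demand instead of replicating them:

```python
# Minimal sketch of a virtual data layer: sources stay where they are,
# and a "view" combines them only when queried (no replication).
crm = {"c1": {"name": "Acme"}, "c2": {"name": "Globex"}}   # e.g. a SaaS API
edw_sales = [("c1", 120), ("c1", 80), ("c2", 50)]          # e.g. rows in the EDW

def customer_sales_view():
    """Join CRM and EDW data on demand, returning normalized rows."""
    totals = {}
    for cust_id, amount in edw_sales:        # aggregate at query time
        totals[cust_id] = totals.get(cust_id, 0) + amount
    return [{"customer": crm[cid]["name"], "total": t}
            for cid, t in sorted(totals.items())]

print(customer_sales_view())
# -> [{'customer': 'Acme', 'total': 200}, {'customer': 'Globex', 'total': 50}]
```

Consumers see one normalized view regardless of where each source lives, which is the decoupling the bullets above describe.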
19. 19
Faster Data Science from data refreshes
Machine learning model training, supervised reinforcement, and
unsupervised techniques
▪ Materialize training data from a virtual table that stores its results in another
database for machine learning supervised training
▪ Access real-time data from a virtual table for the latest data to be used in machine
learning reinforcement training
▪ Cache data sets to alleviate performance bottlenecks
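The three refresh strategies above (materialize, real-time access, cache) can be sketched generically; the names here are hypothetical, not Denodo's API:

```python
import time

def materialize(view, store):
    """Persist a view's current results into another store (for supervised training)."""
    store.extend(view())
    return store

def cached(view, ttl_seconds=60):
    """Cache a view's results to alleviate performance bottlenecks."""
    state = {"at": 0.0, "rows": None}
    def query():
        if state["rows"] is None or time.time() - state["at"] > ttl_seconds:
            state["rows"] = view()          # refresh from the source
            state["at"] = time.time()
        return state["rows"]
    return query

source = [1, 2, 3]
live = lambda: list(source)          # real-time: always reads the source
training_set = materialize(live, [])  # frozen copy for model training
fast = cached(live)

source.append(4)
print(live())        # -> [1, 2, 3, 4]  the real-time view sees the change
print(training_set)  # -> [1, 2, 3]     the materialized snapshot does not
print(fast())        # -> [1, 2, 3, 4]  fetched on first call, then served from cache
```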
20. 20
A Data Catalog and Exploration Tool?
Reporting tools are great for visualizing data and
presenting it to business users.
But there is a gap between the reporting tool and the
data model underneath.
How can end users…
• … browse tables through tags and categories?
• … understand the lineage and definitions of the
fields?
• … search the catalog and its content?
• … validate that data is trustworthy?
22. 22
$1.5 TRILLION is the economic value of goods flowing through
our distribution centers each year, representing:
• 2.8% of GDP for the 19 countries where we do business
• 2.0% of the world’s GDP
Founded: 1983 | Global 100 most sustainable corporations | 768 MSF
$87B assets under management on four continents
1.0 million employees under Prologis’ roofs
Prologis: world’s leading industrial real estate company
23. 23
Step 1: Expose Data to Data Scientists
Prologis: Data Science Workflow
[Diagram: data scientists reach data services backed by the data virtualization layer (with cache), which federates the application database, the EDW, and the cloud data lake.]
24. 24
Step 2: Operationalization of Model Scoring
Prologis: Data Science Workflow
[Diagram: a web service (Python model scoring) on AWS Lambda queries the data virtualization layer (with cache), which federates the application database, the EDW, and the cloud data lake.]
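Operationalized scoring of this kind can be sketched as a Lambda-style handler; the model and field names below are hypothetical stand-ins, not the actual Prologis service:

```python
import json

# Stand-in for a trained model; in practice this would be loaded from
# persisted model artifacts at cold start.
def score(features):
    """Toy linear model: weighted sum of two (hypothetical) features."""
    return 0.7 * features["occupancy"] + 0.3 * features["rent_growth"]

def handler(event, context=None):
    """AWS Lambda-style entry point: parse the request body, return a score."""
    features = json.loads(event["body"])
    return {"statusCode": 200,
            "body": json.dumps({"score": score(features)})}

resp = handler({"body": json.dumps({"occupancy": 0.9, "rent_growth": 0.5})})
print(resp["statusCode"])  # -> 200
```

Behind such a handler, the virtualization layer supplies any additional features the model needs at request time.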
25. 25
Enterprise Data Services Layer @ Large Mutual Funds Company
• Problem getting consistent data – including key metrics
• Developers ‘hunting down and interpreting data themselves’
• Management decided that they needed consistent data irrespective of channels
• IT tasked with providing consistent data to all users
• Implemented Data Services Layer for all data access
• No direct access to data sources – everything is obtained through Data Virtualization
layer
• Internal reports, web sites, front office/back office apps, IVR system, etc.
27. Use Cases for Data Virtualization in Data Governance and Security
27
• Use Case 1: Single source of truth to avoid data inconsistencies, etc.
• Use Case 2: Unified security layer with centralized authorization management and auditing
• Use Case 3: Data catalog/marketplace
– Single source of truth at CIT (to comply with stringent Basel III risk management regulations)
29. 29
McCormick Spice (Cont’d)
[Diagram: a Data Services (Data Virtualization) layer sits between backend systems (System 1 … System n) and external, monetized APIs, wrapped by API management and runtime, semantics & discovery, governance, and security.]
30. 30
McCormick Spice (Cont’d)
Approach
1. Model requests Specific Modifications/Full Information
2. Model incrementally or fully trains
[Diagram: (1) the model requests data from the Enterprise Data Services layer; (2) the layer collects it from backend and external systems; (3) the model receives the data; (4) the model trains.]
Benefits
✓Timely Information
✓No replication
✓No need to validate information
✓Better staging for learning
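The incremental-vs-full training choice in the approach above can be sketched with a running-mean "model"; this is purely illustrative, as the deck does not specify McCormick's algorithms:

```python
class MeanModel:
    """Toy model whose single parameter is the mean of all observed values."""
    def __init__(self):
        self.n, self.total = 0, 0.0

    def train_full(self, data):
        """Full retrain: discard state and fit on the complete data set."""
        self.n, self.total = len(data), float(sum(data))

    def train_incremental(self, delta):
        """Incremental update: fold in only the newly collected records."""
        self.n += len(delta)
        self.total += sum(delta)

    @property
    def mean(self):
        return self.total / self.n

full = MeanModel()
full.train_full([10, 20, 30, 40])      # full information

inc = MeanModel()
inc.train_full([10, 20])               # initial fit
inc.train_incremental([30, 40])        # later delta from the data services layer

print(full.mean, inc.mean)  # -> 25.0 25.0 : both paths agree
```

The incremental path only pulls the delta from the data services layer, which is what makes the "timely information, no replication" benefits possible.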
31. 31
Key Takeaways
▪ A Virtual Data Lake improves decision making and
shortens development cycles
▪ Surfaces all company data without the need to replicate
▪ Eliminates data silos: allows for on-demand data access
▪ A Virtual Data Lake broadens adoption of the lake and
improves its ROI
▪ Improves governance and metadata management (avoid
“data swamps”)
▪ Faster ML model building; allows controlled access
▪ A Virtual Data Lake offers performance for the Big Data
world
33. 33
Try it yourself
Access Denodo Platform in the Cloud!
Take the Data Science Test Drive today!
www.denodo.com/TestDrive
GET STARTED TODAY
34. 34
More Resources
▪ “Rethinking the data lake” blog series
▪ http://www.datavirtualizationblog.com/rethinking-data-lake-data-virtualization/
▪ Performance
▪ Optimization and performance are always a key ingredient when dealing with large data
volumes
▪ Denodo offers the most robust and mature data virtualization engine in the market
▪ Cost-based optimization
▪ Rule-based optimization tailored for federation scenarios
▪ Integrated use of external MPP engines like Spark, Impala, etc.
▪ Designed to perform in big data scenarios with billion-row tables
37. 37
Query Optimizer
SELECT c.id, SUM(s.amount) as total
FROM customer c JOIN sales s
ON c.id = s.customer_id
GROUP BY c.id
How Denodo works compared with reporting tool federation engines
System | Execution Time | Data Transferred | Optimization Technique
Denodo | 9 sec. | 4 M rows | Aggregation push-down
Leading Reporting Tool | 125 sec. | 292 M rows | None: full scan
[Diagram: without push-down, the reporting tool transfers the full 290 M-row Sales table plus the 2 M-row Customer table, joins, then groups by; with push-down, Denodo executes the group by at the Sales source first (290 M → 2 M rows), then joins with Customer, transferring only 4 M rows.]
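A minimal simulation of aggregation push-down (illustrative Python, not Denodo's engine) shows why the rewritten plan moves far fewer rows while producing identical results:

```python
from collections import defaultdict

sales = [("c1", 10), ("c1", 5), ("c2", 7), ("c2", 1), ("c3", 4)]  # fact table
customers = {"c1", "c2", "c3"}                                     # dimension table

# Naive federation: transfer every sales row, join, then aggregate.
naive_transferred = len(sales) + len(customers)
naive = defaultdict(int)
for cust, amt in sales:
    if cust in customers:        # join first
        naive[cust] += amt       # group by after the join

# Push-down: aggregate at the source, transfer one row per customer.
pushed = defaultdict(int)
for cust, amt in sales:
    pushed[cust] += amt          # group by executed "at the source"
pushed = {c: t for c, t in pushed.items() if c in customers}  # then join
pushdown_transferred = len(pushed) + len(customers)

assert dict(naive) == pushed                    # identical query results
print(naive_transferred, pushdown_transferred)  # -> 8 6: fewer rows moved
```

At the slide's scale the same rewrite shrinks the transfer from 292 M rows to 4 M, which is where the 9-second execution time comes from.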
38. 38
Denodo ‘Solution’ Categories
Customer Centricity / MDM
✓ Complete View of Customer
Data Services
✓ Data as a Service
✓ Data Marketplace
✓ Data Services
✓ Application and Data Migration
Cloud Solutions
✓ Cloud Modernization
✓ Cloud Analytics
✓ Hybrid Data Fabric
Data Governance
✓ GRC
✓ GDPR
✓ Data Privacy / Masking
BI and Analytics
✓ Self-Service Analytics
✓ Logical Data Warehouse
✓ Enterprise Data Fabric
Big Data
✓ Logical Data Lake
✓ Data Warehouse Offloading
✓ IoT Analytics