info@rittmanmead.com www.rittmanmead.com @rittmanmead
SQL and Data Integration Futures on Hadoop :
A look into the future of analytics on Oracle BDA
Mark Rittman, CTO, Rittman Mead
Enkitec E4, Barcelona, April 2016
•Mark Rittman, Co-Founder of Rittman Mead

‣Oracle ACE Director, specialising in Oracle BI&DW

‣14 Years Experience with Oracle Technology

‣Regular columnist for Oracle Magazine

•Author of two Oracle Press Oracle BI books

‣Oracle Business Intelligence Developers Guide

‣Oracle Exalytics Revealed

‣Writer for Rittman Mead Blog :

http://www.rittmanmead.com/blog

•Email : mark.rittman@rittmanmead.com

•Twitter : @markrittman
About the Speaker
•Started back in 1997 on an Oracle DW project for a bank

•Our tools were Oracle 7.3.4, SQL*Plus, PL/SQL 

and shell scripts

•Went on to use Oracle Developer/2000 and Designer/2000

•Our initial users queried the DW using SQL*Plus

•And later on, we rolled-out Discoverer/2000 to everyone else

•And life was fun…
15+ Years in Oracle BI and Data Warehousing
•Over time, this data warehouse architecture developed

•Added Oracle Warehouse Builder to 

automate and model the DW build

•Oracle 9i Application Server (yay!) 

to deliver reports and web portals

•Data Mining and OLAP in the database

•Oracle 9i for in-database ETL (and RAC)

•Data was typically loaded from 

Oracle RDBMS and EBS

•It was Oracle all the way down…
The Oracle-Centric DW Architecture
And then came "Big Data"...
•Many customers and organisations are now running initiatives around “big data”

•Some are IT-led and are looking for cost-savings around data warehouse storage + ETL

•Others are “skunkworks” projects in the marketing department that are now scaling-up

•Projects now emerging from pilot exercises

•And design patterns starting to emerge
Many Organisations are Running Big Data Initiatives
Highly Scalable (and Affordable) Cluster Computing
•Enterprise High-End RDBMSs such as Oracle can scale into the petabytes, using clustering

‣Sharded databases (e.g. Netezza) can scale further but with complexity / single workload trade-offs

•Hadoop was designed from the outset for massive horizontal scalability - using cheap hardware

•Anticipates hardware failure and makes multiple copies of data as protection

•The more nodes you add, the more stable it becomes

•And at a fraction of the cost of traditional

RDBMS platforms
•Store and analyze huge volumes of structured and unstructured data 

•In the past, we had to throw away the detail

•No need to define a data model during ingest

•Supports multiple, flexible schemas

•Separation of storage from compute engine

•Allows multiple query engines and frameworks

to work on the same raw datasets
Store Everything Forever - And Process in Many Ways
[Diagram: Hadoop Data Lake — raw data (webserver log files (txt), social media logs (JSON), DB archives (CSV), sensor data (XML), spatial & graph (XML, txt), IoT logs (JSON, txt), chat transcripts (txt), DB transactions (CSV, XML), blogs/articles (TXT, HTML)) alongside processed data (NoSQL key-value store DB, tabular data (Hive tables), aggregates (Impala tables), NoSQL document store DB)]
•Typical implementation of Hadoop and big data in an analytic context is the “data lake”

•Additional data storage platform with cheap storage, flexible schema support + compute

•Data lands in the data lake or reservoir in raw form, then minimally processed

•Data then accessed directly by “data scientists”, or processed further into DW
In the Context of BI & Analytics : The Data Reservoir
[Diagram: Data Reservoir architecture — file-based, stream-based and ETL-based integration feed a Hadoop-platform Data Reservoir holding Raw Customer Data (stored in the original format, usually files, such as SS7, ASN.1, JSON) and Mapped Customer Data (data sets produced by mapping and transforming raw data); a Data Factory links the reservoir to Discovery & Development Labs (a safe & secure discovery and development environment, exchanging data sets/samples and models/programs); data access consumers are Business Intelligence tools and Marketing/Sales applications (models, machine learning, segments); sources include Operational Data (transactions, customer master data) and Unstructured Data (voice + chat transcripts)]
•Oracle Engineered system for big data processing and analysis

•Start with Oracle Big Data Appliance Starter Rack - expand up to 18 nodes per rack

•Cluster racks together for horizontal scale-out using enterprise-quality infrastructure
Oracle	Big	Data	Appliance	
Starter	Rack	+	Expansion	
• Cloudera	CDH	+	Oracle	software		
• 18	High-spec	Hadoop	Nodes	with	
InfiniBand	switches	for	internal	Hadoop	
traffic,	optimised	for	network	throughput	
• 1	Cisco	Management	Switch	
• Single	place	for	support	for	H/W	+	S/W

Deployed on Oracle Big Data Appliance
[Diagram: BDA nodes linked by InfiniBand support modeling and scoring to build an enriched customer profile]
•Programming model for processing large data sets in parallel on a cluster

•Not specific to a particular language, but usually written in Java

•Inspired by the map and reduce functions commonly used in functional programming

‣Map() performs filtering and sorting

‣Reduce() aggregates the output of mappers

‣and a Shuffle() step to redistribute output by keys

•Resolved several complications of distributed computing:

‣Allows unlimited computations on unlimited data

‣Map and reduce functions can be easily distributed

‣Originated at Google; Hadoop was Yahoo’s open-source implementation of MapReduce, and the two became synonymous
MapReduce - The Original Big Data Query Framework
[Diagram: three Mappers (filter, project) feed two Reducers (aggregate); output is one HDFS file per reducer, in a directory]
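The Map()/Shuffle()/Reduce() split above can be sketched in miniature. A single-process Python stand-in (all names and data here are illustrative; a real job runs distributed across a cluster) counts words the MapReduce way:

```python
from collections import defaultdict

def map_phase(document):
    # Map(): filter/project each input record into (key, value) pairs
    return [(word.lower(), 1) for word in document.split()]

def shuffle_phase(mapped):
    # Shuffle(): redistribute mapper output so all values for a key
    # arrive at the same reducer
    groups = defaultdict(list)
    for key, value in mapped:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce(): aggregate the output of the mappers, key by key
    return {key: sum(values) for key, values in groups.items()}

documents = ["big data on Hadoop", "SQL on Hadoop"]
mapped = [pair for doc in documents for pair in map_phase(doc)]
counts = reduce_phase(shuffle_phase(mapped))
print(counts["hadoop"])  # 2
```

Because map and reduce are pure functions over (key, value) pairs, the framework can run any number of mappers and reducers in parallel without coordination beyond the shuffle.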
•Originally developed at Facebook, now foundational within the Hadoop project

•Allows users to query Hadoop data using SQL-like language

•Tabular metadata layer that overlays files, can interpret semi-structured data (e.g. JSON)

•Generates MapReduce code to return required data

•Extensible through SerDes and Storage Handlers

•JDBC and ODBC drivers for most platforms/tools

•Perfect for set-based access + batch ETL work
Apache Hive : SQL Metadata + Engine over Hadoop
•Data integration tools such as Oracle Data Integrator can load and process Hadoop data

•BI tools such as Oracle Business Intelligence 12c can report on Hadoop data

•Generally use MapReduce and Hive to access data

‣ODBC and JDBC access to Hive tabular data

‣Allows Hadoop unstructured/semi-structured

data on HDFS to be accessed like RDBMS
Hive Provides a SQL Interface for BI + ETL Tools
Access Hive directly, or extract using ODI12c, for structured OBIEE dashboard analysis
What pages are people visiting?
Who is referring to us on Twitter?
What content has the most reach?
MapReduce is for losers
Hive is slow… very slow… too slow for ad-hoc querying
But the future is fast
And pretty damn exciting
Hadoop 2.0 Processing Frameworks + Tools
•MapReduce’s great innovation was to break processing down into distributed jobs

•Jobs that have no functional dependency on each other, only upstream tasks

•Provides a framework that is infinitely scalable and very fault tolerant

•Hadoop handled job scheduling and resource management

‣All MapReduce code had to do was provide the “map” and “reduce” functions

‣Automatic distributed processing

‣Slow but extremely powerful
Hadoop 1.0 and MapReduce
•A typical Hive or Pig script compiles down into multiple MapReduce jobs

•Each job stages its intermediate results to disk

•Safe, but slow - write to disk, spin-up separate JVMs for each job
MapReduce - Scales By Writing Intermediate Results to Disk
SELECT
LOWER(hashtags.text),
COUNT(*) AS total_count
FROM (
SELECT * FROM tweets WHERE regexp_extract(created_at,"(2015)*",1) = "2015"
) tweets
LATERAL VIEW EXPLODE(entities.hashtags) t1 AS hashtags
GROUP BY LOWER(hashtags.text)
ORDER BY total_count DESC
LIMIT 15
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 5.34 sec HDFS Read: 10952994 HDFS Write: 5239 SUCCESS
Stage-Stage-2: Map: 1 Reduce: 1 Cumulative CPU: 2.1 sec HDFS Read: 9983 HDFS Write: 164 SUCCESS
Total MapReduce CPU Time Spent: 7 seconds 440 msec
OK
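In the same spirit, the two chained jobs the query above compiles into can be mimicked in a single-process sketch (made-up tweet records; a temp file stands in for the HDFS staging that makes multi-job Hive scripts slow):

```python
import json, os, tempfile
from collections import Counter

# Hypothetical records standing in for the Hive `tweets` table
tweets = [
    {"created_at": "2015-03-01", "entities": {"hashtags": ["BigData", "Hive"]}},
    {"created_at": "2015-03-02", "entities": {"hashtags": ["BigData"]}},
    {"created_at": "2014-12-31", "entities": {"hashtags": ["Oracle"]}},
]

# Job 1: filter to 2015, explode hashtags, count per tag...
stage1 = Counter(tag.lower()
                 for t in tweets if t["created_at"].startswith("2015")
                 for tag in t["entities"]["hashtags"])

# ...then stage the intermediate result to disk, as MapReduce does to HDFS
intermediate = os.path.join(tempfile.mkdtemp(), "stage1.json")
with open(intermediate, "w") as f:
    json.dump(stage1, f)

# Job 2: a separate job (new JVMs in real MapReduce) reads the staged
# file back from disk and orders the counts
with open(intermediate) as f:
    counts = json.load(f)
top = sorted(counts.items(), key=lambda kv: -kv[1])[:15]
print(top[0])
```

The disk round-trip between the two "jobs" is exactly the safety-vs-speed trade-off the slide describes.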
•MapReduce 2 (MR2) splits the functionality of the JobTracker

by separating resource management and job scheduling/monitoring

•Introduces YARN (Yet Another Resource Negotiator)

•Permits processing frameworks other than MR to run on the cluster

‣For example, Apache Spark

•Maintains backwards compatibility with MR1

•Introduced with CDH5+
MapReduce 2 and YARN
[Diagram: YARN architecture — clients submit applications to a Resource Manager, which allocates work to Node Managers running on each cluster node]
•Runs on top of YARN, provides a faster execution engine than MapReduce for Hive, Pig etc

•Models processing as an entire data flow graph (DAG), rather than separate job steps

‣DAG (Directed Acyclic Graph) is a new programming style for distributed systems

‣Dataflow steps pass data between them as streams, rather than writing/reading from disk

•Supports in-memory computation, enables Hive on Tez (Stinger) and Pig on Tez

•The in-memory / Hive v2 route favoured by Hortonworks
Apache Tez
[Diagram: a Tez DAG — input data flows through Map() and Reduce() steps as one continuous dataflow graph, rather than as separate MapReduce jobs]
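The pipelined-dataflow idea can be sketched with Python generators (illustrative only; real Tez runs distributed on YARN). Records stream between the DAG steps in memory instead of being staged to disk between jobs:

```python
from collections import Counter

# Tez-style dataflow: steps are wired into one DAG and pass records
# between them as streams (generators), with no intermediate files
# written between "jobs".
def input_data():
    yield from ["a", "b", "a", "c", "a", "b"]

def map_step(records):
    for record in records:
        yield (record, 1)

def reduce_step(pairs):
    counts = Counter()
    for key, value in pairs:
        counts[key] += value
    yield from counts.items()

# input -> Map() -> Reduce(), evaluated as one pipelined flow
result = dict(reduce_step(map_step(input_data())))
print(result["a"])  # 3
```

Contrast this with the MapReduce sketch earlier, where each stage wrote its full output to disk before the next could start.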
Tez Advantage - Drop-In Replacement for MR with Hive, Pig
set hive.execution.engine=mr  → 4m 17s
set hive.execution.engine=tez → 2m 25s
•Cloudera’s answer to Hive query response time issues

•MPP SQL query engine running on Hadoop, bypasses MapReduce for
direct data access

•Mostly in-memory, but spills to disk if required

•Uses Hive metastore to access Hive table metadata

•Similar SQL dialect to Hive, though not as rich, and with no support for Hive SerDes, storage handlers etc
Cloudera Impala - Fast, MPP-style Access to Hadoop Data
•A replacement for Hive, but uses Hive concepts and

data dictionary (metastore)

•MPP (Massively Parallel Processing) query engine

that runs within Hadoop

‣Uses same file formats, security,

resource management as Hadoop

•Processes queries in-memory

•Accesses standard HDFS file data

•Option to use Apache AVRO, RCFile,

LZO or Parquet (column-store)

•Designed for interactive, real-time

SQL-like access to Hadoop
How Impala Works
[Diagram: OBIEE BI Server and Presentation Server connect through the Cloudera Impala ODBC driver to Impala daemons running alongside HDFS on each node of a multi-node Hadoop cluster]
•Log into Impala Shell, run INVALIDATE METADATA command to refresh Impala table list

•Run SHOW TABLES Impala SQL command to view tables available

•Run COUNT(*) on main ACCESS_PER_POST table to see typical response time
Enabling Hive Tables for Impala
[oracle@bigdatalite ~]$ impala-shell
Starting Impala Shell without Kerberos authentication
[bigdatalite.localdomain:21000] > invalidate metadata;
Query: invalidate metadata
Fetched 0 row(s) in 2.18s
[bigdatalite.localdomain:21000] > show tables;
Query: show tables
+-----------------------------------+
| name |
+-----------------------------------+
| access_per_post |
| access_per_post_cat_author |
| … |
| posts |
+-----------------------------------+
Fetched 45 row(s) in 0.15s
[bigdatalite.localdomain:21000] > select count(*) 

from access_per_post;
Query: select count(*) from access_per_post
+----------+
| count(*) |
+----------+
| 343 |
+----------+
Fetched 1 row(s) in 2.76s
•Significant improvement over Hive response time

•Now makes Hadoop suitable for ad-hoc querying
Significantly-Improved Ad-Hoc Query Response Time vs Hive
Simple two-table join against Impala data only:
Logical Query Summary Stats: Elapsed time 2, Response time 1, Compilation time 0 (seconds)

Simple two-table join against Hive data only:
Logical Query Summary Stats: Elapsed time 50, Response time 49, Compilation time 0 (seconds)
•Beginners usually store data in HDFS using text file formats (CSV) but these have limitations

•Apache AVRO often used for general-purpose processing

‣Splitability, schema evolution, in-built metadata, support for block compression

•Parquet now commonly used with Impala due to column-orientated storage

‣Mirrors work in RDBMS world around column-store

‣Only return (project) the columns you require across a wide table
Apache Parquet - Column-Orientated Storage for Analytics
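The column-projection benefit can be illustrated with a toy Python model (hypothetical table; real Parquet adds encoding, compression and row groups on top of the columnar layout):

```python
# Toy contrast between row-orientated and column-orientated layout.
# Projecting two columns from a wide table: the row layout must walk
# every full record, while the columnar layout (as in Parquet) reads
# only the storage for the projected columns.
rows = [{"id": i, "name": f"n{i}", "payload": "x" * 100} for i in range(1000)]

# Row store: each record is read in full just to pick out two fields
projected_from_rows = [(r["id"], r["name"]) for r in rows]

# Column store: the same table held as one list per column
columns = {key: [r[key] for r in rows] for key in ("id", "name", "payload")}
# the wide `payload` column is never touched during the projection
projected_from_columns = list(zip(columns["id"], columns["name"]))

assert projected_from_rows == projected_from_columns
print(projected_from_columns[0])  # (0, 'n0')
```

The same answer comes back either way; the columnar layout just avoids scanning the bytes of columns the query never asked for.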
•But Parquet (and HDFS) have a significant limitation for real-time analytics applications

‣Append-only orientation and a column-store focus make streaming ingestion harder

•Cloudera Kudu aims to combine the best of HDFS + HBase

‣Real-time analytics-optimised 

‣Supports updates to data

‣Fast ingestion of data

‣Accessed using SQL-style tables

and get/put/update/delete API
Cloudera Kudu - Combining Best of HBase and Column-Store
•Part of Oracle Big Data 4.0 (BDA-only)

‣Also requires Oracle Database 12c, Oracle Exadata Database Machine

•Extends Oracle Data Dictionary to cover Hive

•Extends Oracle SQL and SmartScan to Hadoop

•Extends Oracle Security Model over Hadoop

‣Fine-grained access control

‣Data redaction, data masking

‣Uses fast C-based readers where possible

(vs. Hive MapReduce generation)

‣Map Hadoop parallelism to Oracle PQ

‣Big Data SQL engine works on top of YARN

‣Like Spark, Tez, MR2
Oracle Big Data SQL
[Diagram: SQL queries issued from an Exadata database server; Oracle Big Data SQL applies SmartScan on both the Exadata storage servers and the Hadoop cluster]
hold on…
“where we’re going we don’t need roads”
•Apache Drill is another SQL-on-Hadoop project, one that focuses on schema-free data discovery

•Inspired by Google Dremel, innovation is querying raw data with schema optional

•Automatically infers and detects schema from semi-structured datasets and NoSQL DBs

•Join across different silos of data e.g. JSON records, Hive tables and HBase database

•Aimed at different use-cases than Hive - 

low-latency queries, discovery 

(think Endeca vs OBIEE)
Introducing Apache Drill - “We Don’t Need No Roads”
•Most modern datasource formats embed their schema in the data (“schema-on-read”)

•Apache Drill makes these as easy to join to traditional datasets as “point me at the data”

•Cuts out unnecessary work in defining Hive schemas for data that’s self-describing

•Supports joining across files,

databases, NoSQL etc
Self-Describing Data - Parquet, AVRO, JSON etc
•Files can exist either on the local filesystem, or on HDFS

•Connection to directory or file defined in storage configuration

•Can work with CSV, TXT, TSV etc

•First row of file can provide schema (column names)
Apache Drill and Text Files
SELECT * FROM dfs.`/tmp/csv_with_header.csv2`;
+-------+------+------+------+
| name | num1 | num2 | num3 |
+-------+------+------+------+
| hello | 1 | 2 | 3 |
| hello | 1 | 2 | 3 |
| hello | 1 | 2 | 3 |
| hello | 1 | 2 | 3 |
| hello | 1 | 2 | 3 |
| hello | 1 | 2 | 3 |
| hello | 1 | 2 | 3 |
+-------+------+------+------+
7 rows selected (0.12 seconds)
SELECT * FROM dfs.`/tmp/csv_no_header.csv`;
+------------------------+
| columns |
+------------------------+
| ["hello","1","2","3"] |
| ["hello","1","2","3"] |
| ["hello","1","2","3"] |
| ["hello","1","2","3"] |
| ["hello","1","2","3"] |
| ["hello","1","2","3"] |
| ["hello","1","2","3"] |
+------------------------+
7 rows selected (0.112 seconds)
•JSON (JavaScript Object Notation) documents are often used for data interchange

•Exports from Twitter and other consumer services

•Web service responses and other B2B interfaces

•A more lightweight form of XML that is “self-describing”

•Handles evolving schemas, and optional attributes

•Drill treats each document as a row, and has features to:

•Flatten nested data (extract elements from arrays)

•Generate key/value pairs for loosely structured data
Apache Drill and JSON Documents
use dfs.iot;
show files;
select in_reply_to_user_id, text from `all_tweets.json`
limit 5;
+---------------------+------+
| in_reply_to_user_id | text |
+---------------------+------+
| null | BI Forum 2013 in Brighton has now sold-out |
| null | "Football has become a numbers game |
| null | Just bought Lyndsay Wise’s Book |
| null | An Oracle BI "Blast from the Past" |
| 14716125 | Dilbert on Agile Programming |
+---------------------+------+
5 rows selected (0.229 seconds)
select name, flatten(fillings) as f 

from dfs.users.`/donuts.json` 

where f.cal < 300;
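Drill’s FLATTEN behaviour, as used in the donuts query above, can be approximated in plain Python (made-up documents mirroring the example; `flatten` here is an illustrative helper, not Drill code):

```python
# Drill's FLATTEN turns each element of a nested array into its own row.
docs = [
    {"name": "glazed", "fillings": [{"flavor": "none", "cal": 270},
                                    {"flavor": "jam",  "cal": 310}]},
    {"name": "cake",   "fillings": [{"flavor": "choc", "cal": 290}]},
]

def flatten(documents, array_field):
    # One output row per array element, carrying the parent fields along
    for doc in documents:
        for element in doc[array_field]:
            row = {k: v for k, v in doc.items() if k != array_field}
            row["f"] = element
            yield row

# Equivalent of: select name, flatten(fillings) as f ... where f.cal < 300
result = [(r["name"], r["f"]["flavor"])
          for r in flatten(docs, "fillings") if r["f"]["cal"] < 300]
print(result)  # [('glazed', 'none'), ('cake', 'choc')]
```

Note how the WHERE-style filter applies per flattened row, not per original document — one donut can contribute some fillings and not others.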
•Drill can connect to Hive to make use of metastore (incl. multiple Hive metastores)

•NoSQL databases (HBase etc)

•Parquet files (native storage format - columnar + self describing)
Apache Drill and Hive, HBase, Parquet Sources etc
USE hbase;
SELECT * FROM students;
+-------------+---------------------+------------------------------------------------------+
| row_key     | account             | address                                              |
+-------------+---------------------+------------------------------------------------------+
| [B@e6d9eb7  | {"name":"QWxpY2U="} | {"state":"Q0E=","street":"MTIzIEJhbGxtZXIgQXY="}     |
| [B@2823a2b4 | {"name":"Qm9i"}     | {"state":"Q0E=","street":"MSBJbmZpbml0ZSBMb29w"}     |
| [B@3b8eec02 | {"name":"RnJhbms="} | {"state":"Q0E=","street":"NDM1IFdhbGtlciBDdA=="}     |
| [B@242895da | {"name":"TWFyeQ=="} | {"state":"Q0E=","street":"NTYgU291dGhlcm4gUGt3eQ=="} |
+-------------+---------------------+------------------------------------------------------+
SELECT firstname,lastname FROM hiveremote.`customers` limit 10;

+------------+------------+
| firstname | lastname |
+------------+------------+
| Essie | Vaill |
| Cruz | Roudabush |
| Billie | Tinnes |
| Zackary | Mockus |
| Rosemarie | Fifield |
| Bernard | Laboy |
| Marianne | Earman |
+------------+------------+
SELECT * FROM dfs.`iot_demo/geodata/region.parquet`;
+--------------+--------------+-----------------------+
| R_REGIONKEY | R_NAME | R_COMMENT |
+--------------+--------------+-----------------------+
| 0 | AFRICA | lar deposits. blithe |
| 1 | AMERICA | hs use ironic, even |
| 2 | ASIA | ges. thinly even pin |
| 3 | EUROPE | ly final courts cajo |
| 4 | MIDDLE EAST | uickly special accou |
+--------------+--------------+-----------------------+
•Drill developed for real-time, ad-hoc data exploration with schema discovery on-the-fly
•Individual analysts exploring new datasets, leveraging corporate metadata/data to help

•Hive is more about large-scale, centrally curated set-based big data access

•Drill models data conceptually as JSON, vs. Hive’s tabular approach

•Drill introspects schema from whatever it connects to, vs. formal modeling in Hive
Apache Drill vs. Apache Hive
Apache Drill → interactive queries, 100ms - 3mins (data discovery, Tableau/VA)
Apache Hive → reporting queries, 3mins - 20mins (canned reports, OBIEE), and ETL & batch queries, 20mins - hours (ODI, scripting, Informatica)
but…
what’s all this about “Spark”?
•Another DAG execution engine running on YARN

•More mature than Tez, with a richer API and more vendor support

•Uses concept of an RDD (Resilient Distributed Dataset)

‣RDDs are like tables or Pig relations, but can be cached in-memory

‣Great for in-memory transformations, or iterative/cyclic processes

•Spark jobs comprise a DAG of tasks operating on RDDs

•Access through Scala, Python or Java APIs

•Related projects include

‣Spark SQL

‣Spark Streaming
Apache Spark
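The lazy-transformation / cached-RDD model can be sketched with a toy class (illustrative only — `ToyRDD` is not the Spark API; real RDDs are partitioned across a cluster):

```python
# A toy RDD: transformations (filter, map) are recorded lazily and only
# run when an action (count) is called; cache() memoises the data so
# repeated actions over the same lineage don't recompute it.
class ToyRDD:
    def __init__(self, compute):
        self._compute = compute      # how to (re)build this dataset
        self._cache = None

    def filter(self, pred):         # transformation: returns a new RDD
        return ToyRDD(lambda: [x for x in self._materialise() if pred(x)])

    def map(self, fn):              # transformation: returns a new RDD
        return ToyRDD(lambda: [fn(x) for x in self._materialise()])

    def cache(self):                # keep the materialised data in memory
        self._cache = self._materialise()
        return self

    def _materialise(self):
        return self._cache if self._cache is not None else self._compute()

    def count(self):                # action: triggers execution
        return len(self._materialise())

lines = ToyRDD(lambda: ["GET /biapps11g/", "GET /", "GET /biapps11g/x"]).cache()
print(lines.filter(lambda l: "/biapps11g/" in l).count())  # 2
```

This mirrors the Scala session on the next slide: `textFile(...).cache` builds a cached RDD, and each `filter(...).count()` reuses the in-memory data rather than re-reading the file.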
•Native support for multiple languages 

with identical APIs

‣Python - prototyping, data wrangling

‣Scala - functional programming features

‣Java - lower-level, application integration

•Use of closures, iterations, and other 

common language constructs to minimize code

•Integrated support for distributed +

functional programming

•Unified API for batch and streaming
Rich Developer Support + Wide Developer Ecosystem
scala> val logfile = sc.textFile("logs/access_log")
14/05/12 21:18:59 INFO MemoryStore: ensureFreeSpace(77353) 

called with curMem=234759, maxMem=309225062
14/05/12 21:18:59 INFO MemoryStore: Block broadcast_2 

stored as values to memory (estimated size 75.5 KB, free 294.6 MB)
logfile: org.apache.spark.rdd.RDD[String] = 

MappedRDD[31] at textFile at <console>:15
scala> logfile.count()
14/05/12 21:19:06 INFO FileInputFormat: Total input paths to process : 1
14/05/12 21:19:06 INFO SparkContext: Starting job: count at <console>:1
...
14/05/12 21:19:06 INFO SparkContext: Job finished: 

count at <console>:18, took 0.192536694 s
res7: Long = 154563
scala> val logfile = sc.textFile("logs/access_log").cache
scala> val biapps11g = logfile.filter(line => line.contains("/biapps11g/"))
biapps11g: org.apache.spark.rdd.RDD[String] = FilteredRDD[34] at filter at <console>:17
scala> biapps11g.count()
...
14/05/12 21:28:28 INFO SparkContext: Job finished: count at <console>:20, took 0.387960876 s
res9: Long = 403
•Spark SQL, and Data Frames, allow RDDs in Spark to be processed using SQL queries

•Bring in and federate additional data from JDBC sources

•Load, read and save data in Hive, Parquet and other structured tabular formats
Spark SQL - Adding SQL Processing to Apache Spark
val accessLogsFilteredDF = accessLogs
  .filter( r => ! r.agent.matches(".*(spider|robot|bot|slurp).*"))
  .filter( r => ! r.endpoint.matches(".*(wp-content|wp-admin).*")).toDF()
  .registerTempTable("accessLogsFiltered")

val topTenPostsLast24Hour = sqlContext.sql("SELECT p.POST_TITLE, p.POST_AUTHOR, COUNT(*) as total
  FROM accessLogsFiltered a
  JOIN posts p ON a.endpoint = p.POST_SLUG
  GROUP BY p.POST_TITLE, p.POST_AUTHOR
  ORDER BY total DESC LIMIT 10 ")

// Persist top ten table for this window to HDFS as parquet file
topTenPostsLast24Hour.save("/user/oracle/rm_logs_batch_output/topTenPostsLast24Hour.parquet",
  "parquet", SaveMode.Overwrite)
Choosing the Appropriate SQL-on-Hadoop Engine
Large-Scale Batch Fast and Ad-hoc?
or…
Choosing the Appropriate SQL-on-Hadoop Engine
Centralised &

Ordered Metadata
Crazy, Complex and

Changing Schema?
or…
Choosing the Appropriate SQL-on-Hadoop Engine
Integrating with

Existing Corporate DW
Looking to Escape

Legacy DWs to Hadoop?
or…
Coming soon (*)
(*) possibly
Check out these developer VMs:
http://www.cloudera.com/documentation/enterprise/5-3-x/topics/cloudera_quickstart_vm.html
http://hortonworks.com/products/sandbox/
http://www.oracle.com/technetwork/database/bigdata-appliance/oracle-bigdatalite-2104726.html
https://www.mapr.com/products/mapr-sandbox-hadoop/download-sandbox-drill
http://www.rittmanmead.com
http://www.rittmanmead.com
 
Microsoft's Big Play for Big Data- Visual Studio Live! NY 2012
Microsoft's Big Play for Big Data- Visual Studio Live! NY 2012Microsoft's Big Play for Big Data- Visual Studio Live! NY 2012
Microsoft's Big Play for Big Data- Visual Studio Live! NY 2012
 
Microsoft's Big Play for Big Data
Microsoft's Big Play for Big DataMicrosoft's Big Play for Big Data
Microsoft's Big Play for Big Data
 
Big Data Developers Moscow Meetup 1 - sql on hadoop
Big Data Developers Moscow Meetup 1  - sql on hadoopBig Data Developers Moscow Meetup 1  - sql on hadoop
Big Data Developers Moscow Meetup 1 - sql on hadoop
 
Advanced Analytics and Big Data (August 2014)
Advanced Analytics and Big Data (August 2014)Advanced Analytics and Big Data (August 2014)
Advanced Analytics and Big Data (August 2014)
 
From lots of reports (with some data Analysis) 
to Massive Data Analysis (Wit...
From lots of reports (with some data Analysis) 
to Massive Data Analysis (Wit...From lots of reports (with some data Analysis) 
to Massive Data Analysis (Wit...
From lots of reports (with some data Analysis) 
to Massive Data Analysis (Wit...
 
Map reducecloudtech
Map reducecloudtechMap reducecloudtech
Map reducecloudtech
 
Big Data Processing
Big Data ProcessingBig Data Processing
Big Data Processing
 
Big Data and NoSQL for Database and BI Pros
Big Data and NoSQL for Database and BI ProsBig Data and NoSQL for Database and BI Pros
Big Data and NoSQL for Database and BI Pros
 
Apache Spark Fundamentals
Apache Spark FundamentalsApache Spark Fundamentals
Apache Spark Fundamentals
 
Technologies for Data Analytics Platform
Technologies for Data Analytics PlatformTechnologies for Data Analytics Platform
Technologies for Data Analytics Platform
 
IlOUG Tech Days 2016 - Unlock the Value in your Data Reservoir using Oracle B...
IlOUG Tech Days 2016 - Unlock the Value in your Data Reservoir using Oracle B...IlOUG Tech Days 2016 - Unlock the Value in your Data Reservoir using Oracle B...
IlOUG Tech Days 2016 - Unlock the Value in your Data Reservoir using Oracle B...
 
The Future of Analytics, Data Integration and BI on Big Data Platforms
The Future of Analytics, Data Integration and BI on Big Data PlatformsThe Future of Analytics, Data Integration and BI on Big Data Platforms
The Future of Analytics, Data Integration and BI on Big Data Platforms
 
Philly Code Camp 2013 Mark Kromer Big Data with SQL Server
Philly Code Camp 2013 Mark Kromer Big Data with SQL ServerPhilly Code Camp 2013 Mark Kromer Big Data with SQL Server
Philly Code Camp 2013 Mark Kromer Big Data with SQL Server
 
Architecting Your First Big Data Implementation
Architecting Your First Big Data ImplementationArchitecting Your First Big Data Implementation
Architecting Your First Big Data Implementation
 
Big Data Strategy for the Relational World
Big Data Strategy for the Relational World Big Data Strategy for the Relational World
Big Data Strategy for the Relational World
 
Big Data training
Big Data trainingBig Data training
Big Data training
 
Big Data visualization with Apache Spark and Zeppelin
Big Data visualization with Apache Spark and ZeppelinBig Data visualization with Apache Spark and Zeppelin
Big Data visualization with Apache Spark and Zeppelin
 

Plus de Mark Rittman

Plus de Mark Rittman (12)

Adding a Data Reservoir to your Oracle Data Warehouse for Customer 360-Degree...
Adding a Data Reservoir to your Oracle Data Warehouse for Customer 360-Degree...Adding a Data Reservoir to your Oracle Data Warehouse for Customer 360-Degree...
Adding a Data Reservoir to your Oracle Data Warehouse for Customer 360-Degree...
 
What is Big Data Discovery, and how it complements traditional business anal...
What is Big Data Discovery, and how it complements  traditional business anal...What is Big Data Discovery, and how it complements  traditional business anal...
What is Big Data Discovery, and how it complements traditional business anal...
 
Deploying Full Oracle BI Platforms to Oracle Cloud - OOW2015
Deploying Full Oracle BI Platforms to Oracle Cloud - OOW2015Deploying Full Oracle BI Platforms to Oracle Cloud - OOW2015
Deploying Full Oracle BI Platforms to Oracle Cloud - OOW2015
 
Delivering the Data Factory, Data Reservoir and a Scalable Oracle Big Data Ar...
Delivering the Data Factory, Data Reservoir and a Scalable Oracle Big Data Ar...Delivering the Data Factory, Data Reservoir and a Scalable Oracle Big Data Ar...
Delivering the Data Factory, Data Reservoir and a Scalable Oracle Big Data Ar...
 
End to-end hadoop development using OBIEE, ODI, Oracle Big Data SQL and Oracl...
End to-end hadoop development using OBIEE, ODI, Oracle Big Data SQL and Oracl...End to-end hadoop development using OBIEE, ODI, Oracle Big Data SQL and Oracl...
End to-end hadoop development using OBIEE, ODI, Oracle Big Data SQL and Oracl...
 
OBIEE11g Seminar by Mark Rittman for OU Expert Summit, Dubai 2015
OBIEE11g Seminar by Mark Rittman for OU Expert Summit, Dubai 2015OBIEE11g Seminar by Mark Rittman for OU Expert Summit, Dubai 2015
OBIEE11g Seminar by Mark Rittman for OU Expert Summit, Dubai 2015
 
BIWA2015 - Bringing Oracle Big Data SQL to OBIEE and ODI
BIWA2015 - Bringing Oracle Big Data SQL to OBIEE and ODIBIWA2015 - Bringing Oracle Big Data SQL to OBIEE and ODI
BIWA2015 - Bringing Oracle Big Data SQL to OBIEE and ODI
 
OGH 2015 - Hadoop (Oracle BDA) and Oracle Technologies on BI Projects
OGH 2015 - Hadoop (Oracle BDA) and Oracle Technologies on BI ProjectsOGH 2015 - Hadoop (Oracle BDA) and Oracle Technologies on BI Projects
OGH 2015 - Hadoop (Oracle BDA) and Oracle Technologies on BI Projects
 
UKOUG Tech'14 Super Sunday : Deep-Dive into Big Data ETL with ODI12c
UKOUG Tech'14 Super Sunday : Deep-Dive into Big Data ETL with ODI12cUKOUG Tech'14 Super Sunday : Deep-Dive into Big Data ETL with ODI12c
UKOUG Tech'14 Super Sunday : Deep-Dive into Big Data ETL with ODI12c
 
Part 1 - Introduction to Hadoop and Big Data Technologies for Oracle BI & DW ...
Part 1 - Introduction to Hadoop and Big Data Technologies for Oracle BI & DW ...Part 1 - Introduction to Hadoop and Big Data Technologies for Oracle BI & DW ...
Part 1 - Introduction to Hadoop and Big Data Technologies for Oracle BI & DW ...
 
Part 4 - Hadoop Data Output and Reporting using OBIEE11g
Part 4 - Hadoop Data Output and Reporting using OBIEE11gPart 4 - Hadoop Data Output and Reporting using OBIEE11g
Part 4 - Hadoop Data Output and Reporting using OBIEE11g
 
Part 2 - Hadoop Data Loading using Hadoop Tools and ODI12c
Part 2 - Hadoop Data Loading using Hadoop Tools and ODI12cPart 2 - Hadoop Data Loading using Hadoop Tools and ODI12c
Part 2 - Hadoop Data Loading using Hadoop Tools and ODI12c
 

Dernier

Call Girls In Attibele ☎ 7737669865 🥵 Book Your One night Stand
Call Girls In Attibele ☎ 7737669865 🥵 Book Your One night StandCall Girls In Attibele ☎ 7737669865 🥵 Book Your One night Stand
Call Girls In Attibele ☎ 7737669865 🥵 Book Your One night Stand
amitlee9823
 
CHEAP Call Girls in Rabindra Nagar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Rabindra Nagar  (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICECHEAP Call Girls in Rabindra Nagar  (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Rabindra Nagar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
9953056974 Low Rate Call Girls In Saket, Delhi NCR
 
Mg Road Call Girls Service: 🍓 7737669865 🍓 High Profile Model Escorts | Banga...
Mg Road Call Girls Service: 🍓 7737669865 🍓 High Profile Model Escorts | Banga...Mg Road Call Girls Service: 🍓 7737669865 🍓 High Profile Model Escorts | Banga...
Mg Road Call Girls Service: 🍓 7737669865 🍓 High Profile Model Escorts | Banga...
amitlee9823
 
Call Girls Indiranagar Just Call 👗 7737669865 👗 Top Class Call Girl Service B...
Call Girls Indiranagar Just Call 👗 7737669865 👗 Top Class Call Girl Service B...Call Girls Indiranagar Just Call 👗 7737669865 👗 Top Class Call Girl Service B...
Call Girls Indiranagar Just Call 👗 7737669865 👗 Top Class Call Girl Service B...
amitlee9823
 
Call Girls Jalahalli Just Call 👗 7737669865 👗 Top Class Call Girl Service Ban...
Call Girls Jalahalli Just Call 👗 7737669865 👗 Top Class Call Girl Service Ban...Call Girls Jalahalli Just Call 👗 7737669865 👗 Top Class Call Girl Service Ban...
Call Girls Jalahalli Just Call 👗 7737669865 👗 Top Class Call Girl Service Ban...
amitlee9823
 
Call Girls In Shalimar Bagh ( Delhi) 9953330565 Escorts Service
Call Girls In Shalimar Bagh ( Delhi) 9953330565 Escorts ServiceCall Girls In Shalimar Bagh ( Delhi) 9953330565 Escorts Service
Call Girls In Shalimar Bagh ( Delhi) 9953330565 Escorts Service
9953056974 Low Rate Call Girls In Saket, Delhi NCR
 
Abortion pills in Doha Qatar (+966572737505 ! Get Cytotec
Abortion pills in Doha Qatar (+966572737505 ! Get CytotecAbortion pills in Doha Qatar (+966572737505 ! Get Cytotec
Abortion pills in Doha Qatar (+966572737505 ! Get Cytotec
Abortion pills in Riyadh +966572737505 get cytotec
 
Chintamani Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore ...
Chintamani Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore ...Chintamani Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore ...
Chintamani Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore ...
amitlee9823
 
Vip Mumbai Call Girls Thane West Call On 9920725232 With Body to body massage...
Vip Mumbai Call Girls Thane West Call On 9920725232 With Body to body massage...Vip Mumbai Call Girls Thane West Call On 9920725232 With Body to body massage...
Vip Mumbai Call Girls Thane West Call On 9920725232 With Body to body massage...
amitlee9823
 
➥🔝 7737669865 🔝▻ mahisagar Call-girls in Women Seeking Men 🔝mahisagar🔝 Esc...
➥🔝 7737669865 🔝▻ mahisagar Call-girls in Women Seeking Men  🔝mahisagar🔝   Esc...➥🔝 7737669865 🔝▻ mahisagar Call-girls in Women Seeking Men  🔝mahisagar🔝   Esc...
➥🔝 7737669865 🔝▻ mahisagar Call-girls in Women Seeking Men 🔝mahisagar🔝 Esc...
amitlee9823
 
👉 Amritsar Call Girl 👉📞 6367187148 👉📞 Just📲 Call Ruhi Call Girl Phone No Amri...
👉 Amritsar Call Girl 👉📞 6367187148 👉📞 Just📲 Call Ruhi Call Girl Phone No Amri...👉 Amritsar Call Girl 👉📞 6367187148 👉📞 Just📲 Call Ruhi Call Girl Phone No Amri...
👉 Amritsar Call Girl 👉📞 6367187148 👉📞 Just📲 Call Ruhi Call Girl Phone No Amri...
karishmasinghjnh
 

Dernier (20)

Call Girls In Attibele ☎ 7737669865 🥵 Book Your One night Stand
Call Girls In Attibele ☎ 7737669865 🥵 Book Your One night StandCall Girls In Attibele ☎ 7737669865 🥵 Book Your One night Stand
Call Girls In Attibele ☎ 7737669865 🥵 Book Your One night Stand
 
CHEAP Call Girls in Rabindra Nagar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Rabindra Nagar  (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICECHEAP Call Girls in Rabindra Nagar  (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Rabindra Nagar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
 
Mg Road Call Girls Service: 🍓 7737669865 🍓 High Profile Model Escorts | Banga...
Mg Road Call Girls Service: 🍓 7737669865 🍓 High Profile Model Escorts | Banga...Mg Road Call Girls Service: 🍓 7737669865 🍓 High Profile Model Escorts | Banga...
Mg Road Call Girls Service: 🍓 7737669865 🍓 High Profile Model Escorts | Banga...
 
Call Girls Indiranagar Just Call 👗 7737669865 👗 Top Class Call Girl Service B...
Call Girls Indiranagar Just Call 👗 7737669865 👗 Top Class Call Girl Service B...Call Girls Indiranagar Just Call 👗 7737669865 👗 Top Class Call Girl Service B...
Call Girls Indiranagar Just Call 👗 7737669865 👗 Top Class Call Girl Service B...
 
April 2024 - Crypto Market Report's Analysis
April 2024 - Crypto Market Report's AnalysisApril 2024 - Crypto Market Report's Analysis
April 2024 - Crypto Market Report's Analysis
 
Call Girls Jalahalli Just Call 👗 7737669865 👗 Top Class Call Girl Service Ban...
Call Girls Jalahalli Just Call 👗 7737669865 👗 Top Class Call Girl Service Ban...Call Girls Jalahalli Just Call 👗 7737669865 👗 Top Class Call Girl Service Ban...
Call Girls Jalahalli Just Call 👗 7737669865 👗 Top Class Call Girl Service Ban...
 
Call Girls In Shalimar Bagh ( Delhi) 9953330565 Escorts Service
Call Girls In Shalimar Bagh ( Delhi) 9953330565 Escorts ServiceCall Girls In Shalimar Bagh ( Delhi) 9953330565 Escorts Service
Call Girls In Shalimar Bagh ( Delhi) 9953330565 Escorts Service
 
Discover Why Less is More in B2B Research
Discover Why Less is More in B2B ResearchDiscover Why Less is More in B2B Research
Discover Why Less is More in B2B Research
 
Abortion pills in Doha Qatar (+966572737505 ! Get Cytotec
Abortion pills in Doha Qatar (+966572737505 ! Get CytotecAbortion pills in Doha Qatar (+966572737505 ! Get Cytotec
Abortion pills in Doha Qatar (+966572737505 ! Get Cytotec
 
Anomaly detection and data imputation within time series
Anomaly detection and data imputation within time seriesAnomaly detection and data imputation within time series
Anomaly detection and data imputation within time series
 
DATA SUMMIT 24 Building Real-Time Pipelines With FLaNK
DATA SUMMIT 24  Building Real-Time Pipelines With FLaNKDATA SUMMIT 24  Building Real-Time Pipelines With FLaNK
DATA SUMMIT 24 Building Real-Time Pipelines With FLaNK
 
hybrid Seed Production In Chilli & Capsicum.pptx
hybrid Seed Production In Chilli & Capsicum.pptxhybrid Seed Production In Chilli & Capsicum.pptx
hybrid Seed Production In Chilli & Capsicum.pptx
 
5CL-ADBA,5cladba, Chinese supplier, safety is guaranteed
5CL-ADBA,5cladba, Chinese supplier, safety is guaranteed5CL-ADBA,5cladba, Chinese supplier, safety is guaranteed
5CL-ADBA,5cladba, Chinese supplier, safety is guaranteed
 
Midocean dropshipping via API with DroFx
Midocean dropshipping via API with DroFxMidocean dropshipping via API with DroFx
Midocean dropshipping via API with DroFx
 
Chintamani Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore ...
Chintamani Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore ...Chintamani Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore ...
Chintamani Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore ...
 
Vip Mumbai Call Girls Thane West Call On 9920725232 With Body to body massage...
Vip Mumbai Call Girls Thane West Call On 9920725232 With Body to body massage...Vip Mumbai Call Girls Thane West Call On 9920725232 With Body to body massage...
Vip Mumbai Call Girls Thane West Call On 9920725232 With Body to body massage...
 
Detecting Credit Card Fraud: A Machine Learning Approach
Detecting Credit Card Fraud: A Machine Learning ApproachDetecting Credit Card Fraud: A Machine Learning Approach
Detecting Credit Card Fraud: A Machine Learning Approach
 
➥🔝 7737669865 🔝▻ mahisagar Call-girls in Women Seeking Men 🔝mahisagar🔝 Esc...
➥🔝 7737669865 🔝▻ mahisagar Call-girls in Women Seeking Men  🔝mahisagar🔝   Esc...➥🔝 7737669865 🔝▻ mahisagar Call-girls in Women Seeking Men  🔝mahisagar🔝   Esc...
➥🔝 7737669865 🔝▻ mahisagar Call-girls in Women Seeking Men 🔝mahisagar🔝 Esc...
 
👉 Amritsar Call Girl 👉📞 6367187148 👉📞 Just📲 Call Ruhi Call Girl Phone No Amri...
👉 Amritsar Call Girl 👉📞 6367187148 👉📞 Just📲 Call Ruhi Call Girl Phone No Amri...👉 Amritsar Call Girl 👉📞 6367187148 👉📞 Just📲 Call Ruhi Call Girl Phone No Amri...
👉 Amritsar Call Girl 👉📞 6367187148 👉📞 Just📲 Call Ruhi Call Girl Phone No Amri...
 
(NEHA) Call Girls Katra Call Now 8617697112 Katra Escorts 24x7
(NEHA) Call Girls Katra Call Now 8617697112 Katra Escorts 24x7(NEHA) Call Girls Katra Call Now 8617697112 Katra Escorts 24x7
(NEHA) Call Girls Katra Call Now 8617697112 Katra Escorts 24x7
 

Enkitec E4 Barcelona : SQL and Data Integration Futures on Hadoop :

  • 1. info@rittmanmead.com www.rittmanmead.com @rittmanmead SQL and Data Integration Futures on Hadoop : A look into the future of analytics on Oracle BDA Mark Rittman, CTO, Rittman Mead Enkitec E4, Barcelona, April 2016
  • 2. info@rittmanmead.com www.rittmanmead.com @rittmanmead 2 •Mark Rittman, Co-Founder of Rittman Mead ‣Oracle ACE Director, specialising in Oracle BI&DW ‣14 Years Experience with Oracle Technology ‣Regular columnist for Oracle Magazine •Author of two Oracle Press Oracle BI books ‣Oracle Business Intelligence Developers Guide ‣Oracle Exalytics Revealed ‣Writer for Rittman Mead Blog :
 http://www.rittmanmead.com/blog •Email : mark.rittman@rittmanmead.com •Twitter : @markrittman About the Speaker
  • 3. info@rittmanmead.com www.rittmanmead.com @rittmanmead 3 •Started back in 1997 on a bank Oracle DW project •Our tools were Oracle 7.3.4, SQL*Plus, PL/SQL 
 and shell scripts •Went on to use Oracle Developer/2000 and Designer/2000 •Our initial users queried the DW using SQL*Plus •And later on, we rolled-out Discoverer/2000 to everyone else •And life was fun… 15+ Years in Oracle BI and Data Warehousing
  • 4. info@rittmanmead.com www.rittmanmead.com @rittmanmead 4 •Over time, this data warehouse architecture developed •Added Oracle Warehouse Builder to 
 automate and model the DW build •Oracle 9i Application Server (yay!) 
 to deliver reports and web portals •Data Mining and OLAP in the database •Oracle 9i for in-database ETL (and RAC) •Data was typically loaded from 
 Oracle RDBMS and EBS •It was turtles (Oracle) all the way down… The Oracle-Centric DW Architecture
  • 6. info@rittmanmead.com www.rittmanmead.com @rittmanmead 6 •Many customers and organisations are now running initiatives around “big data” •Some are IT-led and are looking for cost-savings around data warehouse storage + ETL •Others are “skunkworks” projects in the marketing department that are now scaling-up •Projects now emerging from pilot exercises •And design patterns starting to emerge Many Organisations are Running Big Data Initiatives
  • 7. info@rittmanmead.com www.rittmanmead.com @rittmanmead Highly Scalable (and Affordable) Cluster Computing •Enterprise High-End RDBMSs such as Oracle can scale into the petabytes, using clustering ‣Sharded databases (e.g. Netezza) can scale further but with complexity / single workload trade-offs •Hadoop was designed from the outset for massive horizontal scalability - using cheap hardware •Anticipates hardware failure and makes multiple copies of data as protection •The more nodes you add, the more stable it becomes •And at a fraction of the cost of traditional
 RDBMS platforms
  • 8. info@rittmanmead.com www.rittmanmead.com @rittmanmead •Store and analyze huge volumes of structured and unstructured data •In the past, we had to throw away the detail •No need to define a data model during ingest •Supports multiple, flexible schemas •Separation of storage from compute engine •Allows multiple query engines and frameworks
 to work on the same raw datasets Store Everything Forever - And Process in Many Ways Hadoop Data Lake Webserver
 Log Files (txt) Social Media
 Logs (JSON) DB Archives
 (CSV) Sensor Data
 (XML) `Spatial & Graph
 (XML, txt) IoT Logs
 (JSON, txt) Chat Transcripts
 (Txt) DB Transactions
 (CSV, XML) Blogs, Articles
 (TXT, HTML) Raw Data Processed Data NoSQL Key-Value
 Store DB Tabular Data
 (Hive Tables) Aggregates
 (Impala Tables) NoSQL Document 
 Store DB
  • 9. info@rittmanmead.com www.rittmanmead.com @rittmanmead 9 •Typical implementation of Hadoop and big data in an analytic context is the “data lake” •Additional data storage platform with cheap storage, flexible schema support + compute •Data lands in the data lake or reservoir in raw form, then minimally processed •Data then accessed directly by “data scientists”, or processed further into DW In the Context of BI & Analytics : The Data Reservoir Data Transfer Data Access Data Factory Data Reservoir Business Intelligence Tools Hadoop Platform File Based Integration Stream Based Integration Data streams Discovery & Development Labs Safe & secure Discovery and Development environment Data sets and samples Models and programs Marketing / Sales Applications Models Machine Learning Segments Operational Data Transactions Customer Master Data Unstructured Data Voice + Chat Transcripts ETL Based Integration Raw Customer Data Data stored in the original format (usually files) such as SS7, ASN.1, JSON etc. Mapped Customer Data Data sets produced by mapping and transforming raw data
  • 10. info@rittmanmead.com www.rittmanmead.com @rittmanmead 10 •Oracle Engineered system for big data processing and analysis •Start with Oracle Big Data Appliance Starter Rack - expand up to 18 nodes per rack •Cluster racks together for horizontal scale-out using enterprise-quality infrastructure Oracle Big Data Appliance Starter Rack + Expansion • Cloudera CDH + Oracle software • 18 High-spec Hadoop Nodes with InfiniBand switches for internal Hadoop traffic, optimised for network throughput • 1 Cisco Management Switch • Single place for support for H/W + S/W
 Deployed on Oracle Big Data Appliance Oracle Big Data Appliance Starter Rack + Expansion • Cloudera CDH + Oracle software • 18 High-spec Hadoop Nodes with InfiniBand switches for internal Hadoop traffic, optimised for network throughput • 1 Cisco Management Switch • Single place for support for H/W + S/W
 Enriched 
 Customer Profile Modeling Scoring Infiniband
  • 11. info@rittmanmead.com www.rittmanmead.com @rittmanmead •Programming model for processing large data sets in parallel on a cluster •Not specific to a particular language, but usually written in Java •Inspired by the map and reduce functions commonly used in functional programming ‣Map() performs filtering and sorting ‣Reduce() aggregates the output of mappers ‣and a Shuffle() step to redistribute output by keys •Resolved several complications of distributed computing: ‣Allows unlimited computations on unlimited data ‣Map and reduce functions can be easily distributed ‣Originated at Google; Hadoop was Yahoo’s open-source
 implementation of MapReduce, and the two became synonymous MapReduce - The Original Big Data Query Framework Mapper Filter, Project Mapper Filter, Project Mapper Filter, Project Reducer Aggregate Reducer Aggregate Output
 One HDFS file per reducer,
 in a directory
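The Mapper → Shuffle → Reducer flow on this slide can be sketched as a toy, single-process Python word count (an illustration of the programming model only, not Hadoop itself):

```python
from itertools import groupby
from operator import itemgetter

def map_phase(records):
    # Map: filter/project each input record into (key, value) pairs
    for line in records:
        for word in line.lower().split():
            yield (word, 1)

def shuffle_phase(pairs):
    # Shuffle: redistribute mapper output, grouped by key
    return groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0))

def reduce_phase(grouped):
    # Reduce: aggregate the values for each key
    for key, group in grouped:
        yield (key, sum(v for _, v in group))

lines = ["big data on hadoop", "sql on hadoop"]
result = dict(reduce_phase(shuffle_phase(map_phase(lines))))
print(result["hadoop"])  # → 2
```

In real Hadoop the map and reduce functions run as distributed tasks across the cluster, with the framework handling the shuffle and fault tolerance; the functional shape is the same.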
  • 12. info@rittmanmead.com www.rittmanmead.com @rittmanmead 12 •Originally developed at Facebook, now foundational within the Hadoop project •Allows users to query Hadoop data using a SQL-like language •Tabular metadata layer that overlays files, can interpret semi-structured data (e.g. JSON) •Generates MapReduce code to return required data •Extensible through SerDes and Storage Handlers •JDBC and ODBC drivers for most platforms/tools •Perfect for set-based access + batch ETL work Apache Hive : SQL Metadata + Engine over Hadoop
  • 13. info@rittmanmead.com www.rittmanmead.com @rittmanmead •Data integration tools such as Oracle Data Integrator can load and process Hadoop data •BI tools such as Oracle Business Intelligence 12c can report on Hadoop data •Generally use MapReduce and Hive to access data ‣ODBC and JDBC access to Hive tabular data ‣Allows Hadoop unstructured/semi-structured
 data on HDFS to be accessed like RDBMS Hive Provides a SQL Interface for BI + ETL Tools Access direct Hive or extract using ODI12c for structured OBIEE dashboard analysis What pages are people visiting? Who is referring to us on Twitter? What content has the most reach?
  • 14.
  • 15. ?
  • 16.
  • 18.
  • 23. But the future is fast
  • 24. And pretty damn exciting
  • 25.
  • 26. info@rittmanmead.com www.rittmanmead.com @rittmanmead 26 Hadoop 2.0 Processing Frameworks + Tools
  • 27. info@rittmanmead.com www.rittmanmead.com @rittmanmead 27 •MapReduce’s great innovation was to break processing down into distributed jobs •Jobs that have no functional dependency on each other, only upstream tasks •Provides a framework that is infinitely scalable and very fault tolerant •Hadoop handled job scheduling and resource management ‣All MapReduce code had to do was provide the “map” and “reduce” functions ‣Automatic distributed processing ‣Slow but extremely powerful Hadoop 1.0 and MapReduce
  • 28. info@rittmanmead.com www.rittmanmead.com @rittmanmead 28 •A typical Hive or Pig script compiles down into multiple MapReduce jobs •Each job stages its intermediate results to disk •Safe, but slow - write to disk, spin-up separate JVMs for each job MapReduce - Scales By Writing Intermediate Results to Disk SELECT LOWER(hashtags.text), COUNT(*) AS total_count FROM ( SELECT * FROM tweets WHERE regexp_extract(created_at,"(2015)*",1) = "2015" ) tweets LATERAL VIEW EXPLODE(entities.hashtags) t1 AS hashtags GROUP BY LOWER(hashtags.text) ORDER BY total_count DESC LIMIT 15 MapReduce Jobs Launched: Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 5.34 sec HDFS Read: 10952994 HDFS Write: 5239 SUCCESS Stage-Stage-2: Map: 1 Reduce: 1 Cumulative CPU: 2.1 sec HDFS Read: 9983 HDFS Write: 164 SUCCESS Total MapReduce CPU Time Spent: 7 seconds 440 msec OK 1 2
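The cost described here — each job staging its intermediate results to disk before the next job re-reads them — can be illustrated with a toy two-stage pipeline in Python (the record fields and temp file are made up for the example):

```python
import json, os, tempfile

def stage1(rows, out_path):
    # Job 1: filter, then write intermediate results to disk
    # (like an MR job writing its output to HDFS)
    with open(out_path, "w") as f:
        for r in rows:
            if r["year"] == 2015:
                f.write(json.dumps(r) + "\n")

def stage2(in_path):
    # Job 2: a separate job (new JVMs in MapReduce) re-reads
    # the staged file from disk and aggregates it
    counts = {}
    with open(in_path) as f:
        for line in f:
            tag = json.loads(line)["hashtag"]
            counts[tag] = counts.get(tag, 0) + 1
    return counts

rows = [{"year": 2015, "hashtag": "bigdata"},
        {"year": 2014, "hashtag": "oracle"},
        {"year": 2015, "hashtag": "bigdata"}]
tmp = tempfile.NamedTemporaryFile(delete=False).name
stage1(rows, tmp)
counts = stage2(tmp)
os.remove(tmp)
```

The disk round-trip between stages is what makes the approach safe (a failed job restarts from staged output) but slow.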
  • 29. info@rittmanmead.com www.rittmanmead.com @rittmanmead 29 •MapReduce 2 (MR2) splits the functionality of the JobTracker
 by separating resource management and job scheduling/monitoring •Introduces YARN (Yet Another Resource Negotiator) •Permits processing frameworks other than MR ‣For example, Apache Spark •Maintains backwards compatibility with MR1 •Introduced with CDH5+ MapReduce 2 and YARN Node
 Manager Node
 Manager Node
 Manager Resource
 Manager Client Client
  • 30. info@rittmanmead.com www.rittmanmead.com @rittmanmead 30 •Runs on top of YARN, provides a faster execution engine than MapReduce for Hive, Pig etc •Models processing as an entire data flow graph (DAG), rather than separate job steps ‣DAG (Directed Acyclic Graph) is a new programming style for distributed systems ‣Dataflow steps pass data between them as streams, rather than writing/reading from disk •Supports in-memory computation, enables Hive on Tez (Stinger) and Pig on Tez •Favoured In-memory / Hive v2 
 route by Hortonworks Apache Tez InputData TEZ DAG Map() Map() Map() Reduce() OutputData Reduce() Reduce() Reduce() InputData Map() Map() Reduce() Reduce()
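By contrast, a Tez-style dataflow can be sketched with Python generators, where each vertex of the DAG streams records to the next with no intermediate write to disk (a single-process illustration of the idea, not Tez itself):

```python
def source(rows):
    # Input vertex: emits records as a stream
    for r in rows:
        yield r

def map_step(stream, fn):
    # Dataflow vertex: transforms each record and passes it straight
    # on to the next vertex - no staging to disk between steps
    for r in stream:
        yield fn(r)

def reduce_step(stream):
    # Terminal vertex: aggregates the streamed records
    totals = {}
    for key in stream:
        totals[key] = totals.get(key, 0) + 1
    return totals

# Wire the vertices into a small DAG: source -> map -> reduce
pipeline = map_step(source(["#BigData", "#Oracle", "#bigdata"]), str.lower)
totals = reduce_step(pipeline)
```

Because the whole flow is planned as one graph, the engine can pipeline and keep data in memory rather than spinning up a fresh job (and disk write) per stage.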
  • 31. info@rittmanmead.com www.rittmanmead.com @rittmanmead 31 Tez Advantage - Drop-In Replacement for MR with Hive, Pig set hive.execution.engine=mr set hive.execution.engine=tez 4m 17s 2m 25s
  • 32.
  • 33. info@rittmanmead.com www.rittmanmead.com @rittmanmead 33 •Cloudera’s answer to Hive query response time issues •MPP SQL query engine running on Hadoop, bypasses MapReduce for direct data access •Mostly in-memory, but spills to disk if required •Uses Hive metastore to access Hive table metadata •Similar SQL dialect to Hive - not as rich though and no support for Hive SerDes, storage handlers etc Cloudera Impala - Fast, MPP-style Access to Hadoop Data
  • 34. info@rittmanmead.com www.rittmanmead.com @rittmanmead 34 •A replacement for Hive, but uses Hive concepts and
 data dictionary (metastore) •MPP (Massively Parallel Processing) query engine
 that runs within Hadoop ‣Uses same file formats, security,
 resource management as Hadoop •Processes queries in-memory •Accesses standard HDFS file data •Option to use Apache AVRO, RCFile,
 LZO or Parquet (column-store) •Designed for interactive, real-time
 SQL-like access to Hadoop How Impala Works Impala Hadoop HDFS etc BI Server Presentation Svr Cloudera Impala
 ODBC Driver Impala Hadoop HDFS etc Impala Hadoop HDFS etc Impala Hadoop HDFS etc Impala Hadoop HDFS etc Multi-Node
 Hadoop Cluster
  • 35. info@rittmanmead.com www.rittmanmead.com @rittmanmead 35 •Log into Impala Shell, run INVALIDATE METADATA command to refresh Impala table list •Run SHOW TABLES Impala SQL command to view tables available •Run COUNT(*) on main ACCESS_PER_POST table to see typical response time Enabling Hive Tables for Impala [oracle@bigdatalite ~]$ impala-shell Starting Impala Shell without Kerberos authentication [bigdatalite.localdomain:21000] > invalidate metadata; Query: invalidate metadata Fetched 0 row(s) in 2.18s [bigdatalite.localdomain:21000] > show tables; Query: show tables +-----------------------------------+ | name | +-----------------------------------+ | access_per_post | | access_per_post_cat_author | | … | | posts | |——————————————————————————————————-+ Fetched 45 row(s) in 0.15s [bigdatalite.localdomain:21000] > select count(*) 
 from access_per_post; Query: select count(*) from access_per_post +----------+ | count(*) | +----------+ | 343 | +----------+ Fetched 1 row(s) in 2.76s
  • 36. info@rittmanmead.com www.rittmanmead.com @rittmanmead 36 •Significant improvement over Hive response time •Now makes Hadoop suitable for ad-hoc querying Significantly-Improved Ad-Hoc Query Response Time vs Hive | Logical Query Summary Stats: Elapsed time 2, Response time 1, Compilation time 0 (seconds) Logical Query Summary Stats: Elapsed time 50, Response time 49, Compilation time 0 (seconds) Simple Two-Table Join against Hive Data Only Simple Two-Table Join against Impala Data Only vs
  • 37. info@rittmanmead.com www.rittmanmead.com @rittmanmead 37 •Beginners usually store data in HDFS using text file formats (CSV) but these have limitations •Apache AVRO often used for general-purpose processing ‣Splitability, schema evolution, in-built metadata, support for block compression •Parquet now commonly used with Impala due to column-orientated storage ‣Mirrors work in RDBMS world around column-store ‣Only return (project) the columns you require across a wide table Apache Parquet - Column-Orientated Storage for Analytics
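The column-projection benefit can be shown with a toy Python sketch: with a Parquet-style columnar layout, a query that only needs one column touches just that column's values (illustrative data, not the actual Parquet file format):

```python
# Row-orientated storage: each row carries every column, so projecting
# a single column still means scanning every full record
row_store = [
    {"id": 1, "name": "mark", "country": "UK", "spend": 120.0},
    {"id": 2, "name": "jon",  "country": "UK", "spend": 80.0},
]

# Column-orientated storage (Parquet-style): one contiguous array per
# column, so a query projecting only "spend" reads just that array
column_store = {
    "id": [1, 2],
    "name": ["mark", "jon"],
    "country": ["UK", "UK"],
    "spend": [120.0, 80.0],
}

total_spend = sum(column_store["spend"])   # touches one column only
```

On a wide table this is the difference between scanning every byte of every row and reading a single column's (often compressed) values.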
  • 38. info@rittmanmead.com www.rittmanmead.com @rittmanmead 38 •But Parquet (and HDFS) have significant limitation for real-time analytics applications ‣Append-only orientation, focus on column-store 
 makes streaming ingestion harder •Cloudera Kudu aims to combine best of HDFS + HBase ‣Real-time analytics-optimised ‣Supports updates to data ‣Fast ingestion of data ‣Accessed using SQL-style tables
 and get/put/update/delete API Cloudera Kudu - Combining Best of HBase and Column-Store
 • 39. info@rittmanmead.com www.rittmanmead.com @rittmanmead 39 •Part of Oracle Big Data 4.0 (BDA-only) ‣Also requires Oracle Database 12c, Oracle Exadata Database Machine •Extends Oracle Data Dictionary to cover Hive •Extends Oracle SQL and SmartScan to Hadoop •Extends Oracle Security Model over Hadoop ‣Fine-grained access control ‣Data redaction, data masking ‣Uses fast C-based readers where possible (vs. Hive MapReduce generation) ‣Map Hadoop parallelism to Oracle PQ ‣Big Data SQL engine works on top of YARN ‣Like Spark, Tez, MR2 Oracle Big Data SQL
[Architecture diagram: SQL queries issued at the Exadata Database Server are offloaded via SmartScan to the Exadata Storage Servers and, through Oracle Big Data SQL, to the Hadoop cluster]
  • 41.
 • 42. “where we’re going we don’t need roads”
 • 43. “where we’re going we don’t need roads”
 • 44. info@rittmanmead.com www.rittmanmead.com @rittmanmead •Apache Drill is another SQL-on-Hadoop project, one that focuses on schema-free data discovery •Inspired by Google Dremel; the innovation is querying raw data with schema optional •Automatically infers and detects schema from semi-structured datasets and NoSQL DBs •Join across different silos of data e.g. JSON records, Hive tables and HBase databases •Aimed at different use-cases than Hive - low-latency queries, discovery (think Endeca vs OBIEE) Introducing Apache Drill - “We Don’t Need No Roads”
 • 45. info@rittmanmead.com www.rittmanmead.com @rittmanmead •Most modern datasource formats embed their schema in the data (“schema-on-read”) •Apache Drill makes these as easy to join to traditional datasets as “point me at the data” •Cuts out unnecessary work in defining Hive schemas for data that’s self-describing •Supports joining across files, databases, NoSQL etc Self-Describing Data - Parquet, AVRO, JSON etc
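The schema-on-read idea above can be sketched in a few lines of Python: instead of declaring a schema up front (as a Hive DDL would), the schema is inferred from the self-describing records themselves. The sample records are made up; Drill performs this inference automatically at query time.

```python
import json

# Heterogeneous, self-describing JSON records - the second record adds a
# field the first one doesn't have (schema evolution / optional attributes).
records = [
    '{"name": "donut", "cal": 250}',
    '{"name": "bagel", "cal": 310, "topping": "seeds"}',
]

# Infer the union of fields and a type for each as the records are read.
schema = {}
for rec in records:
    for field, value in json.loads(rec).items():
        schema.setdefault(field, type(value).__name__)

print(sorted(schema.items()))
# → [('cal', 'int'), ('name', 'str'), ('topping', 'str')]
```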
 • 46. info@rittmanmead.com www.rittmanmead.com @rittmanmead •Files can exist either on the local filesystem, or on HDFS •Connection to directory or file defined in storage configuration •Can work with CSV, TXT, TSV etc •First row of file can provide schema (column names) Apache Drill and Text Files

SELECT * FROM dfs.`/tmp/csv_with_header.csv2`;
+-------+------+------+------+
| name  | num1 | num2 | num3 |
+-------+------+------+------+
| hello | 1    | 2    | 3    |
| hello | 1    | 2    | 3    |
| hello | 1    | 2    | 3    |
| hello | 1    | 2    | 3    |
| hello | 1    | 2    | 3    |
| hello | 1    | 2    | 3    |
| hello | 1    | 2    | 3    |
+-------+------+------+------+
7 rows selected (0.12 seconds)

SELECT * FROM dfs.`/tmp/csv_no_header.csv`;
+------------------------+
| columns                |
+------------------------+
| ["hello","1","2","3"]  |
| ["hello","1","2","3"]  |
| ["hello","1","2","3"]  |
| ["hello","1","2","3"]  |
| ["hello","1","2","3"]  |
| ["hello","1","2","3"]  |
| ["hello","1","2","3"]  |
+------------------------+
7 rows selected (0.112 seconds)
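The two Drill behaviours above - named columns when a header row is present, positional values when it isn't - can be sketched with Python's csv module. The file contents are inlined here instead of living under /tmp.

```python
import csv
import io

with_header = "name,num1,num2,num3\nhello,1,2,3\n"
no_header = "hello,1,2,3\n"

# Header row supplies the schema: rows come back as name->value mappings.
named = list(csv.DictReader(io.StringIO(with_header)))

# No header: rows are just positional lists, like Drill's `columns` array.
positional = list(csv.reader(io.StringIO(no_header)))

print(named[0]['name'])  # → hello
print(positional[0])     # → ['hello', '1', '2', '3']
```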
 • 47. info@rittmanmead.com www.rittmanmead.com @rittmanmead •JSON (Javascript Object Notation) documents are often used for data interchange •Exports from Twitter and other consumer services •Web service responses and other B2B interfaces •A more lightweight form of XML that is “self-describing” •Handles evolving schemas, and optional attributes •Drill treats each document as a row, and has features to •Flatten nested data (extract elements from arrays) •Generate key/value pairs for loosely structured data Apache Drill and JSON Documents

use dfs.iot;
show files;
select in_reply_to_user_id, text from `all_tweets.json` limit 5;
+---------------------+--------------------------------------------+
| in_reply_to_user_id | text                                       |
+---------------------+--------------------------------------------+
| null                | BI Forum 2013 in Brighton has now sold-out |
| null                | "Football has become a numbers game        |
| null                | Just bought Lyndsay Wise’s Book            |
| null                | An Oracle BI "Blast from the Past"         |
| 14716125            | Dilbert on Agile Programming               |
+---------------------+--------------------------------------------+
5 rows selected (0.229 seconds)

select name, flatten(fillings) as f
from dfs.users.`/donuts.json`
where f.cal < 300;
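The FLATTEN query above can be sketched in pure Python: each element of a nested array becomes its own output row, which the WHERE clause then filters. The donut record mirrors the shape of the dfs.users donuts.json example; the filling values themselves are made up.

```python
# One JSON-style document with a nested array of fillings.
donuts = [
    {'name': 'glazed', 'fillings': [{'cal': 270}, {'cal': 190}]},
    {'name': 'cream',  'fillings': [{'cal': 350}]},
]

# flatten(fillings): one output row per (donut, filling) pair,
# then filter on the flattened element, as in WHERE f.cal < 300.
rows = [
    (d['name'], f)
    for d in donuts
    for f in d['fillings']
    if f['cal'] < 300
]
print(rows)  # → [('glazed', {'cal': 270}), ('glazed', {'cal': 190})]
```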
 • 48. info@rittmanmead.com www.rittmanmead.com @rittmanmead •Drill can connect to Hive to make use of metastore (incl. multiple Hive metastores) •NoSQL databases (HBase etc) •Parquet files (native storage format - columnar + self describing) Apache Drill and Hive, HBase, Parquet Sources etc

USE hbase;
SELECT * FROM students;
+-------------+---------------------+------------------------------------------------------+
| row_key     | account             | address                                              |
+-------------+---------------------+------------------------------------------------------+
| [B@e6d9eb7  | {"name":"QWxpY2U="} | {"state":"Q0E=","street":"MTIzIEJhbGxtZXIgQXY="}     |
| [B@2823a2b4 | {"name":"Qm9i"}     | {"state":"Q0E=","street":"MSBJbmZpbml0ZSBMb29w"}     |
| [B@3b8eec02 | {"name":"RnJhbms="} | {"state":"Q0E=","street":"NDM1IFdhbGtlciBDdA=="}     |
| [B@242895da | {"name":"TWFyeQ=="} | {"state":"Q0E=","street":"NTYgU291dGhlcm4gUGt3eQ=="} |
+-------------+---------------------+------------------------------------------------------+

SELECT firstname, lastname FROM hiveremote.`customers` limit 10;
+------------+------------+
| firstname  | lastname   |
+------------+------------+
| Essie      | Vaill      |
| Cruz       | Roudabush  |
| Billie     | Tinnes     |
| Zackary    | Mockus     |
| Rosemarie  | Fifield    |
| Bernard    | Laboy      |
| Marianne   | Earman     |
+------------+------------+

SELECT * FROM dfs.`iot_demo/geodata/region.parquet`;
+--------------+--------------+-----------------------+
| R_REGIONKEY  | R_NAME       | R_COMMENT             |
+--------------+--------------+-----------------------+
| 0            | AFRICA       | lar deposits. blithe  |
| 1            | AMERICA      | hs use ironic, even   |
| 2            | ASIA         | ges. thinly even pin  |
| 3            | EUROPE       | ly final courts cajo  |
| 4            | MIDDLE EAST  | uickly special accou  |
+--------------+--------------+-----------------------+
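HBase stores raw bytes, so the student names and addresses in the query result above come back base64-encoded. Decoding a couple of the cell values from the output recovers the stored text:

```python
import base64

# Cell values copied from the HBase query result above.
name = base64.b64decode('QWxpY2U=').decode()  # first student's name column
state = base64.b64decode('Q0E=').decode()     # the shared state column

print(name)   # → Alice
print(state)  # → CA
```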
 • 49. info@rittmanmead.com www.rittmanmead.com @rittmanmead •Drill developed for real-time, ad-hoc data exploration with schema discovery on-the-fly •Individual analysts exploring new datasets, leveraging corporate metadata/data to help •Hive is more about large-scale, centrally curated set-based big data access •Drill models conceptually as JSON, vs. Hive’s tabular approach •Drill introspects schema from whatever it connects to, vs. formal modeling in Hive Apache Drill vs. Apache Hive
[Diagram: query-latency spectrum - Apache Drill covers Interactive Queries (Data Discovery, Tableau/VA), 100ms - 3mins; Apache Hive covers Reporting Queries (Canned Reports, OBIEE), 3mins - 20mins, and ETL & Batch Queries (ODI, Scripting, Informatica), 20mins - hours]
  • 51. what’s all this about “Spark”?
  • 52.
 • 53. info@rittmanmead.com www.rittmanmead.com @rittmanmead 53 •Another DAG execution engine running on YARN •More mature than Tez, with richer API and more vendor support •Uses concept of an RDD (Resilient Distributed Dataset) ‣RDDs are like tables or Pig relations, but can be cached in-memory ‣Great for in-memory transformations, or iterative/cyclic processes •Spark jobs comprise a DAG of tasks operating on RDDs •Access through Scala, Python or Java APIs •Related projects include ‣Spark SQL ‣Spark Streaming Apache Spark
 • 54. info@rittmanmead.com www.rittmanmead.com @rittmanmead 54 •Native support for multiple languages with identical APIs ‣Python - prototyping, data wrangling ‣Scala - functional programming features ‣Java - lower-level, application integration •Use of closures, iterations, and other common language constructs to minimize code •Integrated support for distributed + functional programming •Unified API for batch and streaming Rich Developer Support + Wide Developer Ecosystem

scala> val logfile = sc.textFile("logs/access_log")
14/05/12 21:18:59 INFO MemoryStore: ensureFreeSpace(77353) called with curMem=234759, maxMem=309225062
14/05/12 21:18:59 INFO MemoryStore: Block broadcast_2 stored as values to memory (estimated size 75.5 KB, free 294.6 MB)
logfile: org.apache.spark.rdd.RDD[String] = MappedRDD[31] at textFile at <console>:15
scala> logfile.count()
14/05/12 21:19:06 INFO FileInputFormat: Total input paths to process : 1
14/05/12 21:19:06 INFO SparkContext: Starting job: count at <console>:1
...
14/05/12 21:19:06 INFO SparkContext: Job finished: count at <console>:18, took 0.192536694 s
res7: Long = 154563
scala> val logfile = sc.textFile("logs/access_log").cache
scala> val biapps11g = logfile.filter(line => line.contains("/biapps11g/"))
biapps11g: org.apache.spark.rdd.RDD[String] = FilteredRDD[34] at filter at <console>:17
scala> biapps11g.count()
...
14/05/12 21:28:28 INFO SparkContext: Job finished: count at <console>:20, took 0.387960876 s
res9: Long = 403
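The same load-filter-count pattern as the Scala shell session can be sketched in plain Python (the log lines here are made-up sample data; in real Spark this would be sc.textFile(...).filter(...).count(), with the work distributed across the cluster):

```python
# Stand-in for the contents of logs/access_log.
log_lines = [
    'GET /biapps11g/login HTTP/1.1',
    'GET /obiee/dashboard HTTP/1.1',
    'GET /biapps11g/reports HTTP/1.1',
]

# Like an RDD transformation, filter() here is lazy - nothing runs until
# the count action below consumes it.
biapps11g = filter(lambda line: '/biapps11g/' in line, log_lines)
matches = sum(1 for _ in biapps11g)
print(matches)  # → 2
```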
 • 55. info@rittmanmead.com www.rittmanmead.com @rittmanmead 55 •Spark SQL, and Data Frames, allow RDDs in Spark to be processed using SQL queries •Bring in and federate additional data from JDBC sources •Load, read and save data in Hive, Parquet and other structured tabular formats Spark SQL - Adding SQL Processing to Apache Spark

val accessLogsFilteredDF = accessLogs
  .filter( r => ! r.agent.matches(".*(spider|robot|bot|slurp).*"))
  .filter( r => ! r.endpoint.matches(".*(wp-content|wp-admin).*")).toDF()
  .registerTempTable("accessLogsFiltered")

val topTenPostsLast24Hour = sqlContext.sql("SELECT p.POST_TITLE, p.POST_AUTHOR, COUNT(*) as total
  FROM accessLogsFiltered a
  JOIN posts p ON a.endpoint = p.POST_SLUG
  GROUP BY p.POST_TITLE, p.POST_AUTHOR
  ORDER BY total DESC LIMIT 10 ")

// Persist top ten table for this window to HDFS as parquet file
topTenPostsLast24Hour.save("/user/oracle/rm_logs_batch_output/topTenPostsLast24Hour.parquet", "parquet", SaveMode.Overwrite)
  • 56. info@rittmanmead.com www.rittmanmead.com @rittmanmead 56 Choosing the Appropriate SQL-on-Hadoop Engine Large-Scale Batch Fast and Ad-hoc? or…
 • 57. info@rittmanmead.com www.rittmanmead.com @rittmanmead 57 Choosing the Appropriate SQL-on-Hadoop Engine Centralised & Ordered Metadata Crazy, Complex and Changing Schema? or…
 • 58. info@rittmanmead.com www.rittmanmead.com @rittmanmead 58 Choosing the Appropriate SQL-on-Hadoop Engine Integrating with Existing Corporate DW Looking to Escape Legacy DWs to Hadoop? or…
  • 60.
 • 62. Check-out these Developer VMs:
http://www.cloudera.com/documentation/enterprise/5-3-x/topics/cloudera_quickstart_vm.html
http://hortonworks.com/products/sandbox/
http://www.oracle.com/technetwork/database/bigdata-appliance/oracle-bigdatalite-2104726.html
https://www.mapr.com/products/mapr-sandbox-hadoop/download-sandbox-drill
  • 64. info@rittmanmead.com www.rittmanmead.com @rittmanmead SQL and Data Integration Futures on Hadoop : A look into the future of analytics on Oracle BDA Mark Rittman, CTO, Rittman Mead Enkitec E4, Barcelona, April 2016