Hive & HBase For
Transaction Processing
Alan Gates
@alanfgates
Agenda
• Our goal
– Combine Apache Hive, HBase, Phoenix, and Calcite to build a single data store
that can be used for analytics and transaction processing
• But before we get to that we need to consider
– Some things happening in Hive
– Some things happening in Phoenix
Agenda
• Our goal
– Combine Apache Hive, HBase, Phoenix, and Calcite to build a single data store
that can be used for analytics and transaction processing
• But before we get to that we need to consider
– Some things happening in Hive
– Some things happening in Phoenix
A Brief History of Hive
• Initial goal was to make it easy to execute MapReduce using a familiar
language: SQL
– Most queries took minutes or hours
– Primarily used for batch ETL jobs
• Since 0.11 much has been done to support interactive and ad hoc queries
– Many new features focused on improving performance: ORC and Parquet, Tez and
Spark, vectorization
– As of Hive 0.14 (November 2014) TPC-DS query 3 (star-join, group, order, limit) using
ORC, Tez, and vectorization finishes in 9s for 200GB scale and 32s for 30TB scale.
– Still have ~2-5 second minimum for all queries
• Ongoing performance work with the goal of reaching sub-second response times
– Continued investment in vectorization
– LLAP
– Using Apache HBase for metastore
LLAP = Live Long And Process
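To make these levers concrete, here is a minimal HiveQL sketch of turning them on (hive.execution.engine and hive.vectorized.execution.enabled are standard Hive properties; the table name is illustrative):

  -- Run queries on Tez instead of MapReduce
  SET hive.execution.engine=tez;
  -- Enable vectorized query execution
  SET hive.vectorized.execution.enabled=true;
  -- Store the data in the ORC columnar format
  CREATE TABLE store_sales_orc STORED AS ORC
  AS SELECT * FROM store_sales;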
LLAP: Why?
• It is hard to be fast and flexible in Tez
– When a SQL session starts, a Tez AM is spun up (first-query cost)
– For subsequent queries, Tez containers can be either
– pre-allocated – fast but not flexible
– allocated and released for each query – flexible, but a startup cost for every query
• No caching of data between queries
– Even if data is in the OS cache, much of the IO cost is deserialization/vector
marshaling, which is not shared
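The pre-allocation trade-off above is visible in Hive's container prewarm settings for Tez – a sketch, assuming the hive.prewarm.* properties available in Hive of this era:

  -- Keep a pool of Tez containers warm between queries:
  -- fast startup, but the capacity is held even while idle
  SET hive.prewarm.enabled=true;
  SET hive.prewarm.numcontainers=10;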
LLAP: What
• LLAP is a node-resident daemon process
– Low latency by reducing setup cost
– Multi-threaded engine that runs smaller tasks for a query, including reads, filters,
and some joins
– Regular Tez tasks are still used for shuffles and other larger operators
• LLAP has an in-memory columnar data cache
– High throughput IO using Async IO Elevator with dedicated
thread and core per disk
– Low latency by providing data from in-memory (off heap)
cache instead of going to HDFS
– Store data in columnar format for vectorization irrespective
of underlying file type
– Security enforced across queries and users
• Uses YARN for resource management
[Diagram: a node running the LLAP process; a query fragment executes as a task inside the daemon, served from the LLAP in-memory columnar cache, which is backed by HDFS.]
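A hedged sketch of what switching this on looks like (LLAP was still on a branch at the time, so these property names, taken from the Hive releases that eventually shipped LLAP, should be read as illustrative):

  -- Run query fragments inside the LLAP daemons rather than plain Tez containers
  SET hive.execution.mode=llap;
  -- Enable LLAP's IO layer and its off-heap columnar cache
  SET hive.llap.io.enabled=true;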
LLAP: What
[Diagram: a Hive query fans out across multiple nodes, each running an LLAP process with its in-memory columnar cache over HDFS. Caption: the LLAP process runs on multiple nodes, accelerating Tez tasks.]
LLAP: Is and Is Not
• It is not MPP
– Data not shuffled between LLAP nodes (except in limited cases)
• It is not a replacement for Tez or Spark
– Configured engine still used to launch tasks for post-shuffle operations (e.g. hash
joins, distributed aggregations, etc.)
• It is not required; users can still use Hive without installing LLAP daemons
• It is a Map server, or a set of standing map tasks
• It is currently under development on the llap branch
• Hope to merge to master and do an alpha release next month
HBase Metastore: Why?
[ER diagram of the Hive metastore's RDBMS schema: more than 40 normalized tables with their indexes, including DBS, TBLS, PARTITIONS, COLUMNS_V2, SDS, SD_PARAMS, SERDES, SERDE_PARAMS, PARTITION_KEYS, PARTITION_KEY_VALS, PARTITION_PARAMS, TABLE_PARAMS, the skew tables (SKEWED_*), the grant tables (GLOBAL_PRIVS, DB_PRIVS, TBL_PRIVS, PART_PRIVS, TBL_COL_PRIVS, PART_COL_PRIVS), ROLES, IDXS, TAB_COL_STATS, PART_COL_STATS, FUNCS, and more. A single partition's metadata is spread across PARTITIONS, PARTITION_KEY_VALS, PARTITION_PARAMS, SDS, SD_PARAMS, SERDES, and SERDE_PARAMS.]
HBase Metastore: Why?
> 700 metastore queries to plan TPC-DS query 27!!!
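Part of the reason is normalization: reassembling even one partition object means joins like the following (an illustrative query against the schema above, not the literal DataNucleus-generated SQL):

  -- Fetch one partition's storage descriptor and serde:
  -- three tables for a single object, before touching params, columns, or stats
  SELECT p.PART_NAME, s.LOCATION, s.INPUT_FORMAT, sr.SLIB
  FROM PARTITIONS p
  JOIN SDS s ON p.SD_ID = s.SD_ID
  JOIN SERDES sr ON s.SERDE_ID = sr.SERDE_ID
  WHERE p.TBL_ID = ?;

The HBase layout described below replaces this with a single keyed get per object.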
HBase Metastore: Why?
• Object-relational mapping is an impedance mismatch
• The need to work across different DBs limits tuning opportunities
• No caching of catalog objects or stats in HiveServer2 or the Hive metastore
• Hadoop nodes cannot contact the RDBMS directly due to scale issues
– Forces all planning to be done up front
– Limits caching opportunities
• Solution: use HBase
– Can store objects directly, no need to normalize
– Already scales, performs, etc.
– Can store additional data not stored today due to RDBMS capacity limitations
– Can access the metadata from the cluster (e.g. LLAP, Tez AM)
But...
• HBase does not have transactions – the metastore needs them
– Tephra, Omid 2 (Yahoo), others working on this
• HBase is hard to administer and install
– Yes, we will need to improve this
– We will also need an embedded option for test/POC setups to keep HBase from
becoming a barrier to adoption
• Basically any work we need to do to HBase
for this is good since it benefits all HBase
users
HBase Metastore: How
• HBaseStore, a new implementation of RawStore that stores data in
HBase
• Not the default; users are still free to use an RDBMS
• Fewer than 10 tables in HBase
– DBS, TBLS, PARTITIONS, ... – basically one for each object type
– Common partition data factored out to significantly reduce size
• Layout highly optimized for SELECT and DML queries, longer
operations moved into DDL (e.g. grant)
• Extensive caching
– Of data catalog objects for the length of a query
– Of aggregated stats across queries and users
• Ongoing work in the hbase-metastore branch
• Will alpha release with LLAP soon
Agenda
• Our goal
– Combine Apache Hive, HBase, Phoenix, and Calcite to build a single data store
that can be used for analytics and transaction processing
• But before we get to that we need to consider
– Some things happening in Hive
– Some things happening in Phoenix
Apache Phoenix: Putting SQL Back in NoSQL
• SQL layer on top of HBase
• Originally oriented toward transaction processing
• Moving to add more analytics type operators
– Adding multiple join implementations
– Requests for OLAP functions (PHOENIX-154)
• Working on adding transactions (PHOENIX-1674)
• Moving to Apache Calcite for optimization (PHOENIX-1488)
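For context, a small example of the Phoenix dialect over HBase (CREATE TABLE, UPSERT, and SELECT are standard Phoenix syntax; the table itself is made up):

  -- The primary key becomes the HBase row key;
  -- other columns map onto HBase column qualifiers
  CREATE TABLE orders (
      order_id BIGINT NOT NULL PRIMARY KEY,
      customer VARCHAR,
      total    DECIMAL(10,2));

  -- Phoenix uses UPSERT for both insert and update
  UPSERT INTO orders VALUES (1, 'acme', 99.50);

  -- Aggregations like this are what the analytics work above targets
  SELECT customer, SUM(total) FROM orders GROUP BY customer;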
Agenda
• Our goal
– Combine Apache Hive, HBase, Phoenix, and Calcite to build a single data store
that can be used for analytics and transaction processing
• But before we get to that we need to consider
– Some things happening in Hive
– Some things happening in Phoenix
What If?
• We could share one O/JDBC driver?
• We could share one SQL dialect?
• Phoenix could leverage extensive analytics
functionality in Hive without re-inventing it
• Users could access their transactional and
analytics data in single SQL operations?
How?
• Insight #1: LLAP is a storage plus operations
server for Hive; we can swap it out for other
implementations
• Insight #2: Tez and Spark can do post-shuffle
operations (hash join, etc.) with LLAP or HBase
• Insight #3: Calcite (used by both Hive and
Phoenix) is built specifically to integrate
disparate data storage systems
Vision
• User picks the storage location for a table in CREATE TABLE (LLAP or HBase;
see the hypothetical sketch below)
• Transactions more efficient in HBase tables but
work in both
• Analytics more efficient in LLAP tables but work
in both
• Queries that require a shuffle use Tez or Spark for post-shuffle operators
[Diagram: queries arrive at a shared JDBC server; Calcite is used for planning and Phoenix for execution; tables live either in HBase or in LLAP on the cluster nodes, all backed by HDFS.]
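A hypothetical DDL sketch of this vision (no such syntax existed; the STORED AS LLAP/HBASE clauses are invented here purely to illustrate per-table placement):

  -- Hypothetical: transactional table placed in HBase
  CREATE TABLE accounts (id BIGINT, balance DECIMAL(12,2)) STORED AS HBASE;
  -- Hypothetical: analytics table placed in LLAP
  CREATE TABLE trades (account_id BIGINT, amount DECIMAL(12,2)) STORED AS LLAP;

  -- One query spans both stores, planned by Calcite, executed via Phoenix
  SELECT a.id, SUM(t.amount)
  FROM accounts a JOIN trades t ON a.id = t.account_id
  GROUP BY a.id;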
Hurdles
• Need to integrate types/data representation
• Need to integrate transaction management
• Work to do in Calcite to optimize transactional queries well