End to End Processing of 3.7 Million Telemetry Events per
Second Using Lambda Architecture
Saurabh Mishra Raghavendra Nandagopal
Who are We?
Saurabh Mishra
Solution Architect, Hortonworks Professional Services
@draftsperson
smishra@hortonworks.com
Raghavendra Nandagopal
Cloud Data Services Architect, Symantec
@speaktoraghav
raghavendra_nandagop@symantec.com
Cloud Platform Engineering
Symantec - Global Leader in Cyber Security
- Symantec is the world leader in providing security software for both enterprises and end users
- Thousands of enterprise customers and more than 400 million devices (PCs, tablets, and phones) rely on Symantec to help secure their assets from attacks, including their data centers, email, and other sensitive data
Cloud Platform Engineering (CPE)
- Build consolidated cloud infrastructure and platform services for next generation data powered
Symantec applications
- A big data platform for batch and stream analytics integrated with both private and public cloud.
- Open source components as building blocks
- Hadoop and Openstack
- Bridge feature gaps and contribute back
Agenda
• Security Data Lake @ Global Scale
• Infrastructure At Scale
• Telemetry Data Processing Architecture
• Tunable Targets
• Performance Benchmarks
• Service Monitoring
Security Data Lake @ Global Scale
Security Data Lake @ Global Scale
Products
HDFS
Analytic Applications, Workload Management (YARN)
Stream Processing
(Storm)
Real-time Results
(HBase, ElasticSearch)
Query
(Hive, Spark SQL)
Device
Agents
Telemetry Data
Data Transfer
Threat Protection
Inbound Messaging
(Data Import, Kafka)
Physical Machines, Public Cloud, Private Cloud
Lambda Architecture
Speed Layer
- Compensates for the high latency of updates to the serving layer
- Fast and incremental algorithms on real-time data
- The batch layer eventually overrides the speed layer
Serving Layer
- Random access to batch views
- Updated by the batch layer
Batch Layer
- Stores the master dataset
- Computes arbitrary views
Complexity Isolation
- Once data reaches the serving layer via the batch path, the speed layer's results can be discarded
Infrastructure At Scale
YARN Applications in Production
- 669388 submitted
- 646632 completed
- 4640 killed
- 401 failed
Hive in Production
- 25887 Tables
- 306 Databases
- 98493 Partitions
Storm in Production
- 210 Nodes, 50+ Topologies
Kafka in Production
- 80 Nodes
HBase in Production
- 135 Nodes
ElasticSearch in Production
- 62 Nodes
Ambari
Infrastructure At Scale
Centralized Logging and Metering
Ironic
Ansible
Cloudbreak
Hybrid Data Lake
OpenStack (Dev): 350 Nodes
Metal (Production): 600 Nodes
Public Cloud (Production): 200 Nodes
Telemetry Data Processing Architecture
Telemetry Data Processing Architecture
Telemetry Data Collector
Telemetry Gateway
Raw Events
Data Centers
Avro Serialized Telemetry
Opaque Trident Kafka Spout
Deserialized Objects
Transformation Functions
Transformation Topology
Trident Streams (micro-batch implementation, exactly-once semantics)
Persist Avro Objects
Avro Serialized Transformed Objects
ElasticSearch Ingestion Topology
Opaque Trident Kafka Spout
Trident ElasticSearch Writer Bolt
HBase Ingestion Topology
Trident HBase Bolt
Trident Hive Streaming
Opaque Trident Kafka Spout
YARN
Hive Ingestion Topology
Identity Topology
Tunable Targets
Operating System
Tuning Targets
● Operating System
● Disk
● Network
Tunables
● Disable Transparent Huge Pages:
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
● Disable swap
● Configure VM cache flushing
● Configure the I/O scheduler as deadline
● Disks: JBOD, ext4
Mount options: inode_readahead_blks=128,data=writeback,noatime,nodiratime
● Network: dual bonded 10 Gbps
rx-checksumming: on, tx-checksumming: on, scatter-gather: on, tcp-segmentation-offload: on
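As a sketch, the OS settings above map to commands and config entries like the following. The device name, mount point, and VM dirty-cache thresholds are illustrative assumptions; the slide does not state the values actually used:

```
# Disable Transparent Huge Pages (runtime; persist via rc.local or a tuned profile)
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag

# Disable swap and tune VM cache flushing (/etc/sysctl.conf; thresholds are placeholders)
vm.swappiness = 0
vm.dirty_background_ratio = 5
vm.dirty_ratio = 10

# deadline I/O scheduler for a data disk (sdb is illustrative)
echo deadline > /sys/block/sdb/queue/scheduler

# ext4 JBOD mount entry (/etc/fstab; device and mount point are illustrative)
/dev/sdb1 /data01 ext4 inode_readahead_blks=128,data=writeback,noatime,nodiratime 0 0
```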
Kafka
Tuning Targets
● Broker
● Producer
● Consumer
Tunables (Broker)
● replica.fetch.max.bytes
● socket.send.buffer.bytes
● socket.receive.buffer.bytes
● replica.socket.receive.buffer.bytes
● num.network.threads
● num.io.threads
● zookeeper.*.timeout.ms
Type: Metal (2.6 GHz E5-2660 v3, 12 x 4 TB JBOD, 128 GB DDR4 ECC); Cloud: AWS d2.8xlarge; Kafka v0.8.2.1
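The broker-side knobs above live in server.properties. A sketch with illustrative values only; the talk does not state the numbers actually deployed:

```
# server.properties sketch (Kafka 0.8.x) -- all values are placeholders
replica.fetch.max.bytes=209715200
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
replica.socket.receive.buffer.bytes=1048576
num.network.threads=8
num.io.threads=16
zookeeper.session.timeout.ms=30000
zookeeper.connection.timeout.ms=30000
```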
Kafka
Tunables (Producer)
● buffer.memory
● batch.size
● linger.ms
● compression.type
● socket.send.buffer.bytes
Kafka
Tunables (Consumer)
● num.consumer.fetchers
● socket.receive.buffer.bytes
● fetch.message.max.bytes
● fetch.min.bytes
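The producer and consumer properties above could look like the following fragment. Values are illustrative placeholders, not the settings from this deployment:

```
# Producer properties sketch (values illustrative)
buffer.memory=67108864
batch.size=1048576
linger.ms=50
compression.type=snappy
socket.send.buffer.bytes=1048576

# Consumer properties sketch (0.8-era consumer; values illustrative)
num.consumer.fetchers=4
socket.receive.buffer.bytes=1048576
fetch.message.max.bytes=209715200
fetch.min.bytes=1048576
```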
Storm
Tuning Targets
● Nimbus
● Supervisors
● Workers and Executors
● Topology
Tunables (Nimbus)
● Nimbus high availability: 4 Nimbus servers avoids downtime and performance degradation
● storm.codedistributor.class = HDFSCodeDistributor reduces load on ZooKeeper and decreases topology submission time
● topology.min.replication.count = 3, i.e. floor(number_of_nimbus_hosts/2 + 1)
● max.replication.wait.time.sec = -1
● code.sync.freq.secs = 2 mins
● storm.messaging.netty.buffer_size = 10 MB
● nimbus.thrift.threads = 256
Type: Metal (2.6 GHz E5-2660 v3, 2 x 500 GB SSD, 256 GB DDR4 ECC); Cloud: AWS r3.8xlarge; Storm v0.10.0.2.4
Storm
Tunables (Supervisors)
● Use supervisord to control Supervisors
● supervisor.slots.ports: min(number of hyper-threaded cores, total server memory / worker heap size)
● supervisor.childopts = -Xms4096m -Xmx4096m -verbose:gc -Xloggc:/var/log/storm/supervisor_%ID%_gc.log
Storm
Tunables (Workers and Executors)
Rule of thumb for our Storm use case (telemetry processing):
● CPU-bound tasks: 1 executor per worker
● IO-bound tasks: 8 executors per worker
● Fix the JVM memory for each worker based on the fetch size of the Kafka Trident spout and the split size of the bolt:
-Xms8g -Xmx8g -XX:MaxDirectMemorySize=2048m -XX:NewSize=2g -XX:MaxNewSize=2g -XX:+UseParNewGC -XX:MaxTenuringThreshold=2 -XX:SurvivorRatio=8 -XX:+UnlockDiagnosticVMOptions -XX:ParGCCardsPerStrideChunk=32768 -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+ParallelRefProcEnabled -XX:+CMSClassUnloadingEnabled -XX:CMSInitiatingOccupancyFraction=80 -XX:+UseCMSInitiatingOccupancyOnly -XX:-CMSConcurrentMTEnabled -XX:+AlwaysPreTouch
Storm
Tunables (Topology)
● topology.optimize = true
● topology.message.timeout.secs = 110
● topology.max.spout.pending = 3
● Remove topology.metrics.consumer.register (AMBARI-13237)
Incoming and outgoing queues:
● topology.transfer.buffer.size = 64 (batch size)
● topology.receiver.buffer.size = 16 (queue size)
● topology.executor.receive.buffer.size = 32768
● topology.executor.send.buffer.size = 32768
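Collected in one place, the topology settings above would form a storm.yaml / topology-config fragment like this (values copied from the slide):

```
# storm.yaml / per-topology config sketch
topology.optimize: true
topology.message.timeout.secs: 110
topology.max.spout.pending: 3
topology.transfer.buffer.size: 64
topology.receiver.buffer.size: 16
topology.executor.receive.buffer.size: 32768
topology.executor.send.buffer.size: 32768
```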
Storm
Tunables (Trident Topology)
● topology.trident.parallelism.hint = (number of worker nodes in the cluster * number of cores per worker node) - (number of acker tasks)
● kafka.consumer.fetch.size.byte = 209715200 (200 MB - yes, we process large batches!)
● kafka.consumer.buffer.size.byte = 209715200
● kafka.consumer.min.fetch.byte = 100428800
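The parallelism formula above can be sketched as shell arithmetic. The cluster dimensions below are made-up example values, not the numbers from this deployment:

```shell
# topology.trident.parallelism.hint = (worker nodes * cores per node) - acker tasks
# Example values only; substitute your own cluster dimensions.
worker_nodes=10
cores_per_node=16
acker_tasks=10
parallelism_hint=$(( worker_nodes * cores_per_node - acker_tasks ))
echo "parallelism_hint=$parallelism_hint"
```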
ZooKeeper
Tuning Targets
● Data and log directories
● Garbage collection
Tunables
● Keep data and log directories separate and on different mounts
● Separate ZooKeeper quorums of 5 servers each for Kafka, Storm, HBase, and the HA quorum
● ZooKeeper GC configuration:
-Xms4192m -Xmx4192m -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime -XX:+PrintGC -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -verbose:gc -Xloggc:/var/log/zookeeper/zookeeper_gc.log
Type: Metal (2.6 GHz E5-2660 v3, 2 x 400 GB SSD, 128 GB DDR4 ECC); Cloud: AWS r3.2xlarge; ZooKeeper v3.4.6
Elasticsearch
Tuning Targets
● Index parameters
● Garbage collection
Tunables
● bootstrap.mlockall: true
● indices.fielddata.cache.size: 25%
● threadpool.bulk.queue_size: 5000
● index.refresh_interval: 30s
● indices.memory.index_buffer_size: 10%
● index.store.type: mmapfs
● GC settings: -verbose:gc -Xloggc:/var/log/elasticsearch/elasticsearch_gc.log -Xss256k -Djava.awt.headless=true -XX:+UseCompressedOops -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:+DisableExplicitGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:ErrorFile=/var/log/elasticsearch_err.log -XX:ParallelGCThreads=8
● Bulk API for ingestion
● Dedicated client nodes
Type: Metal (2.6 GHz E5-2660 v3, 14 x 400 GB SSD, 256 GB DDR4 ECC); Cloud: AWS i2.4xlarge; Elasticsearch v1.7.5
HBase
Tuning Targets
● RegionServer GC
● HBase configurations
Tunables
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:+UseConcMarkSweepGC -Xmn2500m -XX:SurvivorRatio=4 -XX:CMSInitiatingOccupancyFraction=50 -XX:+UseCMSInitiatingOccupancyOnly -Xmx{{regionserver_heapsize}} -Xms{{regionserver_heapsize}} -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintPromotionFailure -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:${HBASE_LOG_DIR}/hbase-gc-regionserver.log.`date +'%Y%m%d%H%M'`"
Type: Metal (2.6 GHz E5-2660 v3, 14 x 400 GB SSD, 256 GB DDR4 ECC); Cloud: AWS i2.8xlarge; HBase v1.1.0
Hive
Tuning Targets
● Table Structure
● Partitioning and Bucketing Scheme
● ORC Tuning
Tunables (Table Structure)
● Use strings instead of binary fields
● Use integer fields
Type: Metal (2.6 GHz E5-2660 v3, 14 x 6 TB HDD, 256 GB DDR4 ECC); Cloud: AWS d2.8xlarge; Hive v1.2.1
Hive
Tunables (Partitioning and Bucketing)
● Partition by date timestamp
● Additional partitioning resulted in an explosion of the number of partitions, small files, and inefficient ORC compression
● Bucketing: if two tables are bucketed on the same column, they should use the same number of buckets to support joins
● Sorting: each table should optimize its sorting; the bucket column should typically be the first sorted column
Hive
Tunables (ORC)
● Table structure, bucketing, partitioning, and sorting all impact ORC performance
● ORC stripe size: the 128 MB default balances insert and query performance
● ORC uses ZLIB compression; smaller data size improves any query
● Predicate push-down
[Chart: number of YARN containers per query]
orc.compress = ZLIB (high-level compression; one of NONE, ZLIB, SNAPPY)
orc.compress.size = 262144 (number of bytes in each compression chunk)
orc.stripe.size = 130023424 (number of bytes in each stripe)
orc.row.index.stride = 64000 (number of rows between index entries; must be >= 1000)
orc.create.index = true (whether to create row indexes)
orc.bloom.filter.columns = "file_sha2" (comma-separated list of column names for which a bloom filter is created)
orc.bloom.filter.fpp = 0.05 (false-positive probability for the bloom filter; must be > 0.0 and < 1.0)
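As an illustration, the ORC properties in the table above could be applied in a Hive DDL like the following. The table and column names are hypothetical (only file_sha2 appears in the source); the property values are the ones listed above:

```sql
CREATE TABLE telemetry_events (      -- hypothetical table name
  event_time  TIMESTAMP,
  file_sha2   STRING,
  payload     STRING
)
PARTITIONED BY (dt STRING)           -- date-timestamp partitioning, per the earlier slide
STORED AS ORC
TBLPROPERTIES (
  'orc.compress'             = 'ZLIB',
  'orc.compress.size'        = '262144',
  'orc.stripe.size'          = '130023424',
  'orc.row.index.stride'     = '64000',
  'orc.create.index'         = 'true',
  'orc.bloom.filter.columns' = 'file_sha2',
  'orc.bloom.filter.fpp'     = '0.05'
);
```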
Hive Streaming
Tuning Targets
● Hive Metastore Stability
● Evaluate BatchSize & TxnsPerBatch
Tunables
● No Hive shell access; HiveServer2 only
● Multiple Hive metastore processes:
○ Compaction metastore: 5-10 compaction threads
○ Streaming metastore: connection pool of 5
● 16 GB heap size
● Metastore MySQL database scalability
● Maximum EPS was achieved by increasing the batch size and keeping TxnsPerBatch small
Performance Benchmarks
Benchmarking Suite
Kafka Producer Consumer Throughput Test
Storm Core and Trident Topologies
Standard Platform Test Suite
Hive TPC-DS
Kafka Producer and Consumer Tests
The benchmark set contains producer and consumer tests executed at various message sizes.
Producer and Consumer Together
● 100 bytes
● 1000 bytes - Average Telemetry Event Size
● 10000 bytes
● 100000 bytes
● 500000 bytes
● 1000000 bytes
Type of Tests
● Single thread, no replication
● Single thread, async 3x replication
● Single thread, sync 3x replication
● Throughput Versus Stored Data
Ingesting 10 Telemetry Sources in Parallel
Storm Topology
The benchmark set contains custom topologies for telemetry data-source transformation and ingestion, simulating end-to-end real-time streaming of telemetry.
● Storm Trident HDFS Telemetry Transformation and Ingestion
● Storm Trident Hive Telemetry Ingestion
● Storm Trident Hbase Telemetry Ingestion
● Storm Trident Elasticsearch Telemetry Ingestion
Ingesting 10 Telemetry Sources in Parallel
Standard Platform Tests
TeraSort benchmark suite (2 TB, 5 TB, 10 TB)
RandomWriter (write and sort): 10 GB of random data per node
DFS-IO Write and Read - TestDFSIO
NNBench (Write, Read, Rename and Delete)
MRBench
Data Load (Upload and Download)
TPC-DS 20TB
TPC-DS: Decision Support Performance Benchmarks
● Classic EDW Dimensional model
● Large fact tables
● Complex queries
Scale: 20TB
TPC-DS Benchmarking
Service Monitoring
Service Monitoring Architecture
Kafka Monitoring
ElasticSearch
Storm Kafka Lag
Storm Kafka Logging Collection
Thank You
Q&A
Presentation on how to chat with PDF using ChatGPT code interpreter
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
CNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of ServiceCNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of Service
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?
 
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonets
 

End to End Processing of 3.7 Million Telemetry Events per Second using Lambda Architecture

  • 1. End to End Processing of 3.7 Million Telemetry Events per Second Using Lambda Architecture Saurabh Mishra Raghavendra Nandagopal
  • 2. Who are We? Saurabh Mishra Solution Architect, Hortonworks Professional Services @draftsperson smishra@hortonworks.com Raghavendra Nandagopal Cloud Data Services Architect, Symantec @speaktoraghav raghavendra_nandagop@symantec.com
  • 3. Cloud Platform Engineering Symantec - Global Leader in Cyber Security - Symantec is the world leader in providing security software for both enterprises and end users - Thousands of enterprise customers and more than 400 million devices (PCs, tablets, and phones) rely on Symantec to help them secure their assets from attacks, including their data centers, emails, and other sensitive data Cloud Platform Engineering (CPE) - Build consolidated cloud infrastructure and platform services for next-generation data-powered Symantec applications - A big data platform for batch and stream analytics integrated with both private and public cloud - Open source components as building blocks - Hadoop and OpenStack - Bridge feature gaps and contribute back
  • 4. Agenda • Security Data Lake @ Global Scale • Infrastructure At Scale • Telemetry Data Processing Architecture • Tunable Targets • Performance Benchmarks • Service Monitoring
  • 5. Security Data Lake @ Global Scale
  • 6. Security Data Lake @ Global Scale Products HDFS Analytic Applications, Workload Management (YARN) Stream Processing (Storm) Real-time Results (HBase, ElasticSearch) Query (Hive, Spark SQL) Device Agents Telemetry Data Data Transfer Threat Protection Inbound Messaging (Data Import, Kafka) Physical Machine, Public Cloud, Private Cloud
  • 7. Lambda Architecture Speed Layer Batch Layer Serving Layer Complexity Isolation Once data reaches the serving layer via the batch path, the corresponding speed-layer results can be discarded Speed layer: compensates for the high latency of updates to the serving layer; fast, incremental algorithms on real-time data; the batch layer eventually overrides the speed layer Serving layer: random access to batch views, updated by the batch layer Batch layer: stores the master copy of the dataset, computes arbitrary views
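The "batch layer eventually overrides the speed layer" rule can be sketched in a few lines of Python. This is a minimal illustration of the merge semantics at read time, not Symantec's implementation; the view names and values are hypothetical:

```python
def merged_view(batch_view, speed_view):
    """Serving-layer read: batch results win; the speed layer only
    covers keys the latest batch recomputation has not reached yet."""
    view = dict(speed_view)   # fast, approximate, incremental results
    view.update(batch_view)   # batch recomputation overrides them
    return view

# Once a key appears in the batch view, its speed-layer value is discarded.
batch = {"events_2016_06_01": 3_700_000}
speed = {"events_2016_06_01": 3_650_000, "events_2016_06_02": 1_200_000}
print(merged_view(batch, speed))
```

The speed layer's approximate count for 2016-06-01 is replaced by the batch layer's exact one, while 2016-06-02 (not yet batch-processed) is still served from the speed layer.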
  • 9. YARN Applications in Production - 669388 submitted - 646632 completed - 4640 killed - 401 failed Hive in Production - 25887 Tables - 306 Databases - 98493 Partitions Storm in Production - 210 Nodes - 50+ Topologies Kafka in Production - 80 Nodes HBase in Production - 135 Nodes ElasticSearch in Production - 62 Nodes Ambari Infrastructure At Scale Centralized Logging and Metering Ironic Ansible Cloudbreak Hybrid Data Lake OpenStack (Dev) 350 Nodes Metal (Production) 600 Nodes Public Cloud (Production) 200 Nodes
  • 11. Telemetry Data Processing Architecture Telemetry Data Collector Telemetry Gateway Raw Events Data Centers Avro Serialized Telemetry Avro Serialized Telemetry Opaque Trident Kafka Spout Deserialized Objects Transformations Functions Transformation Topology Trident Streams (Micro batch implementation, Exactly Once semantics) Persist Avro Objects Avro Serialized Transformed Objects ElasticSearch Ingestion Topology Opaque Trident Kafka Spout Trident ElasticSearch Writer Bolt HBase Ingestion Topology Trident HBase Bolt Trident Hive Streaming Opaque Trident Kafka Spout YARN Hive Ingestion Topology Identity Topology
  • 13. Operating System Tuning Targets ● Operating System ● Disk ● Network Tunables ● Disable Transparent Huge Pages (echo never > /sys/kernel/mm/transparent_hugepage/defrag and /sys/kernel/mm/transparent_hugepage/enabled) ● Disable swap ● Configure VM cache flushing ● Configure IO scheduler as deadline ● Disk: JBOD, ext4 mount options inode_readahead_blks=128, data=writeback, noatime, nodiratime ● Network: dual bonded 10 Gbps; rx-checksumming: on, tx-checksumming: on, scatter-gather: on, tcp-segmentation-offload: on
  • 14. Kafka Tuning Targets ● Broker ● Producer ● Consumer Tunables ● replica.fetch.max.bytes ● socket.send.buffer.bytes ● socket.receive.buffer.bytes ● replica.socket.receive.buffer.bytes ● num.network.threads ● num.io.threads ● zookeeper.*.timeout.ms Type Metal 2.6 GHz E5-2660 v3 12 * 4TB JBOD 128 GB DDR4 ECC Cloud AWS: D2*8xlarge v 0.8.2.1
  • 15. Kafka Tuning Targets ● Broker ● Producer ● Consumer Tunables ● buffer.memory ● batch.size ● linger.ms ● compression.type ● socket.send.buffer.bytes Type Metal 2.6 GHz E5-2660 v3 12 * 4TB JBOD 128 GB DDR4 ECC Cloud AWS: D2*8xlarge v 0.8.2.1
  • 16. Kafka Tuning Targets ● Broker ● Producer ● Consumer Tunables ● num.consumer.fetchers ● socket.receive.buffer.bytes ● fetch.message.max.bytes ● fetch.min.bytes Type Metal 2.6 GHz E5-2660 v3 12 * 4TB JBOD 128 GB DDR4 ECC Cloud AWS: D2*8xlarge v 0.8.2.1
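The broker, producer, and consumer knobs listed on the three slides above map onto client configuration roughly as follows. A hedged sketch only: the producer values here are illustrative (the deck names the keys but not the production values), while the consumer fetch sizes use the figures from the Storm/Kafka slide later in the deck:

```python
# Illustrative Kafka 0.8.x-era client settings mirroring the slides'
# tunables; actual production values were workload-specific.
producer_config = {
    "batch.size": 16384,           # records batched per partition before send (illustrative)
    "linger.ms": 5,                # wait to fill batches: throughput vs latency (illustrative)
    "compression.type": "snappy",  # cheaper network/disk at some CPU cost (illustrative)
    "buffer.memory": 64 * 1024 * 1024,  # illustrative
}

consumer_config = {
    "num.consumer.fetchers": 4,            # illustrative
    "fetch.message.max.bytes": 209715200,  # 200 MB batches, per the deck
    "fetch.min.bytes": 100428800,          # per the deck
    "socket.receive.buffer.bytes": 8 * 1024 * 1024,  # illustrative
}

# Sanity check: the minimum fetch must fit inside the maximum fetch, and the
# broker-side replica.fetch.max.bytes must be at least as large as the
# consumer's maximum fetch, or replication falls behind consumers.
assert consumer_config["fetch.min.bytes"] <= consumer_config["fetch.message.max.bytes"]
```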
  • 17. Storm Tuning Targets ● Nimbus ● Supervisors ● Workers and Executors ● Topology Tunables ● Nimbus High Availability - 4 Nimbus servers avoid downtime and performance degradation ● storm.codedistributor.class = HDFSCodeDistributor reduces the workload on ZooKeeper and decreases topology submission time ● topology.min.replication.count = 3, i.e. floor(number_of_nimbus_hosts/2 + 1) ● max.replication.wait.time.sec = -1 ● code.sync.freq.secs = 2 mins ● storm.messaging.netty.buffer_size = 10 mb ● nimbus.thrift.threads = 256 Type Metal 2.6 GHz E5-2660 v3 2 * 500GB SSD 256 GB DDR4 ECC Cloud AWS: r3*8xlarge v 0.10.0.2.4
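The replication-count formula on the slide above is a simple majority rule over the Nimbus hosts; a small helper makes it concrete (the function name is ours):

```python
import math

def min_replication_count(nimbus_hosts: int) -> int:
    """topology.min.replication.count rule of thumb from the slide:
    a majority of Nimbus hosts must hold the topology code before
    submission is considered complete."""
    return math.floor(nimbus_hosts / 2 + 1)

# With the 4 Nimbus servers described above: floor(4/2 + 1) = 3,
# matching the configured topology.min.replication.count = 3.
print(min_replication_count(4))
```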
  • 18. Storm Tuning Targets ● Nimbus ● Supervisors ● Workers and Executors ● Topology Tunables ● Use supervisord to control Supervisors ● supervisor.slots.ports = min(number of HT cores, total server memory / worker heap size) ● supervisor.childopts = -Xms4096m -Xmx4096m -verbose:gc -Xloggc:/var/log/storm/supervisor_%ID%_gc.log Type Metal 2.6 GHz E5-2660 v3 2 * 500GB SSD 256 GB DDR4 ECC Cloud AWS: r3*8xlarge v 0.10.0.2.4
  • 19. Storm Tuning Targets ● Nimbus ● Supervisors ● Workers and Executors ● Topology Tunables Rule of Thumb! - Use case of Storm: telemetry processing ● CPU-bound tasks: 1 executor per worker ● IO-bound tasks: 8 executors per worker ● Fixed the JVM memory for each worker based on the fetch size of the Kafka Trident spout and the split size of the bolt -Xms8g -Xmx8g -XX:MaxDirectMemorySize=2048m -XX:NewSize=2g -XX:MaxNewSize=2g -XX:+UseParNewGC -XX:MaxTenuringThreshold=2 -XX:SurvivorRatio=8 -XX:+UnlockDiagnosticVMOptions -XX:ParGCCardsPerStrideChunk=32768 -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+ParallelRefProcEnabled -XX:+CMSClassUnloadingEnabled -XX:CMSInitiatingOccupancyFraction=80 -XX:+UseCMSInitiatingOccupancyOnly -XX:-CMSConcurrentMTEnabled -XX:+AlwaysPreTouch Type Metal 2.6 GHz E5-2660 v3 2 * 500GB SSD 256 GB DDR4 ECC Cloud AWS: r3*8xlarge v 0.10.0.2.4
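The executor rule of thumb above can be written down as a tiny helper, purely to make the heuristic explicit (the function and its name are ours, not part of the deck):

```python
def executors_per_worker(task_kind: str) -> int:
    """Slide's heuristic for the telemetry topologies: CPU-bound tasks
    get a dedicated executor per worker; IO-bound tasks can share a
    worker across 8 executors, since they mostly wait on the network."""
    if task_kind == "cpu":
        return 1
    if task_kind == "io":
        return 8
    raise ValueError("task_kind must be 'cpu' or 'io'")

print(executors_per_worker("io"))  # IO-bound: 8 executors per worker
```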
  • 20. Storm Tuning Targets ● Nimbus ● Supervisors ● Workers and Executors ● Topology Tunables ● topology.optimize = true ● topology.message.timeout.secs = 110 ● topology.max.spout.pending = 3 ● Remove topology.metrics.consumer.register (AMBARI-13237) Incoming queue and outgoing queue ● topology.transfer.buffer.size = 64 – batch size ● topology.receiver.buffer.size = 16 – queue size ● topology.executor.receive.buffer.size = 32768 ● topology.executor.send.buffer.size = 32768 Type Metal 2.6 GHz E5-2660 v3 2 * 500GB SSD 256 GB DDR4 ECC Cloud AWS: r3*8xlarge v 0.10.0.2.4
  • 21. Storm Tuning Targets ● Nimbus ● Supervisors ● Workers and Executors ● Topology Tunables ● topology.trident.parallelism.hint = (number of worker nodes in cluster * number of cores per worker node) - (number of acker tasks) ● kafka.consumer.fetch.size.byte = 209715200 (200MB - Yes! We process large batches) ● kafka.consumer.buffer.size.byte = 209715200 ● kafka.consumer.min.fetch.byte = 100428800 Type Metal 2.6 GHz E5-2660 v3 2 * 500GB SSD 256 GB DDR4 ECC Cloud AWS: r3*8xlarge v 0.10.0.2.4
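The parallelism-hint formula above is straightforward to evaluate; the cluster sizes below are hypothetical, chosen only to show the arithmetic:

```python
def trident_parallelism_hint(worker_nodes: int,
                             cores_per_node: int,
                             acker_tasks: int) -> int:
    """topology.trident.parallelism.hint formula from the slide:
    total cores across the cluster, minus the tasks reserved for ackers."""
    return worker_nodes * cores_per_node - acker_tasks

# e.g. 10 worker nodes with 32 cores each and 20 acker tasks:
print(trident_parallelism_hint(10, 32, 20))  # -> 300
```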
  • 22. ZooKeeper Tunables ● Keep data and log directories separate and on different mounts ● Separate ZooKeeper quorums of 5 servers each for Kafka, Storm, HBase, and the HA quorum ● ZooKeeper GC configuration -Xms4192m -Xmx4192m -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -Xloggc:gc.log -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime -XX:+PrintGC -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -verbose:gc -Xloggc:/var/log/zookeeper/zookeeper_gc.log Type Metal 2.6 GHz E5-2660 v3 2*400 GB SSD 128 GB DDR4 ECC Cloud AWS: r3.2xlarge v 3.4.6 Tuning Targets ● Data and Log directory ● Garbage Collection
  • 23. Elasticsearch Type Metal 2.6 GHz E5-2660 v3 14 * 400 GB SSD 256 GB DDR4 ECC Cloud AWS: i2.4xlarge Tunables ● bootstrap.mlockall: true ● indices.fielddata.cache.size: 25% ● threadpool.bulk.queue_size: 5000 ● index.refresh_interval: 30s ● index.memory.index_buffer_size: 10% ● index.store.type: mmapfs ● GC settings: -verbose:gc -Xloggc:/var/log/elasticsearch/elasticsearch_gc.log -Xss256k -Djava.awt.headless=true -XX:+UseCompressedOops -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:+DisableExplicitGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:ErrorFile=/var/log/elasticsearch_err.log -XX:ParallelGCThreads=8 ● Bulk API ● Client node v 1.7.5 Tuning Targets ● Index Parameters ● Garbage Collection
  • 24. HBase Tuning Targets ● Region server GC ● HBase Configurations Tunables export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:+UseConcMarkSweepGC -Xmn2500m -XX:SurvivorRatio=4 -XX:CMSInitiatingOccupancyFraction=50 -XX:+UseCMSInitiatingOccupancyOnly -Xmx{{regionserver_heapsize}} -Xms{{regionserver_heapsize}} -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintPromotionFailure -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:${HBASE_LOG_DIR}/hbase-gc-regionserver.log.`date +'%Y%m%d%H%M'`" Type Metal 2.6 GHz E5-2660 v3 14 * 400 GB SSD 256 GB DDR4 ECC Cloud AWS: i2.8xlarge v 1.1.0
  • 25. Hive Tuning Targets ● Table Structure ● Partition and Bucketing Scheme ● Orc Tuning Tunables ● Use strings instead of binaries. ● Use Integer fields. Type Metal 2.6 GHz E5-2660 v3 14 * 6TB HDD 256 GB DDR4 ECC Cloud AWS: d2*8xlarge v 1.2.1
  • 26. Hive Tuning Targets ● Table Structure ● Partition and Bucketing Scheme ● Orc Tuning Tunables ● Partitioning by date timestamp. ● Additional partitioning resulted in an explosion of the number of partitions, small file sizes, and inefficient ORC compression. ● Bucketing: if two tables bucket on the same column, they should use the same number of buckets to support joining. ● Sorting: each table should optimize its sorting; the bucket column typically should be the first sorted column. Type Metal 2.6 GHz E5-2660 v3 14 * 6TB HDD 256 GB DDR4 ECC Cloud AWS: d2*8xlarge v 1.2.1
  • 27. Hive Tuning Targets ● Table Structure ● Partition and Bucketing Scheme ● Orc Tuning Tunables ● Table structure, bucketing, partitioning, and sorting all impact ORC performance. ● ORC stripe size: the 128MB default balances insert-optimized and query-optimized layouts. ● ORC uses ZLIB compression; smaller data size improves any query. ● Predicate push down. Type Metal 2.6 GHz E5-2660 v3 14 * 6TB HDD 256 GB DDR4 ECC Cloud AWS: d2*8xlarge v 1.2.1 (Chart: No of YARN Containers Per Query) orc.compress = ZLIB - high-level compression (one of NONE, ZLIB, SNAPPY) orc.compress.size = 262144 - number of bytes in each compression chunk orc.stripe.size = 130023424 - number of bytes in each stripe orc.row.index.stride = 64000 - number of rows between index entries (must be >= 1000) orc.create.index = true - whether to create row indexes orc.bloom.filter.columns = "file_sha2" - comma-separated list of column names for which a bloom filter is created orc.bloom.filter.fpp = 0.05 - false positive probability for the bloom filter (must be > 0.0 and < 1.0)
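The ORC settings listed above are normally supplied as table properties at table-creation time. The snippet below renders them into a Hive TBLPROPERTIES clause; the property values come from the slide (the bloom-filter column `file_sha2` is the deck's example), but the rendering helper itself is ours:

```python
# ORC table properties from the slide, keyed exactly as Hive expects them.
orc_props = {
    "orc.compress": "ZLIB",
    "orc.compress.size": "262144",
    "orc.stripe.size": "130023424",
    "orc.row.index.stride": "64000",
    "orc.create.index": "true",
    "orc.bloom.filter.columns": "file_sha2",
    "orc.bloom.filter.fpp": "0.05",
}

# Render a TBLPROPERTIES clause suitable for appending to a
# CREATE TABLE ... STORED AS ORC statement.
tblproperties = "TBLPROPERTIES (%s)" % ", ".join(
    '"%s"="%s"' % kv for kv in sorted(orc_props.items())
)
print(tblproperties)
```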
  • 28. Hive Streaming Tuning Targets ● Hive Metastore Stability ● Evaluate BatchSize & TxnsPerBatch Tunables ● No Hive shell access; HiveServer2 only. ● Multiple Hive Metastore processes ○ Compaction metastore - 5 - 10 compaction threads ○ Streaming metastore - 5 - connection pool ● 16 GB heap size. ● Metastore MySQL database scalability. ● Maximum EPS was achieved by increasing BatchSize while keeping TxnsPerBatch small. Type Metal 2.6 GHz E5-2660 v3 14 * 6TB HDD 256 GB DDR4 ECC Cloud AWS: d2*8xlarge v 1.2.1
  • 30. Benchmarking Suite Kafka Producer Consumer Throughput Test Storm Core and Trident Topologies Standard Platform Test Suite Hive TPC
  • 31. Kafka Producer and Consumer Tests The benchmark set contains producer and consumer tests executed at various message sizes. Producer and Consumer Together ● 100 bytes ● 1000 bytes - Average Telemetry Event Size ● 10000 bytes ● 100000 bytes ● 500000 bytes ● 1000000 bytes Type of Tests ● Single thread, no replication ● Single thread, async 3x replication ● Single thread, sync 3x replication ● Throughput versus stored data Ingesting 10 Telemetry Sources in Parallel
  • 32. Storm Topology The benchmark set contains custom topologies for telemetry transformation and ingestion that simulate the end-to-end use cases for real-time streaming of telemetry. ● Storm Trident HDFS Telemetry Transformation and Ingestion ● Storm Trident Hive Telemetry Ingestion ● Storm Trident HBase Telemetry Ingestion ● Storm Trident Elasticsearch Telemetry Ingestion Ingesting 10 Telemetry Sources in Parallel
  • 33. Standard Platform Tests TeraSort benchmark suite (2TB, 5TB, 10TB) RandomWriter (Write and Sort) - 10GB of random data per node DFS-IO Write and Read - TestDFSIO NNBench (Write, Read, Rename and Delete) MRBench Data Load (Upload and Download)
  • 34. TPC-DS 20TB TPC-DS: Decision Support Performance Benchmarks ● Classic EDW Dimensional model ● Large fact tables ● Complex queries Scale: 20TB TPC-DS Benchmarking
  • 40. Storm Kafka Logging Collection