Elasticsearch for logs and metrics
(a deep dive)
Rafał Kuć and Radu Gheorghe
Sematext Group, Inc.
About us
Products: Logsene (ES API, logs), SPM (metrics), ...
Services
Agenda
Index layout
Cluster layout
Per-index tuning of settings and mappings
Hardware+OS options
Pipeline patterns
Daily indices are a good start
(indexing and most searches hit the latest daily index)
Indexing is faster in smaller indices
Cheap deletes
Search only needed indices
“Static” indices can be cached
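For example, with daily indices (a minimal sketch; the index names and the status field are illustrative):
# cheap delete: drop a whole day at once instead of deleting documents
curl -XDELETE localhost:9200/logs_2016.11.01
# search only the indices that cover the queried time range
curl -XGET 'localhost:9200/logs_2016.11.06,logs_2016.11.07/_search?q=status:500'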
The Black Friday problem*
* for logs. Metrics usually don’t suffer from this
Typical indexing performance graph for one shard*
* throttled so search performance remains decent
At this point it’s better to index in a new shard
Typically 5-10GB, YMMV
INDEX
Y U NO AS FAST
Mostly because of more merges and more expensive (+uncached) searches
Rotate by size*
* use Field Stats for queries or rely on query cache:
https://github.com/elastic/kibana/issues/6644
Aliases; Rollover Index API*
* 5.0 feature
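A hedged sketch of the 5.0 Rollover Index API (alias name, index name and conditions are only illustrative):
# create the first index and point a write alias at it
curl -XPUT localhost:9200/logs-000001 -d '{ "aliases": { "logs_write": {} } }'
# roll over to a fresh index once the current one is big or old enough
curl -XPOST localhost:9200/logs_write/_rollover -d '{
  "conditions": { "max_docs": 50000000, "max_age": "7d" }
}'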
Slicing data by time
For spiky ingestion, use size-based indices
Make sure you rotate before the performance drop
(test on one node to get that limit)
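One way to watch index/shard size while testing (a sketch; the logs-* pattern is illustrative):
curl 'localhost:9200/_cat/indices/logs-*?v&h=index,pri,docs.count,store.size'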
Multi tier architecture (aka hot/cold)
Client, Data, Master (and Ingest) node tiers
We can optimize the data nodes layer
Multi tier architecture (aka hot/cold)
Indexing goes to logs_2016.11.07 on es_hot_1; es_cold_1 and es_cold_2 form the cold tier
Multi tier architecture (aka hot/cold)
logs_2016.11.08 is now indexed on es_hot_1; logs_2016.11.07 is moved to the cold tier:
curl -XPUT localhost:9200/logs_2016.11.07/_settings -d '{
"index.routing.allocation.exclude.tag" : "hot",
"index.routing.allocation.include.tag": "cold"
}'
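For the allocation filtering above to work, nodes need a matching tag attribute; a sketch using the 5.0-style -E flags shown later in this deck (the attribute name simply mirrors the example above):
bin/elasticsearch -Enode.attr.tag=hot    # on hot-tier nodes
bin/elasticsearch -Enode.attr.tag=cold   # on cold-tier nodes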
Multi tier architecture (aka hot/cold)
Indexing continues on logs_2016.11.08 (es_hot_1); logs_2016.11.07 now lives on the cold tier (es_cold_1, es_cold_2)
Multi tier architecture (aka hot/cold)
es_hot_1: logs_2016.11.11 - indexing, most searches; good CPU, best possible IO (SSD or RAID0 for spinning)
es_cold_1, es_cold_2: logs_2016.11.07 - logs_2016.11.10 - long running searches; heap, IO for backup/replication and stats
Hot - cold architecture summary
Cost optimization - different hardware for each tier
Performance - the above + fewer shards, less overhead
Isolation - long-running searches don't affect indexing
Elasticsearch high availability & fault tolerance
Dedicated masters are a must
discovery.zen.minimum_master_nodes = N/2 + 1
Keep your indices balanced
an unbalanced cluster can lead to instability
Balanced primaries are also good
helps with backups, moving to the cold tier, etc.
total_shards_per_node is your friend
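For example (a sketch; the values are illustrative, not recommendations):
# elasticsearch.yml, with 3 master-eligible nodes: 3/2 + 1 = 2
discovery.zen.minimum_master_nodes: 2

# cap how many shards of one index can land on a single node
curl -XPUT localhost:9200/logs_2016.11.07/_settings -d '{
  "index.routing.allocation.total_shards_per_node": 2
}'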
Elasticsearch high availability & fault tolerance
When in AWS - spread across availability zones
bin/elasticsearch -Enode.attr.zone=zoneA
cluster.routing.allocation.awareness.attributes: zone
We need headroom for spikes
leave at least 20 - 30% for indexing & search spikes
Large machines with many shards?
look out for GC - many clusters have died because of that
consider running more, smaller ES instances instead
Which settings to tune
Merges → most indexing time
Refreshes → check refresh_interval
Flushes → normally OK with ES defaults
Relaxing the merge policy
Fewer merges ⇒ faster indexing/lower CPU while indexing
Slower searches, but:
- there’s more spare CPU
- aggregations aren’t as affected, and they are typically the bottleneck,
especially for metrics
More open files (keep an eye on them!)
Increase index.merge.policy.segments_per_tier ⇒ more segments, fewer merges
Increase max_merge_at_once, too, but not as much ⇒ reduced spikes
Reduce max_merged_segment ⇒ no more huge merges, but more small ones
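A hedged example of relaxing the merge policy on one index (values are illustrative starting points, not recommendations):
curl -XPUT localhost:9200/logs_2016.11.07/_settings -d '{
  "index.merge.policy.segments_per_tier": 20,
  "index.merge.policy.max_merge_at_once": 15,
  "index.merge.policy.max_merged_segment": "2gb"
}'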
And even more settings
Refresh interval (index.refresh_interval)*
- 1s -> baseline indexing throughput
- 5s -> +25% to baseline throughput
- 30s -> +75% to baseline throughput
Higher indices.memory.index_buffer_size ⇒ higher throughput
Lower indices.queries.cache.size for high-velocity data to free up heap
Omit norms (frequencies and positions, too?)
Don't store fields if _source is used
Don't store the catch-all (i.e. _all) field - its data is copied from other fields
* https://sematext.com/blog/2013/07/08/elasticsearch-refresh-interval-vs-indexing-performance/
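A quick sketch of where these settings live (values are illustrative):
# per index, dynamic:
curl -XPUT localhost:9200/logs_2016.11.07/_settings -d '{ "index.refresh_interval": "30s" }'

# per node, elasticsearch.yml:
indices.memory.index_buffer_size: 30%
indices.queries.cache.size: 5%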
Let’s dive deeper into storage
No searches on a field, just aggregations ⇒ index=false
No sorting/aggregating on a field ⇒ doc_values=false
Doc values can be used for retrieving (see docvalue_fields), so:
● Logs: use doc values for retrieving, exclude them from _source*
● Metrics: short fields normally ⇒ disable _source, rely on doc values
Long retention for logs? For “old” indices:
● set index.codec=best_compression
● force merge to a few segments
* though you’ll lose highlighting, update API, reindex API...
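A hedged sketch tying these options together (field names are hypothetical; syntax is 5.x-style):
curl -XPUT localhost:9200/logs_2016.11.08 -d '{
  "settings": { "index.codec": "best_compression" },
  "mappings": {
    "log": {
      "_all": { "enabled": false },
      "_source": { "excludes": ["status_code"] },
      "properties": {
        "message":     { "type": "text", "norms": false },
        "status_code": { "type": "keyword" },
        "bytes_sent":  { "type": "long", "index": false },
        "client_ip":   { "type": "ip", "doc_values": false }
      }
    }
  }
}'
# retrieve the excluded field from doc values instead of _source
curl -XGET localhost:9200/logs_2016.11.08/_search -d '{ "docvalue_fields": ["status_code"] }'
# for "old" indices: force merge down to a few segments
curl -XPOST 'localhost:9200/logs_2016.11.08/_forcemerge?max_num_segments=1'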
Metrics: working around sparse data
Ideally, you’d have one index per metric type (what you can fetch with one call)
Combining them into one (sparse) index will impact performance (see LUCENE-7253)
One doc per metric: you’ll pay with space
Nested documents: you’ll pay with heap (bitset used for joins) and query latency
What about the OS?
Say no to swap
Disk scheduler: CFQ for HDD, deadline for SSD
Mount options: noatime, nodiratime, data=writeback, nobarrier
because strict ordering is for the weak
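A hedged sketch of the corresponding commands (device name and mount point are placeholders):
# say no to swap
sudo swapoff -a
# deadline scheduler for an SSD (cfq for spinning disks)
echo deadline | sudo tee /sys/block/sdX/queue/scheduler
# example /etc/fstab entry for the data volume (ext4)
/dev/sdX1  /var/lib/elasticsearch  ext4  noatime,nodiratime,data=writeback,nobarrier  0  0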
And hardware?
Hot tier: typical bottlenecks are CPU and IO throughput
indexing is CPU-intensive
flushes and merges write (and read) lots of data
Cold tier: memory (heap) and IO latency
more data here ⇒ more indices & shards ⇒ more heap
⇒ searches hit more files
many stats calls are per shard ⇒ they can choke IO even when the cluster is idle
Generally:
network storage needs to be really good (esp. for cold tier)
network needs to be low latency (pings, cluster state replication)
network throughput is needed for replication/backup
AWS specifics
c3 instances work, but there’s not enough local SSD ⇒ EBS gp2 SSD*
c4 + EBS give similar performance, but cheaper
i2s are good, but expensive
d2s are better value, but can’t deal with many shards (spinning disk latency)
m4 + gp2 EBS are a good balance
gp2 → PIOPS is expensive, spinning is slow
3 IOPS/GB, but caps at 160MB/s or 10K IOPS (of up to 256kb) per drive
performance isn’t guaranteed (for gp2) ⇒ one slow drive slows RAID0
Enhanced Networking (and EBS Optimized if applicable) are a must
* And use local SSD as cache, with --cachemode writeback for async writing:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/lvm_cache_volume_creation.html
(block size?)
The pipeline: read → buffer → deliver
(this is reason #1 for using a log shipper)
Files? Sockets? Network?
What if the buffer fills up?
Processing before/after the buffer? How?
Others besides Elasticsearch? How to buffer if $destination is down?
Overview of 6 log shippers: sematext.com/blog/2016/09/13/logstash-alternatives/
Types of buffers
The log file itself (application.log) can act as a buffer
Memory and/or disk of the log shipper, or a dedicated tool for buffering
Where to do processing
Logstash (or Filebeat or…) → Buffer (Kafka/Redis) → Logstash (processing here) → Elasticsearch
… possibly also something else besides Elasticsearch, but then the outputs need to be in sync
Or: Logstash → Kafka, with separate consumers (Logstash → Elasticsearch, something else), each tracking its own offset and doing its own processing
Where to do processing (syslog-ng, fluentd…): at the input, or per output (Elasticsearch, something else)
Where to do processing (rsyslogd…): at the input or at any later stage of the pipeline
Zoom into processing
Ideally, log in JSON
Otherwise, parse
For performance and maintenance
(i.e. no need to update parsing rules)
Regex-based (e.g. grok)
Easy to build rules
Rules are flexible
Slow & O(n) on # of rules
Tricks:
Move matching patterns to the top of the list
Move broad patterns to the bottom
Skip patterns that include other patterns which didn’t match
Grammar-based
(e.g. liblognorm, PatternDB)
Faster. O(1) on # of rules. References:
Logagent
Logstash
rsyslog syslog-ng
sematext.com/blog/2015/05/18/tuning-elasticsearch-indexing-pipeline-for-logs/
www.fernuni-hagen.de/imperia/md/content/rechnerarchitektur/rainer_gerhards.pdf
Back to buffers: check what happens when they fill up
Local files: when are they rotated/archived/deleted?
TCP: what happens when connection breaks/times out?
UNIX sockets: what happens when socket blocks writes?
UDP: network buffers should handle spiky load
Check/increase net.core.rmem_max and net.core.rmem_default
Unlike UDP & TCP, both DGRAM and STREAM local (UNIX) sockets are reliable/blocking
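For example (values are illustrative; persist them in /etc/sysctl.conf):
# check current limits
sysctl net.core.rmem_max net.core.rmem_default
# raise them so spiky UDP traffic isn't dropped
sudo sysctl -w net.core.rmem_max=33554432
sudo sysctl -w net.core.rmem_default=33554432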
Let’s talk protocols now
UDP: cool for the app (no failure/backpressure handling needed), but not reliable
TCP: more reliable, but not completely - the app gets an ACK when the OS buffer gets the data ⇒ no retransmit if that buffer is lost*
Application-level ACKs may be needed (sender ⇆ ACKs ⇆ receiver)
* more at blog.gerhards.net/2008/05/why-you-cant-build-reliable-tcp.html
Protocol - example shippers:
HTTP: Logstash, rsyslog, syslog-ng, Fluentd, Logagent
RELP: rsyslog, Logstash
Beats: Filebeat, Logstash
Kafka: Fluentd, Filebeat, rsyslog, syslog-ng, Logstash
Wrapping up: where to log?
critical? no ⇒ UDP. Increase network buffers on the destination, so it can handle spiky traffic
critical? yes ⇒ paying with RAM or IO?
RAM ⇒ UNIX socket. Local shipper with memory buffers, which can drop data if needed
IO ⇒ local files. Make sure rotation is in place or you’ll run out of disk!
Flow patterns (1 of 5)
application.log → Logstash → Elasticsearch (one Logstash per log source)
Easy & flexible
Overhead
Flow patterns (2 of 5)
application.log → Filebeat → Elasticsearch (with Ingest)
Light & simple
Harder to scale processing
sematext.com/blog/2016/04/25/elasticsearch-ingest-node-vs-logstash-performance/
Flow patterns (3 of 5)
files, sockets (syslog?), localhost TCP/UDP → Logagent/Fluentd/rsyslog/syslog-ng → Elasticsearch
Light, scales
No central control
sematext.com/blog/2016/09/13/logstash-alternatives/
Flow patterns (4 of 5)
Filebeat/Logagent/Fluentd/rsyslog/syslog-ng → Kafka → Logstash or a custom consumer → Elasticsearch (and something else)
Good for multiple destinations
More complex
Flow patterns (5 of 5)
Thank you!
Rafał Kuć
rafal.kuc@sematext.com
@kucrafal
Radu Gheorghe
radu.gheorghe@sematext.com
@radu0gheorghe
Sematext
info@sematext.com
http://sematext.com
@sematext
Join Us! We are hiring!
http://sematext.com/jobs
Pictures
https://pixabay.com/get/e831b60920f71c22d2524518a33219c8b66ae3d11eb611429df9c77f/scuba-diving-147683_1280.png
https://pixabay.com/static/uploads/photo/2012/04/18/12/17/firewood-36866_640.png
http://i3.kym-cdn.com/entries/icons/original/000/004/006/y-u-no-guy.jpg
http://memepress.wpgoods.com/wp-content/uploads/2013/06/neutral-feel-like-a-sir-clean-l1.png