Logstash-Elasticsearch-Kibana
How to manage logs
E. Witthauer
November 28, 2015
Why logging
• Debugging
• Metrics
• Monitoring
Old style
• Tail: ssh example.org 'tail -f /var/log/some.log'
• Tools for multiple files: like multitail
• Run commands synchronously in multiple ssh sessions
But this does not scale to more than one file/server, or to automatic statistics:
Better style
Better to have everything in one place, with the option of later analysis
The ELK-Stack
Elasticsearch - Search server that indexes the data (NoSQL DB)
Logstash - Log data processor that transforms and filters the data
Kibana - Web UI for data visualisation and analysis (node.js based)
The infrastructure
1. Read the logs and put them into a Redis DB
2. Read from the Redis DB, filter, and put into Elasticsearch
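Put together, the two steps form this pipeline (a sketch of the architecture described above):

```
log files → Logstash (shipper) → Redis (list "logstash") → Logstash (indexer, filters) → Elasticsearch → Kibana
```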
The infrastructure
Why 2 steps?
• Logs will be read even if Elasticsearch is not active
• Monitor Redis to see how many events there are (e.g. per second)
• Check the event format if we have some index problems (e.g. wrong field value or tag)
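To check the event format while debugging, a temporary stdout output on the indexer helps; a minimal sketch using Logstash's rubydebug codec:

```
output {
  # print every parsed event to the console in a readable form
  stdout { codec => rubydebug }
}
```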
Setup
Logstash
• Install Java (1.8)
• Download Logstash from https://www.elastic.co/downloads/logstash
• Extract the zip file
• Run it: bin/logstash -f logstash.conf (see config file below)
• Or install the deb package and run it
Setup
Redis
• Install Redis from your distribution's package repository
Setup
Elasticsearch
• Install Java (1.8) if not done yet
• Download Elasticsearch from https://www.elastic.co/downloads/elasticsearch
• Extract the zip file
• Run it: bin/elasticsearch
• Or install the deb package and run it
Setup
Kibana
• Download Kibana from https://www.elastic.co/downloads/kibana (Kibana ships with its own node.js runtime, no Java needed)
• Extract the zip file
• Open config/kibana.yml in an editor
• Set the elasticsearch.url to point at your Elasticsearch instance (e.g. localhost or 127.0.0.1)
• Run it: bin/kibana
• Open the URL http://yourhost.com:5601
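The relevant line in config/kibana.yml might look like this (a sketch; the exact key name differs between Kibana versions):

```yaml
# config/kibana.yml
elasticsearch.url: "http://localhost:9200"
```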
Config
Shipper
For the Shipper we create a config file:

input {
  file {
    path => "/var/log/apache2/*access*.log"
    start_position => "beginning"
    type => "apache"
    sincedb_path => "/opt/.sincedb_apache_access"
  }
}
output {
  redis {
    host => "127.0.0.1"
    data_type => "list"
    key => "logstash"
  }
}
Config
Shipper explained
input {...} Configuration for our input
file {...} Specifies a file input (all apache access log files)
path Path to our log files (glob pattern)
start_position We start reading the file from the beginning
type Adds a field type with value apache to the output
sincedb_path Path to the internal database that stores the last reading position in this file(s)
output {...} Configuration for our output
redis {...} Configuration for the redis output
host Redis host address
data_type Specifies that we store the events as a list in redis
key Name of our redis list
Config
Indexer
For the Indexer we create a config file:

input {
  redis {
    host => "127.0.0.1"
    type => "redis-input"
    data_type => "list"
    key => "logstash"
  }
}
filter {
  if [path] =~ "access" { ANALYSE APACHE ACCESS }
  else if [path] =~ "error" { ANALYSE APACHE ERROR }
  else if [type] == "syslog" { ANALYSE SYSLOG }
  else if [type] == "auth" { ANALYSE AUTH LOG }
}
output {
  elasticsearch { }
}
Config
Indexer explained
input {...} Configuration for our input
redis {...} Configuration for the redis input
host Redis host address
type Adds a field type with value redis-input to the output
data_type Specifies that we read the events as a list from redis
key Name of our redis list
filter {...} Our filters for the different events (syslog, apache error, apache access, auth)
if path|type Separate filter configurations for our events (see later)
output {...} Configuration for the elasticsearch output
elasticsearch { } Default configuration for elasticsearch (localhost, no further configuration needed)
Config - Indexer
Apache Access Filter
The Apache Access Filter:

mutate {
  replace => { "type" => "apache_access" }
  remove_tag => [ "_grokparsefailure" ]
  remove_field => [ "tags", "tag", "path" ]
}
grok {
  patterns_dir => "/opt/grok_patterns"
  match => { "message" => "%{VHOSTCOMBINEDAPACHELOG}" }
}
date {
  match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
}
geoip {
  source => "clientip"
}
useragent {
  source => "agent"
}
Config - Indexer
Apache Access Filter
mutate {...} Change field values
replace Replace value of field type with apache_access
remove_tag List of tags to be removed
remove_field List of fields to be removed
grok {...} Parses text and structures it
patterns_dir Path to our pattern files, if we don't use the internal ones
match Field and pattern for matching
date {...} Analyse the timestamp field
geoip Analyse the field clientip with geoip (city, region, ip, etc.)
useragent Analyse the field agent as a browser user agent (OS, major and minor version, browser name, etc.)
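The deck only names the custom VHOSTCOMBINEDAPACHELOG pattern; a plausible definition for a file in /opt/grok_patterns (an assumption here, built on the stock COMBINEDAPACHELOG grok pattern) would be:

```
# /opt/grok_patterns/apache  (hypothetical pattern file)
# a combined access-log line prefixed with the virtual host name
VHOSTCOMBINEDAPACHELOG %{IPORHOST:vhost} %{COMBINEDAPACHELOG}
```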
Config - Indexer
Apache Error Filter
The Apache Error Filter:

grok {
  patterns_dir => "/opt/grok_patterns"
  match => { "message" => "%{APACHEERROR}" }
}
multiline {
  pattern => "^PHP \b(Notice|Warning|Error|Fatal)\b:"
  source => "errorMessage"
  what => "next"
}
multiline {
  pattern => "^PHP[ ]{3,}\d+\. .*"
  source => "errorMessage"
  what => "previous"
}
mutate {
  replace => { "type" => "apache_error", "message" => "%{errorMessage}" }
  ...
}
geoip {
  source => "clientIp"
}
if [request] == "/feed" {
  drop {}
}
Config - Indexer
Apache Error Filter
grok {...} Parses text and structures it
patterns_dir Path to our pattern files
match Field and pattern for matching
multiline {...} Detect if we have a multiline message
pattern The detection pattern
source The field used for detection
what How to handle it (combine with the next/previous message)
mutate {...} Change field values
replace Replace value of field type with apache_error and message with the value of errorMessage
geoip Analyse the field clientIp with geoip
request If the field request has the value /feed, drop the event; we don't need it anymore
Config - Indexer
Syslog/Auth Filter
The Syslog/Auth Filter:

grok {
  match => { "message" => "%{SYSLOGT}" }
  add_field => [ "received_at", "%{@timestamp}" ]
}
syslog_pri { }
Config - Indexer
Syslog/Auth Filter
grok {...} Parses text and structures it
patterns_dir Path to our pattern files
match Field and pattern for matching
add_field Add an additional field
syslog_pri {...} Handle syslog priority levels
Conclusion
• With these config files and two running Logstash instances (shipper and indexer) we have the logs in Elasticsearch
• Kibana can be used for graphs and analyses
Kibana
Combined apache error entry
Kibana
Access graph
Kibana
Access cities, browser and devices
End
That's all
For more info, just search for Kibana, Logstash or Elasticsearch
