Artimon
                  Mathias Herberts - @herberts




Apache Flume (incubating) User Meetup, Hadoop World 2011 NYC Edition
Arkéa Real Time Information Monitoring
Scalable metrics collection and analysis framework




▪ Collects metrics called 'variable instances'
▪ Dynamic discovery, (almost) no conf needed
▪ Rich analysis library
▪ Fits IT and business needs
▪ Adapts to third party metrics
▪ Uses Flume and Kafka for transport
What's in a variable instance?

          name{label0=value0,label1=value1,...}


▪ name is the name of the variable
   linux.proc.diskstats.reads.ms
   hadoop.jobtracker.maps_completed

▪ Labels are text strings, they characterize a variable instance
   Some labels are automatically set: dc, rack, module, context, uuid, ...
   Others are user defined

▪ Variable instances are typed
   INTEGER, DOUBLE, BOOLEAN, STRING

▪ Variable instance values are timestamped
▪ Variable instance values are Thrift objects
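The name{labels} notation above can be sketched with a small helper. This is illustrative only — formatInstance is a hypothetical function, not part of Artimon; labels are sorted so two identical instances always render the same way.

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class Main {
    // Hypothetical helper: render a variable instance as name{k0=v0,k1=v1,...}
    // with labels in sorted order so the rendering is canonical.
    static String formatInstance(String name, Map<String, String> labels) {
        String body = new TreeMap<>(labels).entrySet().stream()
            .map(e -> e.getKey() + "=" + e.getValue())
            .collect(Collectors.joining(","));
        return name + "{" + body + "}";
    }

    public static void main(String[] args) {
        Map<String, String> labels = new TreeMap<>();
        labels.put("dc", "brest");
        labels.put("rack", "r12");
        // -> linux.proc.diskstats.reads.ms{dc=brest,rack=r12}
        System.out.println(formatInstance("linux.proc.diskstats.reads.ms", labels));
    }
}
```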
Exporting metrics


▪ Metrics are exported via a Thrift service
▪ Each MonitoringContext (context=...) exposes a service
▪ MCs register their dynamic port in ZooKeeper
   /zk/artimon/contexts/xxx/ip:port:uuid

▪ MonitoringContext wrapped in a BookKeeper class
   public interface ArtimonBookKeeper {
     public void setIntegerVar(String name, final Map<String,String> labels, long value);
     public long addToIntegerVar(String name, final Map<String,String> labels, long delta);
     public Long getIntegerVar(String name, final Map<String,String> labels);
     public void removeIntegerVar(String name, final Map<String,String> labels);

     public void setDoubleVar(String name, final Map<String,String> labels, double value);
     public double addToDoubleVar(String name, final Map<String,String> labels, double delta);
     public Double getDoubleVar(String name, final Map<String,String> labels);
     public void removeDoubleVar(String name, final Map<String,String> labels);

     public void setStringVar(String name, final Map<String,String> labels, String value);
     public String getStringVar(String name, final Map<String,String> labels);
     public void removeStringVar(String name, final Map<String,String> labels);

     public void setBooleanVar(String name, final Map<String,String> labels, boolean value);
     public Boolean getBooleanVar(String name, final Map<String,String> labels);
     public void removeBooleanVar(String name, final Map<String,String> labels);
   }
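A toy in-memory implementation of the integer variants illustrates the set / addTo / get / remove contract of the interface above. This is an assumption for illustration — the real BookKeeper wraps a MonitoringContext backed by the Thrift service.

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentHashMap;

public class Main {
    // Toy in-memory book keeper covering only the integer variants.
    static class InMemoryBookKeeper {
        private final Map<String, Long> ints = new ConcurrentHashMap<>();

        // Sorted labels give a stable key per variable instance.
        private static String key(String name, Map<String, String> labels) {
            return name + new TreeMap<>(labels);
        }

        public void setIntegerVar(String name, Map<String, String> labels, long value) {
            ints.put(key(name, labels), value);
        }

        public long addToIntegerVar(String name, Map<String, String> labels, long delta) {
            return ints.merge(key(name, labels), delta, Long::sum);
        }

        public Long getIntegerVar(String name, Map<String, String> labels) {
            return ints.get(key(name, labels));
        }

        public void removeIntegerVar(String name, Map<String, String> labels) {
            ints.remove(key(name, labels));
        }
    }

    public static void main(String[] args) {
        InMemoryBookKeeper bk = new InMemoryBookKeeper();
        Map<String, String> labels = Map.of("context", "demo");
        bk.setIntegerVar("requests", labels, 10);
        bk.addToIntegerVar("requests", labels, 5);
        System.out.println(bk.getIntegerVar("requests", labels)); // 15
    }
}
```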
Exporting metrics


▪ Thrift service returns the latest values of known instances
▪ ZooKeeper not mandatory, can use a fixed port
▪ Artimon written in Java
▪ Checklist for porting to other languages
   ▪ Thrift support

   ▪ Optional ZooKeeper support
Collecting Metrics


▪ Flume launched on every machine
▪ 'artimon' source
   artimon(hosts, contexts, vars[, polling_interval])
    e.g. artimon("self", "*", "~.*")

   ▪ Watches ZooKeeper for contexts to poll

   ▪ Periodically collects latest values

▪ 'artimonProxy' decorator
   artimonProxy([[port],[ttl]])

   ▪ Exposes all collected metrics via a local port (No ZooKeeper, no loop)
Collecting Metrics


▪ Simulated flow using flume.flow event attribute
   artimon(...) | artimonProxy(...) value("flume.flow", "artimon")...

▪ Events batched and gzipped
   ... value("flume.flow", "artimon") batch(100,100) gzip() ...

▪ Kafka sink
   kafkasink(topic, propname=value...)

    ... gzip() < failChain("{ lazyOpen => { stubbornAppend => %s } }",
                  "kafkasink(\"flume-artimon\",\"zk.connect=quorum:2181/zk/kafka/prod\")")
                ? diskFailover("-kafka-flume-artimon")
                  insistentAppend stubbornAppend insistentOpen
                  failChain("{ lazyOpen => { stubbornAppend => %s } }",
                  "kafkasink(\"flume-artimon\",\"zk.connect=quorum:2181/zk/kafka/prod\")") >;


                      ~ kafkaDFOChain
Consuming Metrics


▪ Kafka source
   kafkasource(topic, propname=value...)

▪ Custom BytesWritableEscapedSeqFileEventSink
   bwseqfile(filename[, idle[, maxage]])
   bwseqfile("hdfs://nn/hdfs/data/artimon/%Y/%m/%d/flume-artimon");

   ▪ N archivers in a single Kafka consumer group (same groupid)
   ▪ Metrics stored in HDFS as serialized Thrift in BytesWritables
   ▪ Can add archivers if metrics flow increases
   ▪ Ability to manipulate those metrics using Pig
Consuming Metrics


▪ In-Memory history data (VarHistoryMemStore, VHMS)
  artimonVHMSDecorator(nthreads[0],
                       bucketspan[60000],
                       bucketcount[60],
                       gc_grace_period[600000],
                       port[27847],
                       gc_period[60000],
                       get_limit[100000]) null;

  ▪ Each VHMS in its own Kafka consumer group (each gets all metrics)
  ▪ Multiple VHMS with different granularities
      60x1', 48x5', 96x15', 72x24h
  ▪ Filter to ignore some metrics for some VHMS
     artimonFilter("!~linux.proc.pid.*")
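The bucketspan/bucketcount pair determines how much history a VHMS retains. A quick sanity check of the granularities listed above — retentionMs is a hypothetical helper, not an Artimon API:

```java
public class Main {
    // Each VHMS keeps bucketcount buckets of bucketspan milliseconds,
    // so the retained history is simply bucketspan * bucketcount.
    static long retentionMs(long bucketspanMs, int bucketcount) {
        return bucketspanMs * bucketcount;
    }

    public static void main(String[] args) {
        long MIN = 60_000L, HOUR = 60 * MIN, DAY = 24 * HOUR;
        // The four granularities from the slide: 60x1', 48x5', 96x15', 72x24h
        System.out.println(retentionMs(1 * MIN, 60) / HOUR);   // 1 hour
        System.out.println(retentionMs(5 * MIN, 48) / HOUR);   // 4 hours
        System.out.println(retentionMs(15 * MIN, 96) / HOUR);  // 24 hours
        System.out.println(retentionMs(DAY, 72) / DAY);        // 72 days
    }
}
```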
Why Kafka?


▪ Initially used tsink/rpcSource
   ▪ No ZooKeeper use for Flume (avoid flapping)
   ▪ Collector load balancing using DNS
   ▪ Worked fine for some time...

▪ But as metrics volume was increasing...
   ▪ DNS load balancing not ideal (herd effect when restarting collectors)
   ▪ Flume's push architecture got in the way
      Slowdowns not considered failures
      Had to add mechanisms for dropping metrics when congested
Why Kafka?


▪ Kafka to the rescue! Source/sink coded in less than a day
   ▪ Acts as a buffer between metrics producers and consumers
   ▪ ZooKeeper based discovery and load balancing
   ▪ Easily scalable, just add brokers

▪ Performance has increased
   ▪ Producers now push their metrics in less than 2s
   ▪ VHMS/Archivers consume at their pace with no producer slowdown
       => 1.3M metrics in ~10s


▪ Ability to go back in time when restarting a VHMS
▪ Flume still valuable, notably for DFO (collect metrics during NP)
▪ Artimon [pull] Flume [push] Kafka [pull] Flume
Analyzing Metrics


▪ Groovy library
   ▪ Talks to a VHMS to retrieve time series
   ▪ Manipulates time series, individually or in bulk

▪ Groovy scripts for monitoring
   ▪ Use the Artimon library

   ▪ IT Monitoring
   ▪ BAM (Business Activity Monitoring)

▪ Ability to generate alerts

   ▪ Each alert is an Artimon metric (archived for SLA compliance)
   ▪ Propagated to Nagios; Kafka in the works (CEP for alert manager)
Analyzing Metrics


▪ Bulk time series manipulation
   ▪ Equivalence classes based on labels (same values, same class)
   ▪ Apply ops (+ - / * closure) to 2 variables based on equivalence classes

          import static com.arkea.artimon.groovy.LibArtimon.*

          vhmssrc = export['vhms.60']

          dfvars = fetch(vhmssrc,'~^linux.df.bytes.(free|capacity)$',[:],60000,-30000)

          dfvars = select(sel_isfinite(), dfvars)

          free = select(dfvars, '=linux.df.bytes.free', [:])
          capacity = select(sel_gt(0), select(dfvars, '=linux.df.bytes.capacity', [:]))

          usage = sort(apply(op_div(), free, capacity, [], 'freespace'))

          used50 = select(sel_lt(0.50), usage)
          used75 = select(sel_lt(0.25), usage)
          used90 = select(sel_lt(0.10), usage)
          used95 = select(sel_lt(0.05), usage)

          println 'Volumes occupied > 50%: ' + used50.size()
          println 'Volumes occupied > 75%: ' + used75.size()
          println 'Volumes occupied > 90%: ' + used90.size()
          println 'Volumes occupied > 95%: ' + used95.size()

          println 'Total volumes: ' + usage.size()


                        Same script can handle any number of volumes, dynamically
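The equivalence-class apply can be sketched in Java on single values. This is a simplification — the real apply() pairs whole time series; the string keys below merely stand in for label sets, and applyDiv is a hypothetical helper.

```java
import java.util.HashMap;
import java.util.Map;

public class Main {
    // Series with identical labels form one equivalence class; a binary op
    // (here division) pairs the two inputs class by class. Classes missing
    // from either side, or with a non-positive denominator, are dropped.
    static Map<String, Double> applyDiv(Map<String, Double> a, Map<String, Double> b) {
        Map<String, Double> out = new HashMap<>();
        for (Map.Entry<String, Double> e : a.entrySet()) {
            Double denom = b.get(e.getKey()); // same labels => same class
            if (denom != null && denom > 0) out.put(e.getKey(), e.getValue() / denom);
        }
        return out;
    }

    public static void main(String[] args) {
        // Keys stand in for label sets such as {host=web1,fs=/var}
        Map<String, Double> free = Map.of("{host=web1,fs=/var}", 25.0,
                                          "{host=web2,fs=/var}", 80.0);
        Map<String, Double> capacity = Map.of("{host=web1,fs=/var}", 100.0,
                                              "{host=web2,fs=/var}", 100.0);
        Map<String, Double> usage = applyDiv(free, capacity);
        long below50 = usage.values().stream().filter(v -> v < 0.50).count();
        System.out.println("Volumes occupied > 50%: " + below50); // 1
    }
}
```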
Analyzing Metrics


▪ Map paradigm
   ▪ Apply a Groovy closure on n consecutive values of a time series
     map(closure, vars, nticks, name)
     Predefined map_delta(), map_rate(), map_{min,max,mean}()
     map(map_delta(), vars, 2, '+:delta')

▪ Reduce paradigm
  ▪ Apply a Groovy closure on equivalence classes
   ▪ Generate one time series for each equivalence class
     reduceby(closure, vars, bylabels, name, relabels)
     Predefined red_sum(), red_{min,max,mean,sd}()
     reduceby(red_mean(), temps, ['dc','rack'], '+:rackavg',[:])
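A minimal sketch of the map paradigm with a map_delta-style closure, assuming a window that slides one tick at a time over a single series (the actual library's semantics may differ):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class Main {
    // Slide a window of nticks consecutive values along one series and
    // emit one value per window position.
    static List<Long> map(Function<List<Long>, Long> closure, List<Long> ticks, int nticks) {
        List<Long> out = new ArrayList<>();
        for (int i = 0; i + nticks <= ticks.size(); i++) {
            out.add(closure.apply(ticks.subList(i, i + nticks)));
        }
        return out;
    }

    // map_delta is then just "last minus first" over a window of 2.
    static Function<List<Long>, Long> mapDelta() {
        return w -> w.get(w.size() - 1) - w.get(0);
    }

    public static void main(String[] args) {
        // A monotonically increasing counter sampled five times
        List<Long> counter = List.of(100L, 104L, 110L, 110L, 125L);
        System.out.println(map(mapDelta(), counter, 2)); // [4, 6, 0, 15]
    }
}
```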
Analyzing Metrics


▪ A whole lot more
   getvars      selectbylabels   relabel
   fetch        partition        fillprevious
   find         top              fillnext
   findlabels   bottom           fillvalue
   display      outliers         map
   makevar      dropOutliers     reduceby
   nticks       resample         settype
   timespan     normalize        triggerAlert
   lasttick     standardize      clearAlert
   values       sort             CDF
   targets      scalar           PDF
   getlabels    ntrim            Percentile
   dump         timetrim         sparkline
   select       apply            ...
Third Party Metrics


▪ JMX Agent
       ▪ Expose any JMX metrics as Artimon metrics
jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-artimon-0}    525762846
jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-artimon-0}    511880426
jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-artimon-0}    492037666
jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-artimon-0}    436896839
jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-artimon-0}    333034505
jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-syslog-0}    163186980
jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-syslog-0}    163047011
jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-syslog-0}    162916713
jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-syslog-0}    162704303
jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-syslog-0}    162565421
jmx.kafka.network.socketserverstats:numfetchrequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats}     8835417
jmx.kafka.network.socketserverstats:numfetchrequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats}     8794654
jmx.kafka.network.socketserverstats:numfetchrequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats}     8793525
jmx.kafka.network.socketserverstats:numfetchrequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats}     8741181
jmx.kafka.network.socketserverstats:numfetchrequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats}     8019699
jmx.kafka.network.socketserverstats:numproducerequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats}     51999885
jmx.kafka.network.socketserverstats:numproducerequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats}     51991203
jmx.kafka.network.socketserverstats:numproducerequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats}     51986318
jmx.kafka.network.socketserverstats:numproducerequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats}     51980976
jmx.kafka.network.socketserverstats:numproducerequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats}     48008009
Third Party Metrics


▪ Flume artimonReader source
   artimonReader(context, periodicity, file0[, fileX])

   ▪ Periodically reads files containing text representation of metrics
       [timestamp] name{labels} value


   ▪ Exposes those metrics via the standard mechanism
   ▪ Simply create scripts which write those files and add them to crontab
   ▪ Successfully used for NAS, Samba, MQSeries, SNMP, MySQL, ...

       1319718601000   mysql.bytes_received{db=mysql-roller} 296493399
       1319718601000   mysql.bytes_sent{db=mysql-roller} 3655368849
       1319718601000   mysql.com_admin_commands{db=mysql-roller} 673028
       1319718601000   mysql.com_alter_db{db=mysql-roller} 0
       1319718601000   mysql.com_alter_table{db=mysql-roller} 0
       1319718601000   mysql.com_analyze{db=mysql-roller} 0
       1319718601000   mysql.com_backup_table{db=mysql-roller} 0
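A parser for that line format can be sketched as follows — parse and its regular expression are assumptions for illustration, not the actual artimonReader code:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class Main {
    // Matches one "[timestamp] name{labels} value" line; the leading
    // timestamp is optional, per the format on the slide.
    static final Pattern LINE = Pattern.compile(
        "^\\s*(?:(\\d+)\\s+)?([\\w.:-]+)\\{([^}]*)\\}\\s+(\\S+)\\s*$");

    // Returns {timestamp-or-null, name, labels, value}, or null on no match.
    static String[] parse(String line) {
        Matcher m = LINE.matcher(line);
        if (!m.matches()) return null;
        return new String[] { m.group(1), m.group(2), m.group(3), m.group(4) };
    }

    public static void main(String[] args) {
        String[] f = parse("1319718601000 mysql.bytes_sent{db=mysql-roller} 3655368849");
        System.out.println(f[0] + " / " + f[1] + " / " + f[2] + " / " + f[3]);
    }
}
```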
PostMortem Analysis


▪ Extract specific metrics from HDFS
   ▪ Simple Pig script

▪ Load extracted metrics into a local VHMS
▪ Interact with VHMS using Groovy
   ▪ Existing scripts can be run directly if parameterized correctly

▪ Interesting use cases
   ▪ Did we respect our SLAs? Would the new SLAs be respected too?
   ▪ What happened pre/post incident?
   ▪ Would a modified alert condition have triggered an alert?
Should we OpenSource this?




  http://www.arkea.com/



         @herberts

Contenu connexe

Tendances

AnyMQ, Hippie, and the real-time web
AnyMQ, Hippie, and the real-time webAnyMQ, Hippie, and the real-time web
AnyMQ, Hippie, and the real-time web
clkao
 
John Melesky - Federating Queries Using Postgres FDW @ Postgres Open
John Melesky - Federating Queries Using Postgres FDW @ Postgres OpenJohn Melesky - Federating Queries Using Postgres FDW @ Postgres Open
John Melesky - Federating Queries Using Postgres FDW @ Postgres Open
PostgresOpen
 
Cassandra summit 2013 - DataStax Java Driver Unleashed!
Cassandra summit 2013 - DataStax Java Driver Unleashed!Cassandra summit 2013 - DataStax Java Driver Unleashed!
Cassandra summit 2013 - DataStax Java Driver Unleashed!
Michaël Figuière
 
C*ollege Credit: Creating Your First App in Java with Cassandra
C*ollege Credit: Creating Your First App in Java with CassandraC*ollege Credit: Creating Your First App in Java with Cassandra
C*ollege Credit: Creating Your First App in Java with Cassandra
DataStax
 

Tendances (20)

AnyMQ, Hippie, and the real-time web
AnyMQ, Hippie, and the real-time webAnyMQ, Hippie, and the real-time web
AnyMQ, Hippie, and the real-time web
 
2013 0928 programming by cuda
2013 0928 programming by cuda2013 0928 programming by cuda
2013 0928 programming by cuda
 
Performance Profiling in Rust
Performance Profiling in RustPerformance Profiling in Rust
Performance Profiling in Rust
 
Nginx-lua
Nginx-luaNginx-lua
Nginx-lua
 
John Melesky - Federating Queries Using Postgres FDW @ Postgres Open
John Melesky - Federating Queries Using Postgres FDW @ Postgres OpenJohn Melesky - Federating Queries Using Postgres FDW @ Postgres Open
John Melesky - Federating Queries Using Postgres FDW @ Postgres Open
 
Cassandra summit 2013 - DataStax Java Driver Unleashed!
Cassandra summit 2013 - DataStax Java Driver Unleashed!Cassandra summit 2013 - DataStax Java Driver Unleashed!
Cassandra summit 2013 - DataStax Java Driver Unleashed!
 
SCALE 15x Minimizing PostgreSQL Major Version Upgrade Downtime
SCALE 15x Minimizing PostgreSQL Major Version Upgrade DowntimeSCALE 15x Minimizing PostgreSQL Major Version Upgrade Downtime
SCALE 15x Minimizing PostgreSQL Major Version Upgrade Downtime
 
Using ngx_lua in UPYUN 2
Using ngx_lua in UPYUN 2Using ngx_lua in UPYUN 2
Using ngx_lua in UPYUN 2
 
Workshop on command line tools - day 2
Workshop on command line tools - day 2Workshop on command line tools - day 2
Workshop on command line tools - day 2
 
A22 Introduction to DTrace by Kyle Hailey
A22 Introduction to DTrace by Kyle HaileyA22 Introduction to DTrace by Kyle Hailey
A22 Introduction to DTrace by Kyle Hailey
 
C*ollege Credit: Creating Your First App in Java with Cassandra
C*ollege Credit: Creating Your First App in Java with CassandraC*ollege Credit: Creating Your First App in Java with Cassandra
C*ollege Credit: Creating Your First App in Java with Cassandra
 
Workshop on command line tools - day 1
Workshop on command line tools - day 1Workshop on command line tools - day 1
Workshop on command line tools - day 1
 
Workshop Infrastructure as Code - Suestra
Workshop Infrastructure as Code - SuestraWorkshop Infrastructure as Code - Suestra
Workshop Infrastructure as Code - Suestra
 
2 BytesC++ course_2014_c3_ function basics&parameters and overloading
2 BytesC++ course_2014_c3_ function basics&parameters and overloading2 BytesC++ course_2014_c3_ function basics&parameters and overloading
2 BytesC++ course_2014_c3_ function basics&parameters and overloading
 
Lua tech talk
Lua tech talkLua tech talk
Lua tech talk
 
Pepe Vila - Cache and Syphilis [rooted2019]
Pepe Vila - Cache and Syphilis [rooted2019]Pepe Vila - Cache and Syphilis [rooted2019]
Pepe Vila - Cache and Syphilis [rooted2019]
 
Migrating KSM page causes the VM lock up as the KSM page merging list is too ...
Migrating KSM page causes the VM lock up as the KSM page merging list is too ...Migrating KSM page causes the VM lock up as the KSM page merging list is too ...
Migrating KSM page causes the VM lock up as the KSM page merging list is too ...
 
Ordered Record Collection
Ordered Record CollectionOrdered Record Collection
Ordered Record Collection
 
Top Node.js Metrics to Watch
Top Node.js Metrics to WatchTop Node.js Metrics to Watch
Top Node.js Metrics to Watch
 
ClickHouse Unleashed 2020: Our Favorite New Features for Your Analytical Appl...
ClickHouse Unleashed 2020: Our Favorite New Features for Your Analytical Appl...ClickHouse Unleashed 2020: Our Favorite New Features for Your Analytical Appl...
ClickHouse Unleashed 2020: Our Favorite New Features for Your Analytical Appl...
 

En vedette

Big Data - Open Coffee Brest - 20121121
Big Data - Open Coffee Brest - 20121121Big Data - Open Coffee Brest - 20121121
Big Data - Open Coffee Brest - 20121121
Mathias Herberts
 
Mathias Herberts fait le retour d'expérience Hadoop au Crédit Mutuel Arkéa
Mathias Herberts fait le retour d'expérience Hadoop au Crédit Mutuel ArkéaMathias Herberts fait le retour d'expérience Hadoop au Crédit Mutuel Arkéa
Mathias Herberts fait le retour d'expérience Hadoop au Crédit Mutuel Arkéa
Modern Data Stack France
 

En vedette (11)

Dev ops Monitoring
Dev ops   MonitoringDev ops   Monitoring
Dev ops Monitoring
 
Big Data - Open Coffee Brest - 20121121
Big Data - Open Coffee Brest - 20121121Big Data - Open Coffee Brest - 20121121
Big Data - Open Coffee Brest - 20121121
 
The Hadoop Ecosystem
The Hadoop EcosystemThe Hadoop Ecosystem
The Hadoop Ecosystem
 
IoT Silicon Valley - Cityzen Sciences and Cityzen Data presentation
IoT Silicon Valley - Cityzen Sciences and Cityzen Data presentationIoT Silicon Valley - Cityzen Sciences and Cityzen Data presentation
IoT Silicon Valley - Cityzen Sciences and Cityzen Data presentation
 
Programmation fonctionnelle
Programmation fonctionnelleProgrammation fonctionnelle
Programmation fonctionnelle
 
Scala : programmation fonctionnelle
Scala : programmation fonctionnelleScala : programmation fonctionnelle
Scala : programmation fonctionnelle
 
The Lambda Calculus and The JavaScript
The Lambda Calculus and The JavaScriptThe Lambda Calculus and The JavaScript
The Lambda Calculus and The JavaScript
 
Programmation fonctionnelle en JavaScript
Programmation fonctionnelle en JavaScriptProgrammation fonctionnelle en JavaScript
Programmation fonctionnelle en JavaScript
 
Comprendre la programmation fonctionnelle, Blend Web Mix le 02/11/2016
Comprendre la programmation fonctionnelle, Blend Web Mix le 02/11/2016Comprendre la programmation fonctionnelle, Blend Web Mix le 02/11/2016
Comprendre la programmation fonctionnelle, Blend Web Mix le 02/11/2016
 
Cisco OpenSOC
Cisco OpenSOCCisco OpenSOC
Cisco OpenSOC
 
Mathias Herberts fait le retour d'expérience Hadoop au Crédit Mutuel Arkéa
Mathias Herberts fait le retour d'expérience Hadoop au Crédit Mutuel ArkéaMathias Herberts fait le retour d'expérience Hadoop au Crédit Mutuel Arkéa
Mathias Herberts fait le retour d'expérience Hadoop au Crédit Mutuel Arkéa
 

Similaire à Artimon - Apache Flume (incubating) NYC Meetup 20111108

Store and Process Big Data with Hadoop and Cassandra
Store and Process Big Data with Hadoop and CassandraStore and Process Big Data with Hadoop and Cassandra
Store and Process Big Data with Hadoop and Cassandra
Deependra Ariyadewa
 
Flink Streaming Hadoop Summit San Jose
Flink Streaming Hadoop Summit San JoseFlink Streaming Hadoop Summit San Jose
Flink Streaming Hadoop Summit San Jose
Kostas Tzoumas
 

Similaire à Artimon - Apache Flume (incubating) NYC Meetup 20111108 (20)

Kafka Streams: the easiest way to start with stream processing
Kafka Streams: the easiest way to start with stream processingKafka Streams: the easiest way to start with stream processing
Kafka Streams: the easiest way to start with stream processing
 
Flux and InfluxDB 2.0 by Paul Dix
Flux and InfluxDB 2.0 by Paul DixFlux and InfluxDB 2.0 by Paul Dix
Flux and InfluxDB 2.0 by Paul Dix
 
Clojure ♥ cassandra
Clojure ♥ cassandra Clojure ♥ cassandra
Clojure ♥ cassandra
 
Realtime Statistics based on Apache Storm and RocketMQ
Realtime Statistics based on Apache Storm and RocketMQRealtime Statistics based on Apache Storm and RocketMQ
Realtime Statistics based on Apache Storm and RocketMQ
 
KSQL - Stream Processing simplified!
KSQL - Stream Processing simplified!KSQL - Stream Processing simplified!
KSQL - Stream Processing simplified!
 
Containerizing Distributed Pipes
Containerizing Distributed PipesContainerizing Distributed Pipes
Containerizing Distributed Pipes
 
Store and Process Big Data with Hadoop and Cassandra
Store and Process Big Data with Hadoop and CassandraStore and Process Big Data with Hadoop and Cassandra
Store and Process Big Data with Hadoop and Cassandra
 
End to End Processing of 3.7 Million Telemetry Events per Second using Lambda...
End to End Processing of 3.7 Million Telemetry Events per Second using Lambda...End to End Processing of 3.7 Million Telemetry Events per Second using Lambda...
End to End Processing of 3.7 Million Telemetry Events per Second using Lambda...
 
Introduction To Apache Mesos
Introduction To Apache MesosIntroduction To Apache Mesos
Introduction To Apache Mesos
 
Solr @ Etsy - Apache Lucene Eurocon
Solr @ Etsy - Apache Lucene EuroconSolr @ Etsy - Apache Lucene Eurocon
Solr @ Etsy - Apache Lucene Eurocon
 
Productionalizing spark streaming applications
Productionalizing spark streaming applicationsProductionalizing spark streaming applications
Productionalizing spark streaming applications
 
Scylla Summit 2018: Introducing ValuStor, A Memcached Alternative Made to Run...
Scylla Summit 2018: Introducing ValuStor, A Memcached Alternative Made to Run...Scylla Summit 2018: Introducing ValuStor, A Memcached Alternative Made to Run...
Scylla Summit 2018: Introducing ValuStor, A Memcached Alternative Made to Run...
 
OSMC 2014: Monitoring VoIP Systems | Sebastian Damm
OSMC 2014: Monitoring VoIP Systems | Sebastian DammOSMC 2014: Monitoring VoIP Systems | Sebastian Damm
OSMC 2014: Monitoring VoIP Systems | Sebastian Damm
 
Monitoring VoIP Systems
Monitoring VoIP SystemsMonitoring VoIP Systems
Monitoring VoIP Systems
 
Immutable Deployments with AWS CloudFormation and AWS Lambda
Immutable Deployments with AWS CloudFormation and AWS LambdaImmutable Deployments with AWS CloudFormation and AWS Lambda
Immutable Deployments with AWS CloudFormation and AWS Lambda
 
Flink Streaming Hadoop Summit San Jose
Flink Streaming Hadoop Summit San JoseFlink Streaming Hadoop Summit San Jose
Flink Streaming Hadoop Summit San Jose
 
Building and Deploying Application to Apache Mesos
Building and Deploying Application to Apache MesosBuilding and Deploying Application to Apache Mesos
Building and Deploying Application to Apache Mesos
 
Deep Learning for Computer Vision: Software Frameworks (UPC 2016)
Deep Learning for Computer Vision: Software Frameworks (UPC 2016)Deep Learning for Computer Vision: Software Frameworks (UPC 2016)
Deep Learning for Computer Vision: Software Frameworks (UPC 2016)
 
Data Pipeline at Tapad
Data Pipeline at TapadData Pipeline at Tapad
Data Pipeline at Tapad
 
Porting a Streaming Pipeline from Scala to Rust
Porting a Streaming Pipeline from Scala to RustPorting a Streaming Pipeline from Scala to Rust
Porting a Streaming Pipeline from Scala to Rust
 

Plus de Mathias Herberts (6)

2019-09-25 Paris Time Series Meetup - Warp 10 - Advanced Time Series Technolo...
2019-09-25 Paris Time Series Meetup - Warp 10 - Advanced Time Series Technolo...2019-09-25 Paris Time Series Meetup - Warp 10 - Advanced Time Series Technolo...
2019-09-25 Paris Time Series Meetup - Warp 10 - Advanced Time Series Technolo...
 
20170516 hug france-warp10-time-seriesanalysisontopofhadoop
20170516 hug france-warp10-time-seriesanalysisontopofhadoop20170516 hug france-warp10-time-seriesanalysisontopofhadoop
20170516 hug france-warp10-time-seriesanalysisontopofhadoop
 
Big Data Tribute
Big Data TributeBig Data Tribute
Big Data Tribute
 
Hadoop Pig Syntax Card
Hadoop Pig Syntax CardHadoop Pig Syntax Card
Hadoop Pig Syntax Card
 
Hadoop Pig
Hadoop PigHadoop Pig
Hadoop Pig
 
WebScale Computing and Big Data a Pragmatic Approach
WebScale Computing and Big Data a Pragmatic ApproachWebScale Computing and Big Data a Pragmatic Approach
WebScale Computing and Big Data a Pragmatic Approach
 

Dernier

+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
?#DUbAI#??##{{(☎️+971_581248768%)**%*]'#abortion pills for sale in dubai@
 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire business
panagenda
 

Dernier (20)

Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
 
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, AdobeApidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
 
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
 
DBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor PresentationDBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor Presentation
 
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
 
Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...
 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire business
 
MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...
 
Ransomware_Q4_2023. The report. [EN].pdf
Ransomware_Q4_2023. The report. [EN].pdfRansomware_Q4_2023. The report. [EN].pdf
Ransomware_Q4_2023. The report. [EN].pdf
 
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data DiscoveryTrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdf
 
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodPolkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
 
ICT role in 21st century education and its challenges
ICT role in 21st century education and its challengesICT role in 21st century education and its challenges
ICT role in 21st century education and its challenges
 
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
 

Artimon - Apache Flume (incubating) NYC Meetup 20111108

  • 1. Artimon Mathias Herberts - @herberts Apache Flume (incubating) User Meetup, Hadoop World 2011 NYC Edition
  • 2. Arkéa Real Time Information Monitoring
  • 3. Scalable metrics collection and analysis framework ▪ Collects metrics called 'variable instances' ▪ Dynamic discovery, (almost) no conf needed ▪ Rich analysis library ▪ Fits IT and business needs ▪ Adapts to third party metrics ▪ Uses Flume and Kafka for transport
  • 4. What's in a variable instance? name{label0=value0,label1=value1,...} ▪ name is the name of the variable linux.proc.diskstats.reads.ms hadoop.jobtracker.maps_completed ▪ Labels are text strings, they characterize a variable instance Some labels are automatically set dc, rack, module, context, uuid, ... Others are user defined ▪ Variable instances are typed INTEGER, DOUBLE, BOOLEAN, STRING ▪ Variable instance values are timestamped ▪ Variable instance values are Thrift objects
  • 5. Exporting metrics
       ▪ Metrics are exported via a Thrift service
       ▪ Each MonitoringContext (context=...) exposes a service
       ▪ MCs register their dynamic port in ZooKeeper
         /zk/artimon/contexts/xxx/ip:port:uuid
       ▪ MonitoringContext wrapped in a BookKeeper class
         public interface ArtimonBookKeeper {
           public void setIntegerVar(String name, final Map<String,String> labels, long value);
           public long addToIntegerVar(String name, final Map<String,String> labels, long delta);
           public Long getIntegerVar(String name, final Map<String,String> labels);
           public void removeIntegerVar(String name, final Map<String,String> labels);

           public void setDoubleVar(String name, final Map<String,String> labels, double value);
           public double addToDoubleVar(String name, final Map<String,String> labels, double delta);
           public Double getDoubleVar(String name, final Map<String,String> labels);
           public void removeDoubleVar(String name, final Map<String,String> labels);

           public void setStringVar(String name, final Map<String,String> labels, String value);
           public String getStringVar(String name, final Map<String,String> labels);
           public void removeStringVar(String name, final Map<String,String> labels);

           public void setBooleanVar(String name, final Map<String,String> labels, boolean value);
           public Boolean getBooleanVar(String name, final Map<String,String> labels);
           public void removeBooleanVar(String name, final Map<String,String> labels);
         }
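A minimal in-memory sketch of the integer-variable half of the ArtimonBookKeeper interface above can clarify how instances are keyed. The composite key (name plus sorted labels) is an assumption for illustration; the real implementation may key instances differently.

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative in-memory BookKeeper covering only the integer methods.
// Keying by name + sorted labels is an assumption, not the actual design.
public class InMemoryBookKeeper {
    private final Map<String, Long> integerVars = new ConcurrentHashMap<>();

    private static String key(String name, Map<String, String> labels) {
        // Sort labels so that key equality matches label-set equality
        return name + new TreeMap<>(labels);
    }

    public void setIntegerVar(String name, Map<String, String> labels, long value) {
        integerVars.put(key(name, labels), value);
    }

    public long addToIntegerVar(String name, Map<String, String> labels, long delta) {
        return integerVars.merge(key(name, labels), delta, Long::sum);
    }

    public Long getIntegerVar(String name, Map<String, String> labels) {
        return integerVars.get(key(name, labels));
    }

    public void removeIntegerVar(String name, Map<String, String> labels) {
        integerVars.remove(key(name, labels));
    }
}
```

With this shape, instrumented code only ever calls set/add on a name and label map, and the latest values are what the Thrift service would hand back to pollers.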
  • 6. Exporting metrics
       ▪ Thrift service returns the latest values of known instances
       ▪ ZooKeeper not mandatory, can use a fixed port
       ▪ Artimon written in Java
       ▪ Checklist for porting to other languages
         ▪ Thrift support
         ▪ Optional ZooKeeper support
  • 7. Collecting Metrics
       ▪ Flume launched on every machine
       ▪ 'artimon' source
         artimon(hosts, contexts, vars[, polling_interval])
         eg artimon("self", "*", "~.*")
       ▪ Watches ZooKeeper for contexts to poll
       ▪ Periodically collects latest values
       ▪ 'artimonProxy' decorator
         artimonProxy([[port],[ttl]])
       ▪ Exposes all collected metrics via a local port (no ZooKeeper, no loop)
  • 8. Collecting Metrics
       ▪ Simulated flow using flume.flow event attribute
         artimon(...) | artimonProxy(...) value("flume.flow", "artimon") ...
       ▪ Events batched and gzipped
         ... value("flume.flow", "artimon") batch(100,100) gzip() ...
       ▪ Kafka sink
         kafkasink(topic, propname=value...)
         ... gzip()
         < failChain("{ lazyOpen => { stubbornAppend => %s } } ",
                     "kafkasink("flume-artimon","zk.connect=quorum:2181/zk/kafka/prod")")
           ? diskFailover("-kafka-flume-artimon")
             insistentAppend stubbornAppend insistentOpen
             failChain("{ lazyOpen => { stubbornAppend => %s } } ",
                       "kafkasink("flume-artimon","zk.connect=quorum:2181/zk/kafka/prod")") >;
         ~ kafkaDFOChain
  • 9. Consuming Metrics
       ▪ Kafka source
         kafkasource(topic, propname=value...)
       ▪ Custom BytesWritableEscapedSeqFileEventSink
         bwseqfile(filename[, idle[, maxage]])
         bwseqfile("hdfs://nn/hdfs/data/artimon/%Y/%m/%d/flume-artimon");
       ▪ N archivers in a single Kafka consumer group (same groupid)
       ▪ Metrics stored in HDFS as serialized Thrift in BytesWritables
       ▪ Can add archivers if metrics flow increases
       ▪ Ability to manipulate those metrics using Pig
  • 10. Consuming Metrics
        ▪ In-Memory history data (VarHistoryMemStore, VHMS)
          artimonVHMSDecorator(nthreads[0], bucketspan[60000], bucketcount[60],
                               gc_grace_period[600000], port[27847],
                               gc_period[60000], get_limit[100000]) null;
        ▪ Each VHMS in its own Kafka consumer group (each gets all metrics)
        ▪ Multiple VHMS with different granularities
          60x1', 48x5', 96x15', 72x24h
        ▪ Filter to ignore some metrics for some VHMS
          artimonFilter("!~linux.proc.pid.*")
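The bucketspan/bucketcount pair suggests a fixed-size ring of time buckets: with bucketspan=60000 and bucketcount=60, one hour of minute-granularity values is kept. The following is only a sketch of that idea; class and method names are illustrative and do not reflect the actual VHMS internals.

```java
// Hypothetical bucketed history: bucketcount slots of bucketspan ms each,
// reused cyclically so old buckets are overwritten as time advances.
public class BucketedHistory {
    private final long bucketSpanMs;
    private final double[] buckets;
    private final long[] bucketStart;   // start timestamp each slot currently holds

    public BucketedHistory(long bucketSpanMs, int bucketCount) {
        this.bucketSpanMs = bucketSpanMs;
        this.buckets = new double[bucketCount];
        this.bucketStart = new long[bucketCount];
    }

    public void record(long timestampMs, double value) {
        long start = timestampMs - (timestampMs % bucketSpanMs);
        int slot = (int) ((start / bucketSpanMs) % buckets.length);
        bucketStart[slot] = start;     // reclaim the slot for the current cycle
        buckets[slot] = value;         // keep the latest value per bucket
    }

    public Double get(long timestampMs) {
        long start = timestampMs - (timestampMs % bucketSpanMs);
        int slot = (int) ((start / bucketSpanMs) % buckets.length);
        return bucketStart[slot] == start ? buckets[slot] : null;
    }
}
```

A store of this shape makes memory usage constant per variable instance regardless of how long the process runs, which is why several VHMS with different granularities can coexist cheaply.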
  • 11. Why Kafka?
        ▪ Initially used tsink/rpcSource
          ▪ No ZooKeeper use for Flume (avoid flapping)
          ▪ Collector load balancing using DNS
        ▪ Worked fine for some time...
        ▪ But as metrics volume was increasing...
          ▪ DNS load balancing not ideal (herd effect when restarting collectors)
          ▪ Flume's push architecture got in the way
            Slowdowns not considered failures
            Had to add mechanisms for dropping metrics when congested
  • 12. Why Kafka?
        ▪ Kafka to the rescue! Source/sink coded in less than a day
        ▪ Acts as a buffer between metrics producers and consumers
        ▪ ZooKeeper based discovery and load balancing
        ▪ Easily scalable, just add brokers
        ▪ Performance has increased
          ▪ Producers now push their metrics in less than 2s
          ▪ VHMS/Archivers consume at their pace with no producer slowdown
            => 1.3M metrics in ~10s
        ▪ Ability to go back in time when restarting a VHMS
        ▪ Flume still valuable, notably for DFO (collect metrics during NP)
        ▪ Artimon [pull] Flume [push] Kafka [pull] Flume
  • 13. Analyzing Metrics
        ▪ Groovy library
          ▪ Talks to a VHMS to retrieve time series
          ▪ Manipulates time series, individually or in bulk
        ▪ Groovy scripts for monitoring
          ▪ Use the Artimon library
          ▪ IT Monitoring
          ▪ BAM (Business Activity Monitoring)
        ▪ Ability to generate alerts
          ▪ Each alert is an Artimon metric (archived for SLA compliance)
          ▪ Propagate to Nagios, Kafka in the works (CEP for alert manager)
  • 14. Analyzing Metrics
        ▪ Bulk time series manipulation
          ▪ Equivalence classes based on labels (same values, same class)
          ▪ Apply ops (+ - / * closure) to 2 variables based on equivalence classes
          import static com.arkea.artimon.groovy.LibArtimon.*
          vhmssrc = export['vhms.60']
          dfvars = fetch(vhmssrc, '~^linux.df.bytes.(free|capacity)$', [:], 60000, -30000)
          dfvars = select(sel_isfinite(), dfvars)
          free = select(dfvars, '=linux.df.bytes.free', [:])
          capacity = select(sel_gt(0), select(dfvars, '=linux.df.bytes.capacity', [:]))
          usage = sort(apply(op_div(), free, capacity, [], 'freespace'))
          used50 = select(sel_lt(0.50), usage)
          used75 = select(sel_lt(0.25), usage)
          used90 = select(sel_lt(0.10), usage)
          used95 = select(sel_lt(0.05), usage)
          println 'Volumes occupied > 50%: ' + used50.size()
          println 'Volumes occupied > 75%: ' + used75.size()
          println 'Volumes occupied > 90%: ' + used90.size()
          println 'Volumes occupied > 95%: ' + used95.size()
          println 'Total volumes: ' + usage.size()
          The same script can handle any number of volumes, dynamically
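The equivalence-class pairing that apply() relies on can be sketched simply: two series are in the same class when they carry the same label values, and the binary op is applied to each matched pair. Here series are modeled as a map from label set to latest value; the real library operates on full time series, so this is illustrative only.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.DoubleBinaryOperator;

// Sketch of equivalence-class apply(): match series by identical label
// maps and combine the matched values with a binary operator.
public class EquivalenceApply {
    public static Map<Map<String, String>, Double> apply(
            DoubleBinaryOperator op,
            Map<Map<String, String>, Double> left,
            Map<Map<String, String>, Double> right) {
        Map<Map<String, String>, Double> out = new HashMap<>();
        for (Map.Entry<Map<String, String>, Double> e : left.entrySet()) {
            Double r = right.get(e.getKey());   // same labels => same class
            if (r != null) {
                out.put(e.getKey(), op.applyAsDouble(e.getValue(), r));
            }
        }
        return out;
    }
}
```

Pairing free and capacity this way is what lets the df script above handle any number of volumes without naming them: each volume's labels define its own class.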
  • 15. Analyzing Metrics
        ▪ Map paradigm
          ▪ Apply a Groovy closure on n consecutive values of a time series
            map(closure, vars, nticks, name)
            Predefined map_delta(), map_rate(), map_{min,max,mean}()
            map(map_delta(), vars, 2, '+:delta')
        ▪ Reduce paradigm
          ▪ Apply a Groovy closure on equivalence classes
          ▪ Generate one time series for each equivalence class
            reduceby(closure, vars, bylabels, name, relabels)
            Predefined red_sum(), red_{min,max,mean,sd}()
            reduceby(red_mean(), temps, ['dc','rack'], '+:rackavg', [:])
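The map paradigm can be sketched as a sliding window: the closure sees nticks consecutive values and emits one value per window, so map_delta over 2 ticks is "last minus first". Names below are illustrative, not the library's API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.ToDoubleFunction;

// Sketch of map(closure, vars, nticks): slide a window of nticks
// consecutive values over a series and emit one value per window.
public class SeriesMap {
    public static List<Double> map(ToDoubleFunction<List<Double>> closure,
                                   List<Double> series, int nticks) {
        List<Double> out = new ArrayList<>();
        for (int i = 0; i + nticks <= series.size(); i++) {
            out.add(closure.applyAsDouble(series.subList(i, i + nticks)));
        }
        return out;
    }

    // Equivalent of map_delta(): difference between the window's ends
    public static ToDoubleFunction<List<Double>> mapDelta() {
        return w -> w.get(w.size() - 1) - w.get(0);
    }
}
```

Applied to a monotonically increasing counter, this delta turns absolute values into per-tick increments, which is the usual first step before computing a rate.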
  • 16. Analyzing Metrics
        ▪ A whole lot more
          getvars, selectbylabels, relabel, fetch, partition, fillprevious,
          find, top, fillnext, findlabels, bottom, fillvalue, display,
          outliers, map, makevar, dropOutliers, reduceby, nticks, resample,
          settype, timespan, normalize, triggerAlert, lasttick, standardize,
          clearAlert, values, sort, CDF, targets, scalar, PDF, getlabels,
          ntrim, Percentile, dump, timetrim, sparkline, select, apply, ...
  • 17. Third Party Metrics
        ▪ JMX Agent
        ▪ Expose any JMX metrics as Artimon metrics
          jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-artimon-0} 525762846
          jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-artimon-0} 511880426
          jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-artimon-0} 492037666
          jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-artimon-0} 436896839
          jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-artimon-0} 333034505
          jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-syslog-0} 163186980
          jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-syslog-0} 163047011
          jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-syslog-0} 162916713
          jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-syslog-0} 162704303
          jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-syslog-0} 162565421
          jmx.kafka.network.socketserverstats:numfetchrequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats} 8835417
          jmx.kafka.network.socketserverstats:numfetchrequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats} 8794654
          jmx.kafka.network.socketserverstats:numfetchrequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats} 8793525
          jmx.kafka.network.socketserverstats:numfetchrequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats} 8741181
          jmx.kafka.network.socketserverstats:numfetchrequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats} 8019699
          jmx.kafka.network.socketserverstats:numproducerequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats} 51999885
          jmx.kafka.network.socketserverstats:numproducerequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats} 51991203
          jmx.kafka.network.socketserverstats:numproducerequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats} 51986318
          jmx.kafka.network.socketserverstats:numproducerequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats} 51980976
          jmx.kafka.network.socketserverstats:numproducerequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats} 48008009
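The core of such a JMX agent is a read of an MBean attribute through the standard javax.management API, rendered as a name{labels} value line. The rendering below (metric name and label choice) is illustrative; only the MBeanServer calls are standard.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Sketch: read one MBean attribute via the platform MBeanServer and
// format it as an Artimon-style metric line. Naming is illustrative.
public class JmxScrape {
    public static String scrape(String objectName, String attribute) {
        try {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            ObjectName on = new ObjectName(objectName);
            Object value = server.getAttribute(on, attribute);
            return "jmx." + attribute.toLowerCase()
                    + "{jmx.domain=" + on.getDomain() + "} " + value;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

Looping this over the MBeans of a target JVM (a Kafka broker, say) is enough to produce lines like the ones shown above.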
  • 18. Third Party Metrics
        ▪ Flume artimonReader source
          artimonReader(context, periodicity, file0[, fileX])
        ▪ Periodically reads files containing a text representation of metrics
          [timestamp] name{labels} value
        ▪ Exposes those metrics via the standard mechanism
        ▪ Simply create scripts which write those files and add them to crontab
        ▪ Successfully used for NAS, Samba, MQSeries, SNMP, MySQL, ...
          1319718601000 mysql.bytes_received{db=mysql-roller} 296493399
          1319718601000 mysql.bytes_sent{db=mysql-roller} 3655368849
          1319718601000 mysql.com_admin_commands{db=mysql-roller} 673028
          1319718601000 mysql.com_alter_db{db=mysql-roller} 0
          1319718601000 mysql.com_alter_table{db=mysql-roller} 0
          1319718601000 mysql.com_analyze{db=mysql-roller} 0
          1319718601000 mysql.com_backup_table{db=mysql-roller} 0
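A script feeding artimonReader only has to emit one "[timestamp] name{labels} value" line per metric, as in the MySQL sample above. A minimal formatter for that line could look like this; sorting the labels is an assumption to get stable output, since the source does not specify an order.

```java
import java.util.Map;
import java.util.TreeMap;

// Sketch of a writer for the artimonReader text format:
// "timestamp name{label=value,...} value", one metric per line.
public class MetricLine {
    public static String format(long timestampMs, String name,
                                Map<String, String> labels, Object value) {
        StringBuilder sb = new StringBuilder();
        sb.append(timestampMs).append(' ').append(name).append('{');
        boolean first = true;
        for (Map.Entry<String, String> e : new TreeMap<>(labels).entrySet()) {
            if (!first) sb.append(',');
            sb.append(e.getKey()).append('=').append(e.getValue());
            first = false;
        }
        return sb.append("} ").append(value).toString();
    }
}
```

A cron job writing such lines for, say, SNMP counters is all it takes to pull a third-party system into the pipeline.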
  • 19. PostMortem Analysis
        ▪ Extract specific metrics from HDFS
          ▪ Simple Pig script
        ▪ Load extracted metrics into a local VHMS
        ▪ Interact with the VHMS using Groovy
          ▪ Existing scripts can be run directly if parameterized correctly
        ▪ Interesting use cases
          ▪ Did we respect our SLAs? Would the new SLAs be respected too?
          ▪ What happened pre/post incident?
          ▪ Would a modified alert condition have triggered an alert?
  • 20. Should we OpenSource this? http://www.arkea.com/ @herberts